Test Report: KVM_Linux_crio 18998

e8d3a518ce9b98b9e9fc9f8b62f75f3019a13e07:2024-07-04:35167

Failed tests (31/312)

Order  Failed test  Duration (s)
30 TestAddons/parallel/Ingress 155.76
32 TestAddons/parallel/MetricsServer 342.31
45 TestAddons/StoppedEnableDisable 154.3
164 TestMultiControlPlane/serial/StopSecondaryNode 141.72
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.83
166 TestMultiControlPlane/serial/RestartSecondaryNode 6.32
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 375.65
171 TestMultiControlPlane/serial/StopCluster 141.99
231 TestMultiNode/serial/RestartKeepsNodes 305.78
233 TestMultiNode/serial/StopMultiNode 144.9
240 TestPreload 331.83
248 TestKubernetesUpgrade 439.39
271 TestPause/serial/SecondStartNoReconfiguration 67.18
285 TestStartStop/group/old-k8s-version/serial/FirstStart 296.86
294 TestStartStop/group/no-preload/serial/Stop 138.97
297 TestStartStop/group/embed-certs/serial/Stop 138.99
300 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.11
301 TestStartStop/group/old-k8s-version/serial/DeployApp 0.5
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 106.87
303 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
309 TestStartStop/group/old-k8s-version/serial/SecondStart 736.28
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
312 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.39
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.46
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.53
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.58
316 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 451.44
317 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 479.97
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 273.59
319 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 117.03
TestAddons/parallel/Ingress (155.76s)
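The core failure in the log below: the in-VM curl to the ingress never got a response. The ssh'd curl (Host: nginx.example.com against http://127.0.0.1/) ran for about 2m11s and the remote command exited with status 28, which matches curl's operation-timed-out exit code, so minikube reported exit status 1 and the test gave up at addons_test.go:280. A minimal sketch for re-running the check by hand, assuming the addons-224553 profile from this run is still up and the ingress addon is still enabled:

  # Re-run the probe that timed out; --max-time bounds curl itself instead of waiting on ssh
  out/minikube-linux-amd64 -p addons-224553 ssh \
    "curl -s --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"

  # Confirm the ingress controller and the nginx backend created by the test are actually running
  kubectl --context addons-224553 -n ingress-nginx get pods,svc
  kubectl --context addons-224553 -n default get ingress,pods,svc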

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-224553 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-224553 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-224553 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [899aaab3-f1d8-46f2-ae17-b22a85faa208] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [899aaab3-f1d8-46f2-ae17-b22a85faa208] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004436282s
I0703 22:51:50.602277   16574 kapi.go:184] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-224553 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-224553 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.736864331s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-224553 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-224553 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.226
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-224553 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-224553 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-224553 addons disable ingress --alsologtostderr -v=1: (7.746835327s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-224553 -n addons-224553
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-224553 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-224553 logs -n 25: (1.372780531s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-240360 | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC |                     |
	|         | -p download-only-240360                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC | 03 Jul 24 22:47 UTC |
	| delete  | -p download-only-240360                                                                     | download-only-240360 | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC | 03 Jul 24 22:47 UTC |
	| delete  | -p download-only-666511                                                                     | download-only-666511 | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC | 03 Jul 24 22:47 UTC |
	| delete  | -p download-only-240360                                                                     | download-only-240360 | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC | 03 Jul 24 22:47 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-921043 | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC |                     |
	|         | binary-mirror-921043                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39145                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-921043                                                                     | binary-mirror-921043 | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC | 03 Jul 24 22:47 UTC |
	| addons  | disable dashboard -p                                                                        | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC |                     |
	|         | addons-224553                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC |                     |
	|         | addons-224553                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-224553 --wait=true                                                                | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC | 03 Jul 24 22:51 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:51 UTC | 03 Jul 24 22:51 UTC |
	|         | -p addons-224553                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-224553 ssh cat                                                                       | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:51 UTC | 03 Jul 24 22:51 UTC |
	|         | /opt/local-path-provisioner/pvc-3109b72f-6268-4949-88ee-62863ae03b8a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-224553 addons disable                                                                | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:51 UTC | 03 Jul 24 22:52 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-224553 addons disable                                                                | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:51 UTC | 03 Jul 24 22:51 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-224553 ip                                                                            | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:51 UTC | 03 Jul 24 22:51 UTC |
	| addons  | addons-224553 addons disable                                                                | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:51 UTC | 03 Jul 24 22:51 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:51 UTC | 03 Jul 24 22:51 UTC |
	|         | addons-224553                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:51 UTC | 03 Jul 24 22:51 UTC |
	|         | -p addons-224553                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-224553 ssh curl -s                                                                   | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:51 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:52 UTC | 03 Jul 24 22:52 UTC |
	|         | addons-224553                                                                               |                      |         |         |                     |                     |
	| addons  | addons-224553 addons                                                                        | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:53 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-224553 addons                                                                        | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:53 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-224553 ip                                                                            | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:54 UTC | 03 Jul 24 22:54 UTC |
	| addons  | addons-224553 addons disable                                                                | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:54 UTC | 03 Jul 24 22:54 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-224553 addons disable                                                                | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:54 UTC | 03 Jul 24 22:54 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 22:47:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 22:47:44.239130   17336 out.go:291] Setting OutFile to fd 1 ...
	I0703 22:47:44.239267   17336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:47:44.239277   17336 out.go:304] Setting ErrFile to fd 2...
	I0703 22:47:44.239284   17336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:47:44.239490   17336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 22:47:44.240113   17336 out.go:298] Setting JSON to false
	I0703 22:47:44.240903   17336 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1804,"bootTime":1720045060,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 22:47:44.240963   17336 start.go:139] virtualization: kvm guest
	I0703 22:47:44.243247   17336 out.go:177] * [addons-224553] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 22:47:44.244809   17336 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 22:47:44.244809   17336 notify.go:220] Checking for updates...
	I0703 22:47:44.246262   17336 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 22:47:44.247628   17336 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 22:47:44.249031   17336 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 22:47:44.250339   17336 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 22:47:44.251461   17336 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 22:47:44.252714   17336 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 22:47:44.285181   17336 out.go:177] * Using the kvm2 driver based on user configuration
	I0703 22:47:44.286416   17336 start.go:297] selected driver: kvm2
	I0703 22:47:44.286452   17336 start.go:901] validating driver "kvm2" against <nil>
	I0703 22:47:44.286469   17336 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 22:47:44.287156   17336 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 22:47:44.287225   17336 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 22:47:44.302745   17336 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 22:47:44.302792   17336 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0703 22:47:44.303119   17336 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 22:47:44.303157   17336 cni.go:84] Creating CNI manager for ""
	I0703 22:47:44.303171   17336 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 22:47:44.303184   17336 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0703 22:47:44.303248   17336 start.go:340] cluster config:
	{Name:addons-224553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-224553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 22:47:44.303378   17336 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 22:47:44.305200   17336 out.go:177] * Starting "addons-224553" primary control-plane node in "addons-224553" cluster
	I0703 22:47:44.306321   17336 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 22:47:44.306351   17336 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0703 22:47:44.306360   17336 cache.go:56] Caching tarball of preloaded images
	I0703 22:47:44.306437   17336 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 22:47:44.306448   17336 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 22:47:44.306780   17336 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/config.json ...
	I0703 22:47:44.306805   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/config.json: {Name:mkffec6b993c5054368f9460bbad4774d4ef1599 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:47:44.306928   17336 start.go:360] acquireMachinesLock for addons-224553: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 22:47:44.306969   17336 start.go:364] duration metric: took 29.595µs to acquireMachinesLock for "addons-224553"
	I0703 22:47:44.306985   17336 start.go:93] Provisioning new machine with config: &{Name:addons-224553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:addons-224553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 22:47:44.307039   17336 start.go:125] createHost starting for "" (driver="kvm2")
	I0703 22:47:44.308656   17336 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0703 22:47:44.308808   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:47:44.308862   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:47:44.323825   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39543
	I0703 22:47:44.324290   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:47:44.324903   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:47:44.324927   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:47:44.325353   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:47:44.325608   17336 main.go:141] libmachine: (addons-224553) Calling .GetMachineName
	I0703 22:47:44.325809   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:47:44.326002   17336 start.go:159] libmachine.API.Create for "addons-224553" (driver="kvm2")
	I0703 22:47:44.326030   17336 client.go:168] LocalClient.Create starting
	I0703 22:47:44.326068   17336 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem
	I0703 22:47:44.490412   17336 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem
	I0703 22:47:44.658147   17336 main.go:141] libmachine: Running pre-create checks...
	I0703 22:47:44.658169   17336 main.go:141] libmachine: (addons-224553) Calling .PreCreateCheck
	I0703 22:47:44.660381   17336 main.go:141] libmachine: (addons-224553) Calling .GetConfigRaw
	I0703 22:47:44.660915   17336 main.go:141] libmachine: Creating machine...
	I0703 22:47:44.660932   17336 main.go:141] libmachine: (addons-224553) Calling .Create
	I0703 22:47:44.661114   17336 main.go:141] libmachine: (addons-224553) Creating KVM machine...
	I0703 22:47:44.662189   17336 main.go:141] libmachine: (addons-224553) DBG | found existing default KVM network
	I0703 22:47:44.662890   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:44.662755   17358 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0703 22:47:44.662913   17336 main.go:141] libmachine: (addons-224553) DBG | created network xml: 
	I0703 22:47:44.662924   17336 main.go:141] libmachine: (addons-224553) DBG | <network>
	I0703 22:47:44.662933   17336 main.go:141] libmachine: (addons-224553) DBG |   <name>mk-addons-224553</name>
	I0703 22:47:44.662939   17336 main.go:141] libmachine: (addons-224553) DBG |   <dns enable='no'/>
	I0703 22:47:44.662948   17336 main.go:141] libmachine: (addons-224553) DBG |   
	I0703 22:47:44.662959   17336 main.go:141] libmachine: (addons-224553) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0703 22:47:44.662971   17336 main.go:141] libmachine: (addons-224553) DBG |     <dhcp>
	I0703 22:47:44.662987   17336 main.go:141] libmachine: (addons-224553) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0703 22:47:44.663001   17336 main.go:141] libmachine: (addons-224553) DBG |     </dhcp>
	I0703 22:47:44.663051   17336 main.go:141] libmachine: (addons-224553) DBG |   </ip>
	I0703 22:47:44.663079   17336 main.go:141] libmachine: (addons-224553) DBG |   
	I0703 22:47:44.663092   17336 main.go:141] libmachine: (addons-224553) DBG | </network>
	I0703 22:47:44.663105   17336 main.go:141] libmachine: (addons-224553) DBG | 
	I0703 22:47:44.668342   17336 main.go:141] libmachine: (addons-224553) DBG | trying to create private KVM network mk-addons-224553 192.168.39.0/24...
	I0703 22:47:44.734801   17336 main.go:141] libmachine: (addons-224553) DBG | private KVM network mk-addons-224553 192.168.39.0/24 created
	I0703 22:47:44.734828   17336 main.go:141] libmachine: (addons-224553) Setting up store path in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553 ...
	I0703 22:47:44.734850   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:44.734793   17358 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 22:47:44.734872   17336 main.go:141] libmachine: (addons-224553) Building disk image from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0703 22:47:44.734968   17336 main.go:141] libmachine: (addons-224553) Downloading /home/jenkins/minikube-integration/18998-9396/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso...
	I0703 22:47:44.968526   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:44.968399   17358 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa...
	I0703 22:47:45.084020   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:45.083868   17358 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/addons-224553.rawdisk...
	I0703 22:47:45.084052   17336 main.go:141] libmachine: (addons-224553) DBG | Writing magic tar header
	I0703 22:47:45.084095   17336 main.go:141] libmachine: (addons-224553) DBG | Writing SSH key tar header
	I0703 22:47:45.084116   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:45.083988   17358 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553 ...
	I0703 22:47:45.084136   17336 main.go:141] libmachine: (addons-224553) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553 (perms=drwx------)
	I0703 22:47:45.084158   17336 main.go:141] libmachine: (addons-224553) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines (perms=drwxr-xr-x)
	I0703 22:47:45.084169   17336 main.go:141] libmachine: (addons-224553) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube (perms=drwxr-xr-x)
	I0703 22:47:45.084182   17336 main.go:141] libmachine: (addons-224553) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396 (perms=drwxrwxr-x)
	I0703 22:47:45.084193   17336 main.go:141] libmachine: (addons-224553) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0703 22:47:45.084208   17336 main.go:141] libmachine: (addons-224553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553
	I0703 22:47:45.084226   17336 main.go:141] libmachine: (addons-224553) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0703 22:47:45.084244   17336 main.go:141] libmachine: (addons-224553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines
	I0703 22:47:45.084252   17336 main.go:141] libmachine: (addons-224553) Creating domain...
	I0703 22:47:45.084268   17336 main.go:141] libmachine: (addons-224553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 22:47:45.084281   17336 main.go:141] libmachine: (addons-224553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396
	I0703 22:47:45.084293   17336 main.go:141] libmachine: (addons-224553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0703 22:47:45.084303   17336 main.go:141] libmachine: (addons-224553) DBG | Checking permissions on dir: /home/jenkins
	I0703 22:47:45.084315   17336 main.go:141] libmachine: (addons-224553) DBG | Checking permissions on dir: /home
	I0703 22:47:45.084328   17336 main.go:141] libmachine: (addons-224553) DBG | Skipping /home - not owner
	I0703 22:47:45.085333   17336 main.go:141] libmachine: (addons-224553) define libvirt domain using xml: 
	I0703 22:47:45.085355   17336 main.go:141] libmachine: (addons-224553) <domain type='kvm'>
	I0703 22:47:45.085362   17336 main.go:141] libmachine: (addons-224553)   <name>addons-224553</name>
	I0703 22:47:45.085367   17336 main.go:141] libmachine: (addons-224553)   <memory unit='MiB'>4000</memory>
	I0703 22:47:45.085372   17336 main.go:141] libmachine: (addons-224553)   <vcpu>2</vcpu>
	I0703 22:47:45.085376   17336 main.go:141] libmachine: (addons-224553)   <features>
	I0703 22:47:45.085381   17336 main.go:141] libmachine: (addons-224553)     <acpi/>
	I0703 22:47:45.085385   17336 main.go:141] libmachine: (addons-224553)     <apic/>
	I0703 22:47:45.085390   17336 main.go:141] libmachine: (addons-224553)     <pae/>
	I0703 22:47:45.085397   17336 main.go:141] libmachine: (addons-224553)     
	I0703 22:47:45.085402   17336 main.go:141] libmachine: (addons-224553)   </features>
	I0703 22:47:45.085417   17336 main.go:141] libmachine: (addons-224553)   <cpu mode='host-passthrough'>
	I0703 22:47:45.085432   17336 main.go:141] libmachine: (addons-224553)   
	I0703 22:47:45.085451   17336 main.go:141] libmachine: (addons-224553)   </cpu>
	I0703 22:47:45.085459   17336 main.go:141] libmachine: (addons-224553)   <os>
	I0703 22:47:45.085464   17336 main.go:141] libmachine: (addons-224553)     <type>hvm</type>
	I0703 22:47:45.085501   17336 main.go:141] libmachine: (addons-224553)     <boot dev='cdrom'/>
	I0703 22:47:45.085517   17336 main.go:141] libmachine: (addons-224553)     <boot dev='hd'/>
	I0703 22:47:45.085528   17336 main.go:141] libmachine: (addons-224553)     <bootmenu enable='no'/>
	I0703 22:47:45.085539   17336 main.go:141] libmachine: (addons-224553)   </os>
	I0703 22:47:45.085549   17336 main.go:141] libmachine: (addons-224553)   <devices>
	I0703 22:47:45.085559   17336 main.go:141] libmachine: (addons-224553)     <disk type='file' device='cdrom'>
	I0703 22:47:45.085574   17336 main.go:141] libmachine: (addons-224553)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/boot2docker.iso'/>
	I0703 22:47:45.085594   17336 main.go:141] libmachine: (addons-224553)       <target dev='hdc' bus='scsi'/>
	I0703 22:47:45.085605   17336 main.go:141] libmachine: (addons-224553)       <readonly/>
	I0703 22:47:45.085616   17336 main.go:141] libmachine: (addons-224553)     </disk>
	I0703 22:47:45.085629   17336 main.go:141] libmachine: (addons-224553)     <disk type='file' device='disk'>
	I0703 22:47:45.085643   17336 main.go:141] libmachine: (addons-224553)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0703 22:47:45.085660   17336 main.go:141] libmachine: (addons-224553)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/addons-224553.rawdisk'/>
	I0703 22:47:45.085671   17336 main.go:141] libmachine: (addons-224553)       <target dev='hda' bus='virtio'/>
	I0703 22:47:45.085683   17336 main.go:141] libmachine: (addons-224553)     </disk>
	I0703 22:47:45.085694   17336 main.go:141] libmachine: (addons-224553)     <interface type='network'>
	I0703 22:47:45.085708   17336 main.go:141] libmachine: (addons-224553)       <source network='mk-addons-224553'/>
	I0703 22:47:45.085723   17336 main.go:141] libmachine: (addons-224553)       <model type='virtio'/>
	I0703 22:47:45.085735   17336 main.go:141] libmachine: (addons-224553)     </interface>
	I0703 22:47:45.085749   17336 main.go:141] libmachine: (addons-224553)     <interface type='network'>
	I0703 22:47:45.085762   17336 main.go:141] libmachine: (addons-224553)       <source network='default'/>
	I0703 22:47:45.085774   17336 main.go:141] libmachine: (addons-224553)       <model type='virtio'/>
	I0703 22:47:45.085802   17336 main.go:141] libmachine: (addons-224553)     </interface>
	I0703 22:47:45.085822   17336 main.go:141] libmachine: (addons-224553)     <serial type='pty'>
	I0703 22:47:45.085829   17336 main.go:141] libmachine: (addons-224553)       <target port='0'/>
	I0703 22:47:45.085837   17336 main.go:141] libmachine: (addons-224553)     </serial>
	I0703 22:47:45.085845   17336 main.go:141] libmachine: (addons-224553)     <console type='pty'>
	I0703 22:47:45.085858   17336 main.go:141] libmachine: (addons-224553)       <target type='serial' port='0'/>
	I0703 22:47:45.085866   17336 main.go:141] libmachine: (addons-224553)     </console>
	I0703 22:47:45.085871   17336 main.go:141] libmachine: (addons-224553)     <rng model='virtio'>
	I0703 22:47:45.085880   17336 main.go:141] libmachine: (addons-224553)       <backend model='random'>/dev/random</backend>
	I0703 22:47:45.085887   17336 main.go:141] libmachine: (addons-224553)     </rng>
	I0703 22:47:45.085892   17336 main.go:141] libmachine: (addons-224553)     
	I0703 22:47:45.085898   17336 main.go:141] libmachine: (addons-224553)     
	I0703 22:47:45.085903   17336 main.go:141] libmachine: (addons-224553)   </devices>
	I0703 22:47:45.085909   17336 main.go:141] libmachine: (addons-224553) </domain>
	I0703 22:47:45.085917   17336 main.go:141] libmachine: (addons-224553) 
	I0703 22:47:45.091970   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:76:64:42 in network default
	I0703 22:47:45.092495   17336 main.go:141] libmachine: (addons-224553) Ensuring networks are active...
	I0703 22:47:45.092511   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:45.093263   17336 main.go:141] libmachine: (addons-224553) Ensuring network default is active
	I0703 22:47:45.093560   17336 main.go:141] libmachine: (addons-224553) Ensuring network mk-addons-224553 is active
	I0703 22:47:45.094003   17336 main.go:141] libmachine: (addons-224553) Getting domain xml...
	I0703 22:47:45.094686   17336 main.go:141] libmachine: (addons-224553) Creating domain...
	I0703 22:47:46.479393   17336 main.go:141] libmachine: (addons-224553) Waiting to get IP...
	I0703 22:47:46.480260   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:46.480769   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:46.480800   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:46.480681   17358 retry.go:31] will retry after 205.766911ms: waiting for machine to come up
	I0703 22:47:46.688327   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:46.688780   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:46.688802   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:46.688738   17358 retry.go:31] will retry after 315.450273ms: waiting for machine to come up
	I0703 22:47:47.006469   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:47.006855   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:47.006889   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:47.006809   17358 retry.go:31] will retry after 409.3055ms: waiting for machine to come up
	I0703 22:47:47.417165   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:47.417574   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:47.417603   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:47.417525   17358 retry.go:31] will retry after 508.405078ms: waiting for machine to come up
	I0703 22:47:47.927118   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:47.927513   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:47.927548   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:47.927477   17358 retry.go:31] will retry after 608.324614ms: waiting for machine to come up
	I0703 22:47:48.537296   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:48.537727   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:48.537749   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:48.537698   17358 retry.go:31] will retry after 719.08655ms: waiting for machine to come up
	I0703 22:47:49.258560   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:49.259075   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:49.259098   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:49.259040   17358 retry.go:31] will retry after 983.818223ms: waiting for machine to come up
	I0703 22:47:50.244600   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:50.244993   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:50.245017   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:50.244936   17358 retry.go:31] will retry after 1.342762679s: waiting for machine to come up
	I0703 22:47:51.589590   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:51.590049   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:51.590077   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:51.589994   17358 retry.go:31] will retry after 1.251250163s: waiting for machine to come up
	I0703 22:47:52.842419   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:52.842746   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:52.842771   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:52.842707   17358 retry.go:31] will retry after 1.810121664s: waiting for machine to come up
	I0703 22:47:54.654863   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:54.655376   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:54.655403   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:54.655343   17358 retry.go:31] will retry after 2.106483987s: waiting for machine to come up
	I0703 22:47:56.765766   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:56.766201   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:56.766230   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:56.766170   17358 retry.go:31] will retry after 2.398145191s: waiting for machine to come up
	I0703 22:47:59.167619   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:59.168038   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:59.168129   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:59.168021   17358 retry.go:31] will retry after 3.976178413s: waiting for machine to come up
	I0703 22:48:03.148808   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:03.149310   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:48:03.149344   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:48:03.149245   17358 retry.go:31] will retry after 3.742210847s: waiting for machine to come up
	I0703 22:48:06.894985   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:06.895436   17336 main.go:141] libmachine: (addons-224553) Found IP for machine: 192.168.39.226
	I0703 22:48:06.895462   17336 main.go:141] libmachine: (addons-224553) Reserving static IP address...
	I0703 22:48:06.895479   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has current primary IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:06.895908   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find host DHCP lease matching {name: "addons-224553", mac: "52:54:00:d2:17:a3", ip: "192.168.39.226"} in network mk-addons-224553
	I0703 22:48:06.974820   17336 main.go:141] libmachine: (addons-224553) DBG | Getting to WaitForSSH function...
	I0703 22:48:06.974847   17336 main.go:141] libmachine: (addons-224553) Reserved static IP address: 192.168.39.226
	I0703 22:48:06.974860   17336 main.go:141] libmachine: (addons-224553) Waiting for SSH to be available...
	I0703 22:48:06.977405   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:06.977734   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:06.977782   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:06.977836   17336 main.go:141] libmachine: (addons-224553) DBG | Using SSH client type: external
	I0703 22:48:06.977865   17336 main.go:141] libmachine: (addons-224553) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa (-rw-------)
	I0703 22:48:06.977914   17336 main.go:141] libmachine: (addons-224553) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 22:48:06.977934   17336 main.go:141] libmachine: (addons-224553) DBG | About to run SSH command:
	I0703 22:48:06.977947   17336 main.go:141] libmachine: (addons-224553) DBG | exit 0
	I0703 22:48:07.116257   17336 main.go:141] libmachine: (addons-224553) DBG | SSH cmd err, output: <nil>: 
	I0703 22:48:07.116532   17336 main.go:141] libmachine: (addons-224553) KVM machine creation complete!
	I0703 22:48:07.116801   17336 main.go:141] libmachine: (addons-224553) Calling .GetConfigRaw
	I0703 22:48:07.117289   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:07.117501   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:07.117670   17336 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0703 22:48:07.117682   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:07.118847   17336 main.go:141] libmachine: Detecting operating system of created instance...
	I0703 22:48:07.118860   17336 main.go:141] libmachine: Waiting for SSH to be available...
	I0703 22:48:07.118865   17336 main.go:141] libmachine: Getting to WaitForSSH function...
	I0703 22:48:07.118870   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:07.121123   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.121520   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:07.121562   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.121694   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:07.121895   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:07.122050   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:07.122183   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:07.122348   17336 main.go:141] libmachine: Using SSH client type: native
	I0703 22:48:07.122595   17336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0703 22:48:07.122608   17336 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0703 22:48:07.235346   17336 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 22:48:07.235373   17336 main.go:141] libmachine: Detecting the provisioner...
	I0703 22:48:07.235385   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:07.238253   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.238712   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:07.238735   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.238940   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:07.239141   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:07.239323   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:07.239497   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:07.239679   17336 main.go:141] libmachine: Using SSH client type: native
	I0703 22:48:07.239901   17336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0703 22:48:07.239915   17336 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0703 22:48:07.352770   17336 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0703 22:48:07.352860   17336 main.go:141] libmachine: found compatible host: buildroot
	I0703 22:48:07.352872   17336 main.go:141] libmachine: Provisioning with buildroot...
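For reference, the "detecting the provisioner" step above amounts to reading /etc/os-release over the provisioning SSH session and matching the ID field; a minimal sketch using the key path and guest IP shown in the log (illustrative only, not minikube's actual code path):

    # Read /etc/os-release on the guest and match the distribution ID.
    KEY=/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa
    ID=$(ssh -o StrictHostKeyChecking=no -i "$KEY" docker@192.168.39.226 \
          'grep ^ID= /etc/os-release | cut -d= -f2')
    [ "$ID" = "buildroot" ] && echo "found compatible host: buildroot"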
	I0703 22:48:07.352883   17336 main.go:141] libmachine: (addons-224553) Calling .GetMachineName
	I0703 22:48:07.353161   17336 buildroot.go:166] provisioning hostname "addons-224553"
	I0703 22:48:07.353189   17336 main.go:141] libmachine: (addons-224553) Calling .GetMachineName
	I0703 22:48:07.353396   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:07.356110   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.356467   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:07.356503   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.356561   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:07.356745   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:07.356882   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:07.357042   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:07.357276   17336 main.go:141] libmachine: Using SSH client type: native
	I0703 22:48:07.357475   17336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0703 22:48:07.357488   17336 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-224553 && echo "addons-224553" | sudo tee /etc/hostname
	I0703 22:48:07.483466   17336 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-224553
	
	I0703 22:48:07.483502   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:07.486162   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.486530   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:07.486556   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.486720   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:07.486912   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:07.487064   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:07.487152   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:07.487265   17336 main.go:141] libmachine: Using SSH client type: native
	I0703 22:48:07.487421   17336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0703 22:48:07.487436   17336 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-224553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-224553/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-224553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 22:48:07.609924   17336 main.go:141] libmachine: SSH cmd err, output: <nil>: 
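A quick way to confirm that the hostname block above took effect, using the same SSH key and guest IP from the log (illustrative check only):

    # Verify the hostname and the 127.0.1.1 alias written by the provisioning step.
    KEY=/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa
    SSH="ssh -o StrictHostKeyChecking=no -i $KEY docker@192.168.39.226"
    $SSH hostname                        # expected: addons-224553
    $SSH grep addons-224553 /etc/hosts   # expected: 127.0.1.1 addons-224553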
	I0703 22:48:07.609956   17336 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0703 22:48:07.610012   17336 buildroot.go:174] setting up certificates
	I0703 22:48:07.610034   17336 provision.go:84] configureAuth start
	I0703 22:48:07.610052   17336 main.go:141] libmachine: (addons-224553) Calling .GetMachineName
	I0703 22:48:07.610376   17336 main.go:141] libmachine: (addons-224553) Calling .GetIP
	I0703 22:48:07.613087   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.613410   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:07.613439   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.613616   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:07.615445   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.615817   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:07.615838   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.616028   17336 provision.go:143] copyHostCerts
	I0703 22:48:07.616094   17336 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0703 22:48:07.616206   17336 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0703 22:48:07.616268   17336 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0703 22:48:07.616313   17336 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.addons-224553 san=[127.0.0.1 192.168.39.226 addons-224553 localhost minikube]
	I0703 22:48:07.900637   17336 provision.go:177] copyRemoteCerts
	I0703 22:48:07.900692   17336 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 22:48:07.900712   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:07.903599   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.903948   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:07.903979   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.904116   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:07.904332   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:07.904497   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:07.904649   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:07.990895   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0703 22:48:08.015932   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0703 22:48:08.041859   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0703 22:48:08.066848   17336 provision.go:87] duration metric: took 456.795917ms to configureAuth
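configureAuth pushes the docker-machine style TLS material (ca.pem, server.pem, server-key.pem) to /etc/docker on the guest; an illustrative sanity check of the server certificate, whose SANs should cover the names listed in the generation step above:

    # Inspect the server certificate copied to the guest (openssl check is an editor's illustration).
    KEY=/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa
    ssh -i "$KEY" docker@192.168.39.226 \
        sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
    # expected SANs: 127.0.0.1, 192.168.39.226, addons-224553, localhost, minikube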
	I0703 22:48:08.066878   17336 buildroot.go:189] setting minikube options for container-runtime
	I0703 22:48:08.067066   17336 config.go:182] Loaded profile config "addons-224553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 22:48:08.067155   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:08.069855   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.070188   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:08.070221   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.070344   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:08.070539   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:08.070692   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:08.070828   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:08.070960   17336 main.go:141] libmachine: Using SSH client type: native
	I0703 22:48:08.071116   17336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0703 22:48:08.071129   17336 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0703 22:48:08.349251   17336 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0703 22:48:08.349285   17336 main.go:141] libmachine: Checking connection to Docker...
	I0703 22:48:08.349293   17336 main.go:141] libmachine: (addons-224553) Calling .GetURL
	I0703 22:48:08.350789   17336 main.go:141] libmachine: (addons-224553) DBG | Using libvirt version 6000000
	I0703 22:48:08.352930   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.353254   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:08.353283   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.353481   17336 main.go:141] libmachine: Docker is up and running!
	I0703 22:48:08.353502   17336 main.go:141] libmachine: Reticulating splines...
	I0703 22:48:08.353510   17336 client.go:171] duration metric: took 24.027472431s to LocalClient.Create
	I0703 22:48:08.353533   17336 start.go:167] duration metric: took 24.027532716s to libmachine.API.Create "addons-224553"
	I0703 22:48:08.353550   17336 start.go:293] postStartSetup for "addons-224553" (driver="kvm2")
	I0703 22:48:08.353559   17336 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 22:48:08.353576   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:08.353809   17336 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 22:48:08.353835   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:08.356217   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.356541   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:08.356568   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.356734   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:08.356906   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:08.357062   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:08.357213   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:08.443397   17336 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 22:48:08.447973   17336 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 22:48:08.448006   17336 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0703 22:48:08.448101   17336 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0703 22:48:08.448132   17336 start.go:296] duration metric: took 94.575779ms for postStartSetup
	I0703 22:48:08.448168   17336 main.go:141] libmachine: (addons-224553) Calling .GetConfigRaw
	I0703 22:48:08.448683   17336 main.go:141] libmachine: (addons-224553) Calling .GetIP
	I0703 22:48:08.451317   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.451627   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:08.451653   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.451920   17336 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/config.json ...
	I0703 22:48:08.452148   17336 start.go:128] duration metric: took 24.145099273s to createHost
	I0703 22:48:08.452171   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:08.454411   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.454673   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:08.454706   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.454864   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:08.455014   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:08.455304   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:08.455504   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:08.455654   17336 main.go:141] libmachine: Using SSH client type: native
	I0703 22:48:08.455825   17336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0703 22:48:08.455838   17336 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0703 22:48:08.569131   17336 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720046888.544149608
	
	I0703 22:48:08.569156   17336 fix.go:216] guest clock: 1720046888.544149608
	I0703 22:48:08.569164   17336 fix.go:229] Guest: 2024-07-03 22:48:08.544149608 +0000 UTC Remote: 2024-07-03 22:48:08.452160548 +0000 UTC m=+24.245255484 (delta=91.98906ms)
	I0703 22:48:08.569200   17336 fix.go:200] guest clock delta is within tolerance: 91.98906ms
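The guest/host clock comparison above can be reproduced by hand; a minimal sketch, assuming the same SSH target (using bc for the subtraction is an assumption about the host toolchain):

    # Compare guest and host clocks the same way the fix.go check above does.
    KEY=/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa
    HOST_TS=$(date +%s.%N)
    GUEST_TS=$(ssh -i "$KEY" docker@192.168.39.226 date +%s.%N)
    echo "delta: $(echo "$GUEST_TS - $HOST_TS" | bc)s"   # here ~0.092s, well within tolerance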
	I0703 22:48:08.569205   17336 start.go:83] releasing machines lock for "addons-224553", held for 24.262227321s
	I0703 22:48:08.569237   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:08.569551   17336 main.go:141] libmachine: (addons-224553) Calling .GetIP
	I0703 22:48:08.572518   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.572882   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:08.572902   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.573136   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:08.573607   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:08.573770   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:08.573874   17336 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 22:48:08.573916   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:08.574187   17336 ssh_runner.go:195] Run: cat /version.json
	I0703 22:48:08.574210   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:08.576446   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.576756   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:08.576785   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.576807   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.577052   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:08.577244   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:08.577294   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:08.577319   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.577432   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:08.577580   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:08.577597   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:08.577717   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:08.577832   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:08.577943   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:08.657409   17336 ssh_runner.go:195] Run: systemctl --version
	I0703 22:48:08.688879   17336 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0703 22:48:08.848798   17336 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0703 22:48:08.855035   17336 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 22:48:08.855097   17336 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 22:48:08.872890   17336 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0703 22:48:08.872916   17336 start.go:494] detecting cgroup driver to use...
	I0703 22:48:08.872990   17336 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 22:48:08.890622   17336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 22:48:08.905426   17336 docker.go:217] disabling cri-docker service (if available) ...
	I0703 22:48:08.905498   17336 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0703 22:48:08.920142   17336 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0703 22:48:08.934911   17336 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0703 22:48:09.058952   17336 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0703 22:48:09.205853   17336 docker.go:233] disabling docker service ...
	I0703 22:48:09.205984   17336 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0703 22:48:09.221246   17336 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0703 22:48:09.235716   17336 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0703 22:48:09.377464   17336 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0703 22:48:09.506073   17336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0703 22:48:09.520805   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 22:48:09.540410   17336 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0703 22:48:09.540481   17336 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 22:48:09.551473   17336 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0703 22:48:09.551536   17336 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 22:48:09.562641   17336 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 22:48:09.574009   17336 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 22:48:09.584830   17336 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 22:48:09.596084   17336 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 22:48:09.606890   17336 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 22:48:09.625183   17336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 22:48:09.635698   17336 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 22:48:09.645165   17336 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0703 22:48:09.645223   17336 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0703 22:48:09.657629   17336 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0703 22:48:09.668281   17336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 22:48:09.781027   17336 ssh_runner.go:195] Run: sudo systemctl restart crio
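The cri-o changes applied above boil down to a handful of edits in /etc/crio/crio.conf.d/02-crio.conf followed by a restart; the condensed sketch below mirrors the sed commands from the log (run as root on the guest, illustrative only):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"   # pause image
    sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"              # cgroup driver
    sed -i '/conmon_cgroup = .*/d' "$CONF"
    sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # allow unprivileged low ports inside pods
    grep -q '^ *default_sysctls' "$CONF" || sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    modprobe br_netfilter && echo 1 > /proc/sys/net/ipv4/ip_forward
    systemctl daemon-reload && systemctl restart crio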
	I0703 22:48:09.919472   17336 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0703 22:48:09.919562   17336 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0703 22:48:09.924453   17336 start.go:562] Will wait 60s for crictl version
	I0703 22:48:09.924519   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:48:09.928576   17336 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 22:48:09.977626   17336 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0703 22:48:09.977751   17336 ssh_runner.go:195] Run: crio --version
	I0703 22:48:10.011910   17336 ssh_runner.go:195] Run: crio --version
	I0703 22:48:10.047630   17336 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0703 22:48:10.048920   17336 main.go:141] libmachine: (addons-224553) Calling .GetIP
	I0703 22:48:10.051376   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:10.051751   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:10.051775   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:10.052026   17336 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0703 22:48:10.056721   17336 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 22:48:10.070759   17336 kubeadm.go:877] updating cluster {Name:addons-224553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:addons-224553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0703 22:48:10.070895   17336 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 22:48:10.070945   17336 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 22:48:10.106524   17336 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0703 22:48:10.106593   17336 ssh_runner.go:195] Run: which lz4
	I0703 22:48:10.110836   17336 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0703 22:48:10.115233   17336 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0703 22:48:10.115274   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0703 22:48:11.509161   17336 crio.go:462] duration metric: took 1.398368216s to copy over tarball
	I0703 22:48:11.509254   17336 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0703 22:48:13.866860   17336 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.357576208s)
	I0703 22:48:13.866893   17336 crio.go:469] duration metric: took 2.35770323s to extract the tarball
	I0703 22:48:13.866902   17336 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0703 22:48:13.904432   17336 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 22:48:13.946723   17336 crio.go:514] all images are preloaded for cri-o runtime.
	I0703 22:48:13.946742   17336 cache_images.go:84] Images are preloaded, skipping loading
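With the preload tarball extracted into /var, cri-o's image store already contains the v1.30.2 control-plane images, which is what the second crictl check above confirms; an illustrative spot check on the guest:

    # List the preloaded control-plane images (image names from the log; the grep is an illustration).
    sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)'
    # e.g. registry.k8s.io/kube-apiserver   v1.30.2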
	I0703 22:48:13.946751   17336 kubeadm.go:928] updating node { 192.168.39.226 8443 v1.30.2 crio true true} ...
	I0703 22:48:13.946874   17336 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-224553 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-224553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0703 22:48:13.946936   17336 ssh_runner.go:195] Run: crio config
	I0703 22:48:13.992450   17336 cni.go:84] Creating CNI manager for ""
	I0703 22:48:13.992468   17336 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 22:48:13.992477   17336 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0703 22:48:13.992497   17336 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.226 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-224553 NodeName:addons-224553 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0703 22:48:13.993083   17336 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.226
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-224553"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0703 22:48:13.993138   17336 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0703 22:48:14.003290   17336 binaries.go:44] Found k8s binaries, skipping transfer
	I0703 22:48:14.003358   17336 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0703 22:48:14.013110   17336 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0703 22:48:14.030435   17336 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 22:48:14.047726   17336 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
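Once the rendered config lands in /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked before init; kubeadm ships a `config validate` subcommand from v1.26 onward, so this is an editor's suggestion rather than something the log runs:

    # Validate the generated kubeadm config against the v1.30.2 binary staged on the guest.
    sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new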
	I0703 22:48:14.064914   17336 ssh_runner.go:195] Run: grep 192.168.39.226	control-plane.minikube.internal$ /etc/hosts
	I0703 22:48:14.069291   17336 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 22:48:14.082770   17336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 22:48:14.203545   17336 ssh_runner.go:195] Run: sudo systemctl start kubelet
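After the daemon-reload and `systemctl start kubelet` above, the kubelet runs with the drop-in installed a few lines earlier; an illustrative check on the guest:

    sudo systemctl is-active kubelet
    sudo systemctl cat kubelet | grep ExecStart     # should show --node-ip=192.168.39.226 and the bootstrap kubeconfig flags
    sudo journalctl -u kubelet --no-pager -n 20     # recent kubelet log lines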
	I0703 22:48:14.221435   17336 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553 for IP: 192.168.39.226
	I0703 22:48:14.221461   17336 certs.go:194] generating shared ca certs ...
	I0703 22:48:14.221478   17336 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.221620   17336 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0703 22:48:14.367891   17336 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt ...
	I0703 22:48:14.367920   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt: {Name:mk44cd94bcae977347c648f7581bc4eb639e6e69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.368172   17336 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key ...
	I0703 22:48:14.368204   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key: {Name:mk588f0e29902079d3d139aaf98632aab9ca8ac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.368322   17336 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0703 22:48:14.430890   17336 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt ...
	I0703 22:48:14.430920   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt: {Name:mk7c70f9ef666e5494d5b280d30b8c3aa9020f31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.431089   17336 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key ...
	I0703 22:48:14.431102   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key: {Name:mk5eedc42a2d3889f265a1577b7d508df68e95e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.431200   17336 certs.go:256] generating profile certs ...
	I0703 22:48:14.431254   17336 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.key
	I0703 22:48:14.431269   17336 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt with IP's: []
	I0703 22:48:14.658106   17336 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt ...
	I0703 22:48:14.658138   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: {Name:mk760625e26a9f70ddadef95c5849449332ef189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.658322   17336 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.key ...
	I0703 22:48:14.658336   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.key: {Name:mk687ec39961b5eecf09a22b78c6b0a026328208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.658441   17336 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.key.074e4785
	I0703 22:48:14.658463   17336 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.crt.074e4785 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.226]
	I0703 22:48:14.748918   17336 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.crt.074e4785 ...
	I0703 22:48:14.748950   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.crt.074e4785: {Name:mkb23c6c675fae3f20c7c032aeade4dff7e80d6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.749115   17336 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.key.074e4785 ...
	I0703 22:48:14.749130   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.key.074e4785: {Name:mk9f6660627d2c7616e692c4373a94c3a5262e14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.749216   17336 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.crt.074e4785 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.crt
	I0703 22:48:14.749293   17336 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.key.074e4785 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.key
	I0703 22:48:14.749346   17336 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/proxy-client.key
	I0703 22:48:14.749389   17336 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/proxy-client.crt with IP's: []
	I0703 22:48:14.872491   17336 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/proxy-client.crt ...
	I0703 22:48:14.872519   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/proxy-client.crt: {Name:mk6bef5e0380657d2d4b606024ec41f1a0380b69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.872672   17336 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/proxy-client.key ...
	I0703 22:48:14.872682   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/proxy-client.key: {Name:mk21e3c14b9e64aa3d0c956246001f57519c26ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.872838   17336 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0703 22:48:14.872871   17336 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0703 22:48:14.872894   17336 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0703 22:48:14.872916   17336 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
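The profile certificates generated above can be inspected locally; the apiserver certificate should carry the SANs requested in the generation step (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.226). Illustrative openssl check:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.crt \
      | grep -A1 'Subject Alternative Name'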
	I0703 22:48:14.873490   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 22:48:14.911108   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 22:48:14.943323   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 22:48:14.976483   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0703 22:48:15.004167   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0703 22:48:15.030815   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0703 22:48:15.058735   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 22:48:15.085824   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0703 22:48:15.112839   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 22:48:15.139748   17336 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0703 22:48:15.158980   17336 ssh_runner.go:195] Run: openssl version
	I0703 22:48:15.165615   17336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 22:48:15.178437   17336 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 22:48:15.184027   17336 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 22:48:15.184096   17336 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 22:48:15.190536   17336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
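The two commands above install minikubeCA.pem into the guest's trust store: the PEM is linked into /usr/share/ca-certificates and then symlinked under its OpenSSL subject hash so TLS clients on the guest can find it. The hash b5213941 comes straight from the certificate:

    # The <hash>.0 symlink name is the OpenSSL subject hash of the CA.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0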
	I0703 22:48:15.202900   17336 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 22:48:15.207605   17336 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0703 22:48:15.207663   17336 kubeadm.go:391] StartCluster: {Name:addons-224553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:addons-224553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 22:48:15.207748   17336 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0703 22:48:15.207812   17336 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0703 22:48:15.247710   17336 cri.go:89] found id: ""
	I0703 22:48:15.247784   17336 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0703 22:48:15.258745   17336 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0703 22:48:15.270139   17336 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0703 22:48:15.281463   17336 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0703 22:48:15.281487   17336 kubeadm.go:156] found existing configuration files:
	
	I0703 22:48:15.281542   17336 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0703 22:48:15.291568   17336 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0703 22:48:15.291636   17336 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0703 22:48:15.302447   17336 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0703 22:48:15.312964   17336 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0703 22:48:15.313015   17336 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0703 22:48:15.323858   17336 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0703 22:48:15.334567   17336 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0703 22:48:15.334623   17336 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0703 22:48:15.345341   17336 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0703 22:48:15.355789   17336 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0703 22:48:15.355848   17336 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0703 22:48:15.366807   17336 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0703 22:48:15.569319   17336 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0703 22:48:25.849286   17336 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0703 22:48:25.849351   17336 kubeadm.go:309] [preflight] Running pre-flight checks
	I0703 22:48:25.849437   17336 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0703 22:48:25.849588   17336 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0703 22:48:25.849765   17336 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0703 22:48:25.849876   17336 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0703 22:48:25.851491   17336 out.go:204]   - Generating certificates and keys ...
	I0703 22:48:25.851590   17336 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0703 22:48:25.851676   17336 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0703 22:48:25.851774   17336 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0703 22:48:25.851854   17336 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0703 22:48:25.851940   17336 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0703 22:48:25.852015   17336 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0703 22:48:25.852092   17336 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0703 22:48:25.852235   17336 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-224553 localhost] and IPs [192.168.39.226 127.0.0.1 ::1]
	I0703 22:48:25.852294   17336 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0703 22:48:25.852427   17336 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-224553 localhost] and IPs [192.168.39.226 127.0.0.1 ::1]
	I0703 22:48:25.852485   17336 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0703 22:48:25.852561   17336 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0703 22:48:25.852602   17336 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0703 22:48:25.852654   17336 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0703 22:48:25.852705   17336 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0703 22:48:25.852757   17336 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0703 22:48:25.852803   17336 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0703 22:48:25.852904   17336 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0703 22:48:25.852963   17336 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0703 22:48:25.853038   17336 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0703 22:48:25.853114   17336 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0703 22:48:25.854692   17336 out.go:204]   - Booting up control plane ...
	I0703 22:48:25.854805   17336 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0703 22:48:25.854910   17336 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0703 22:48:25.854984   17336 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0703 22:48:25.855073   17336 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0703 22:48:25.855156   17336 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0703 22:48:25.855225   17336 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0703 22:48:25.855388   17336 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0703 22:48:25.855510   17336 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0703 22:48:25.855602   17336 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.740984ms
	I0703 22:48:25.855697   17336 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0703 22:48:25.855780   17336 kubeadm.go:309] [api-check] The API server is healthy after 5.502236524s
	I0703 22:48:25.855953   17336 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0703 22:48:25.856086   17336 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0703 22:48:25.856138   17336 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0703 22:48:25.856342   17336 kubeadm.go:309] [mark-control-plane] Marking the node addons-224553 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0703 22:48:25.856426   17336 kubeadm.go:309] [bootstrap-token] Using token: x971cj.9c58key722wzoeyj
	I0703 22:48:25.857924   17336 out.go:204]   - Configuring RBAC rules ...
	I0703 22:48:25.858019   17336 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0703 22:48:25.858089   17336 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0703 22:48:25.858215   17336 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0703 22:48:25.858336   17336 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0703 22:48:25.858444   17336 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0703 22:48:25.858525   17336 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0703 22:48:25.858642   17336 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0703 22:48:25.858701   17336 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0703 22:48:25.858745   17336 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0703 22:48:25.858751   17336 kubeadm.go:309] 
	I0703 22:48:25.858800   17336 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0703 22:48:25.858805   17336 kubeadm.go:309] 
	I0703 22:48:25.858877   17336 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0703 22:48:25.858885   17336 kubeadm.go:309] 
	I0703 22:48:25.858921   17336 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0703 22:48:25.858992   17336 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0703 22:48:25.859065   17336 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0703 22:48:25.859074   17336 kubeadm.go:309] 
	I0703 22:48:25.859125   17336 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0703 22:48:25.859131   17336 kubeadm.go:309] 
	I0703 22:48:25.859171   17336 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0703 22:48:25.859177   17336 kubeadm.go:309] 
	I0703 22:48:25.859236   17336 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0703 22:48:25.859311   17336 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0703 22:48:25.859383   17336 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0703 22:48:25.859391   17336 kubeadm.go:309] 
	I0703 22:48:25.859474   17336 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0703 22:48:25.859548   17336 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0703 22:48:25.859555   17336 kubeadm.go:309] 
	I0703 22:48:25.859631   17336 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token x971cj.9c58key722wzoeyj \
	I0703 22:48:25.859725   17336 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 \
	I0703 22:48:25.859745   17336 kubeadm.go:309] 	--control-plane 
	I0703 22:48:25.859749   17336 kubeadm.go:309] 
	I0703 22:48:25.859817   17336 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0703 22:48:25.859823   17336 kubeadm.go:309] 
	I0703 22:48:25.859906   17336 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token x971cj.9c58key722wzoeyj \
	I0703 22:48:25.860008   17336 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 
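
For reference, the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be re-derived on the node from the certificateDir logged earlier (/var/lib/minikube/certs); this is the standard kubeadm verification recipe, shown here as a sketch rather than a step the test ran:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
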
	I0703 22:48:25.860019   17336 cni.go:84] Creating CNI manager for ""
	I0703 22:48:25.860025   17336 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 22:48:25.861503   17336 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0703 22:48:25.862807   17336 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0703 22:48:25.874755   17336 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
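
The 496-byte conflist written above is what selects the built-in bridge CNI for this crio cluster. A representative bridge conflist is sketched below; the field values (including the 10.244.0.0/16 pod subnet) are assumptions, so the actual file minikube wrote may differ in detail:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
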
	I0703 22:48:25.898038   17336 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0703 22:48:25.898167   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:25.898179   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-224553 minikube.k8s.io/updated_at=2024_07_03T22_48_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e minikube.k8s.io/name=addons-224553 minikube.k8s.io/primary=true
	I0703 22:48:25.933488   17336 ops.go:34] apiserver oom_adj: -16
	I0703 22:48:26.048446   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:26.549223   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:27.049228   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:27.548887   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:28.048734   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:28.548593   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:29.048734   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:29.548782   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:30.048764   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:30.549072   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:31.048936   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:31.548495   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:32.048493   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:32.549477   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:33.049265   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:33.549489   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:34.049091   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:34.549080   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:35.048475   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:35.549107   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:36.049204   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:36.549330   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:37.049178   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:37.548838   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:38.049511   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:38.548935   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:39.049031   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:39.548814   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:39.643398   17336 kubeadm.go:1107] duration metric: took 13.745302371s to wait for elevateKubeSystemPrivileges
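
The repeated "kubectl get sa default" runs above are a poll: minikube keeps retrying until the default ServiceAccount exists before it reports elevateKubeSystemPrivileges complete, which is the ~13.7s wait recorded on this line. A rough shell equivalent of that loop (a sketch of the behaviour seen in the log, not the actual Go implementation):

    until sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
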
	W0703 22:48:39.643441   17336 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0703 22:48:39.643449   17336 kubeadm.go:393] duration metric: took 24.435790735s to StartCluster
	I0703 22:48:39.643465   17336 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:39.643594   17336 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 22:48:39.643972   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:39.644169   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0703 22:48:39.644183   17336 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 22:48:39.644278   17336 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
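
The toEnable map above lists which addons this profile will turn on (true) or leave off (false). The same switches are exposed per profile on the minikube CLI; assumed usage, with addon names taken from the map:

    minikube -p addons-224553 addons list
    minikube -p addons-224553 addons enable metrics-server
    minikube -p addons-224553 addons disable volcano
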
	I0703 22:48:39.644352   17336 config.go:182] Loaded profile config "addons-224553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 22:48:39.644386   17336 addons.go:69] Setting yakd=true in profile "addons-224553"
	I0703 22:48:39.644397   17336 addons.go:69] Setting metrics-server=true in profile "addons-224553"
	I0703 22:48:39.644413   17336 addons.go:69] Setting gcp-auth=true in profile "addons-224553"
	I0703 22:48:39.644427   17336 addons.go:234] Setting addon yakd=true in "addons-224553"
	I0703 22:48:39.644420   17336 addons.go:69] Setting inspektor-gadget=true in profile "addons-224553"
	I0703 22:48:39.644442   17336 addons.go:69] Setting volcano=true in profile "addons-224553"
	I0703 22:48:39.644448   17336 mustload.go:65] Loading cluster: addons-224553
	I0703 22:48:39.644460   17336 addons.go:234] Setting addon volcano=true in "addons-224553"
	I0703 22:48:39.644468   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.644470   17336 addons.go:69] Setting helm-tiller=true in profile "addons-224553"
	I0703 22:48:39.644510   17336 addons.go:69] Setting ingress-dns=true in profile "addons-224553"
	I0703 22:48:39.644538   17336 addons.go:234] Setting addon ingress-dns=true in "addons-224553"
	I0703 22:48:39.644551   17336 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-224553"
	I0703 22:48:39.644568   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.644596   17336 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-224553"
	I0703 22:48:39.644593   17336 addons.go:69] Setting cloud-spanner=true in profile "addons-224553"
	I0703 22:48:39.644620   17336 addons.go:234] Setting addon cloud-spanner=true in "addons-224553"
	I0703 22:48:39.644628   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.644646   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.644656   17336 config.go:182] Loaded profile config "addons-224553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 22:48:39.644904   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.644935   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.644964   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.644986   17336 addons.go:69] Setting default-storageclass=true in profile "addons-224553"
	I0703 22:48:39.645033   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.644433   17336 addons.go:69] Setting registry=true in profile "addons-224553"
	I0703 22:48:39.645087   17336 addons.go:234] Setting addon registry=true in "addons-224553"
	I0703 22:48:39.645034   17336 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-224553"
	I0703 22:48:39.644429   17336 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-224553"
	I0703 22:48:39.645114   17336 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-224553"
	I0703 22:48:39.645137   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.644424   17336 addons.go:234] Setting addon metrics-server=true in "addons-224553"
	I0703 22:48:39.645227   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.644973   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.645337   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.644541   17336 addons.go:234] Setting addon helm-tiller=true in "addons-224553"
	I0703 22:48:39.644491   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.645400   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.645445   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.645465   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.645610   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.645636   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.645759   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.645780   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.644462   17336 addons.go:234] Setting addon inspektor-gadget=true in "addons-224553"
	I0703 22:48:39.644496   17336 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-224553"
	I0703 22:48:39.645904   17336 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-224553"
	I0703 22:48:39.644503   17336 addons.go:69] Setting ingress=true in profile "addons-224553"
	I0703 22:48:39.645998   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.646011   17336 addons.go:234] Setting addon ingress=true in "addons-224553"
	I0703 22:48:39.646021   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.646047   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.646078   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.646094   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.646150   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.644978   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.646251   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.644392   17336 addons.go:69] Setting storage-provisioner=true in profile "addons-224553"
	I0703 22:48:39.646311   17336 addons.go:234] Setting addon storage-provisioner=true in "addons-224553"
	I0703 22:48:39.646370   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.646388   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.644985   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.646452   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.646460   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.646475   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.646496   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.646520   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.644990   17336 addons.go:69] Setting volumesnapshots=true in profile "addons-224553"
	I0703 22:48:39.646586   17336 addons.go:234] Setting addon volumesnapshots=true in "addons-224553"
	I0703 22:48:39.646696   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.647021   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.647043   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.647045   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.647022   17336 out.go:177] * Verifying Kubernetes components...
	I0703 22:48:39.647446   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.647475   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.647661   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.648367   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.648399   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.660051   17336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 22:48:39.667037   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44937
	I0703 22:48:39.667489   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.668020   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.668048   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.668432   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.668963   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.668999   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.670986   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42681
	I0703 22:48:39.671433   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.671916   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.671932   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.672318   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.672869   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.672904   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.673919   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40209
	I0703 22:48:39.674346   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.674424   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34825
	I0703 22:48:39.674704   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.675129   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.675145   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.675249   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.675258   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.675559   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.675676   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.675720   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.676608   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.676643   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.682597   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0703 22:48:39.682795   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43201
	I0703 22:48:39.684950   17336 addons.go:234] Setting addon default-storageclass=true in "addons-224553"
	I0703 22:48:39.684994   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.685380   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.685415   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.686314   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.686893   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.687605   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.687615   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.687624   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.687632   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.687996   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.688592   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.688631   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.690232   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33419
	I0703 22:48:39.690272   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.691079   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.691119   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.691517   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.692093   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.692114   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.692987   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.693433   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.693472   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.698244   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45759
	I0703 22:48:39.698327   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35927
	I0703 22:48:39.698260   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I0703 22:48:39.698869   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.698990   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.699435   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.699461   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.699762   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.699826   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.699846   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.700278   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.700314   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.700400   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.700832   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.700860   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.700916   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.701450   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.704195   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.708556   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.709502   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.709530   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.714514   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36373
	I0703 22:48:39.716244   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.716777   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.716797   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.717168   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.722969   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45917
	I0703 22:48:39.724242   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0703 22:48:39.724790   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.724826   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.725140   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.725355   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46019
	I0703 22:48:39.725889   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.725960   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.725981   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.726310   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.726410   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33491
	I0703 22:48:39.726555   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.726805   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.726828   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.726884   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.727228   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.727357   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.727371   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.727419   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.727651   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.727768   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.727814   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.727947   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.728542   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.728562   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.729055   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.729774   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.729943   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.730457   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.732691   17336 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0703 22:48:39.732739   17336 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0703 22:48:39.733074   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35503
	I0703 22:48:39.733343   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.734412   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.734698   17336 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0703 22:48:39.734742   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0703 22:48:39.734774   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.734750   17336 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0703 22:48:39.734853   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0703 22:48:39.734877   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.734925   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.734946   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.735288   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.735361   17336 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0703 22:48:39.735522   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.737841   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44669
	I0703 22:48:39.737985   17336 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0703 22:48:39.738252   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.738854   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.738870   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.739242   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.739288   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.739482   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.739826   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.739847   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.739982   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.740125   17336 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0703 22:48:39.740339   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.740525   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.740690   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.740731   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.740972   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.741070   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.741173   17336 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0703 22:48:39.741253   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.741294   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.741508   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.741820   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.742009   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.742271   17336 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0703 22:48:39.742707   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.743793   17336 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0703 22:48:39.744500   17336 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0703 22:48:39.744519   17336 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0703 22:48:39.745643   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34881
	I0703 22:48:39.746057   17336 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0703 22:48:39.746076   17336 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0703 22:48:39.746095   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.746154   17336 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0703 22:48:39.746850   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.747277   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40113
	I0703 22:48:39.747608   17336 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0703 22:48:39.747623   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.748108   17336 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0703 22:48:39.748124   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0703 22:48:39.748139   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.748144   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.748164   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.749106   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.749124   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.749511   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.750016   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.750329   17336 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0703 22:48:39.750625   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.750656   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.750881   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.751436   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.751476   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.751491   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.751762   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.751958   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.752143   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.752330   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.752576   17336 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0703 22:48:39.752884   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.752905   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45029
	I0703 22:48:39.753529   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.753560   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.753562   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.753696   17336 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0703 22:48:39.753713   17336 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0703 22:48:39.753730   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.753810   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.754089   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.754112   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.754169   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.754296   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.755040   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.756245   17336 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-224553"
	I0703 22:48:39.756280   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.756634   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.756667   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.756904   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.757372   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.757470   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.757801   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.757833   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.757979   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.758127   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.758254   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.758386   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.759298   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.761022   17336 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0703 22:48:39.762114   17336 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0703 22:48:39.762133   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0703 22:48:39.762152   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.764710   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44407
	I0703 22:48:39.765406   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.765452   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43841
	I0703 22:48:39.766205   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.766233   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.766355   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.766383   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.766385   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.766476   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.766631   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.766927   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.767054   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.767084   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.767496   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.767723   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.767811   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.767831   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.767976   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39717
	I0703 22:48:39.768725   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.768905   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.769114   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.769451   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34839
	I0703 22:48:39.769601   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.770346   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.770886   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.770906   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.771269   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.771838   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.771893   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.771871   17336 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.1
	I0703 22:48:39.772076   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44975
	I0703 22:48:39.772380   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.772403   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.772869   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.773098   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.773245   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.773366   17336 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0703 22:48:39.773380   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0703 22:48:39.773397   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.773581   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.773593   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.774022   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.774263   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.774220   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.774223   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40819
	I0703 22:48:39.774478   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.774929   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.775214   17336 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0703 22:48:39.775477   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.775497   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.775834   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.776030   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.776380   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.776628   17336 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0703 22:48:39.776730   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0703 22:48:39.776755   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.777994   17336 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0703 22:48:39.778501   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.778600   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.778786   17336 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0703 22:48:39.778800   17336 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0703 22:48:39.778825   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.779101   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.779125   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.779278   17336 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0703 22:48:39.779293   17336 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0703 22:48:39.779319   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.779569   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.779757   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.779909   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.780254   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.782486   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.782910   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.782928   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.782962   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.783145   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.783349   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.783522   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.783547   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.783551   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.783703   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.783760   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.784002   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.784043   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.784173   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.784195   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.784239   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.784388   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.784623   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.784802   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.784959   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.785108   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.788173   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36487
	I0703 22:48:39.788722   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.789271   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.789295   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.789696   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.790050   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34157
	I0703 22:48:39.790325   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.790370   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.790461   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.790951   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.790976   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.791329   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.791592   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.793317   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.793694   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.793732   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.799382   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38129
	I0703 22:48:39.799845   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46079
	I0703 22:48:39.799907   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.800415   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.800435   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.800459   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.800846   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.800863   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.800918   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.801174   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.801239   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.801337   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.803307   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.803589   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.805456   17336 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0703 22:48:39.805468   17336 out.go:177]   - Using image docker.io/registry:2.8.3
	I0703 22:48:39.806684   17336 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0703 22:48:39.806703   17336 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0703 22:48:39.806726   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.807941   17336 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0703 22:48:39.809185   17336 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0703 22:48:39.809203   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0703 22:48:39.809224   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.810211   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39191
	I0703 22:48:39.810378   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.810587   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.810650   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.810680   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.810816   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.810986   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.811101   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.811120   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.811203   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.811246   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41463
	I0703 22:48:39.811386   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.811441   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.811507   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.811814   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.811999   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.812022   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.812992   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.813152   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.813238   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.813429   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.813935   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.813971   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.814587   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.814798   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.814845   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.814953   17336 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0703 22:48:39.814973   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.815016   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:39.815029   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:39.815143   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.815173   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:39.815151   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:39.815188   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:39.815195   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:39.815201   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:39.815368   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:39.815382   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	W0703 22:48:39.815450   17336 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0703 22:48:39.817615   17336 out.go:177]   - Using image docker.io/busybox:stable
	I0703 22:48:39.817963   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36393
	I0703 22:48:39.818263   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.818722   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.818748   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.818842   17336 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0703 22:48:39.818857   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0703 22:48:39.818869   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.819026   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.819179   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.821216   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.822559   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.822807   17336 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0703 22:48:39.822992   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.823011   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.823182   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.823386   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.823558   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.823666   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.823864   17336 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0703 22:48:39.823909   17336 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0703 22:48:39.823926   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	W0703 22:48:39.825879   17336 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:42352->192.168.39.226:22: read: connection reset by peer
	I0703 22:48:39.825904   17336 retry.go:31] will retry after 287.956133ms: ssh: handshake failed: read tcp 192.168.39.1:42352->192.168.39.226:22: read: connection reset by peer
	I0703 22:48:39.826655   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.827146   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.827165   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.827369   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.827564   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.827672   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.827764   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.829595   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34383
	I0703 22:48:39.829917   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.830431   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.830446   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.830705   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.830926   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:40.079721   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0703 22:48:40.134233   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0703 22:48:40.181206   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0703 22:48:40.182841   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0703 22:48:40.204402   17336 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0703 22:48:40.204423   17336 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0703 22:48:40.256900   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0703 22:48:40.266734   17336 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0703 22:48:40.266755   17336 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0703 22:48:40.282768   17336 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0703 22:48:40.282796   17336 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0703 22:48:40.294893   17336 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0703 22:48:40.294914   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0703 22:48:40.327400   17336 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0703 22:48:40.327433   17336 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0703 22:48:40.357620   17336 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0703 22:48:40.357643   17336 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0703 22:48:40.411966   17336 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 22:48:40.412017   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0703 22:48:40.421659   17336 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0703 22:48:40.421681   17336 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0703 22:48:40.447851   17336 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0703 22:48:40.447895   17336 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0703 22:48:40.516621   17336 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0703 22:48:40.516648   17336 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0703 22:48:40.519358   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0703 22:48:40.530210   17336 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0703 22:48:40.530233   17336 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0703 22:48:40.559518   17336 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0703 22:48:40.559548   17336 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0703 22:48:40.623727   17336 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0703 22:48:40.623754   17336 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0703 22:48:40.688492   17336 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0703 22:48:40.688523   17336 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0703 22:48:40.743066   17336 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0703 22:48:40.743093   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0703 22:48:40.749857   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0703 22:48:40.775816   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0703 22:48:40.878348   17336 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0703 22:48:40.878377   17336 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0703 22:48:40.904986   17336 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0703 22:48:40.905005   17336 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0703 22:48:40.986224   17336 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0703 22:48:40.986250   17336 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0703 22:48:40.988423   17336 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0703 22:48:40.988440   17336 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0703 22:48:41.025646   17336 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0703 22:48:41.025662   17336 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0703 22:48:41.081952   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0703 22:48:41.202145   17336 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0703 22:48:41.202180   17336 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0703 22:48:41.258861   17336 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0703 22:48:41.258885   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0703 22:48:41.275684   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0703 22:48:41.310924   17336 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0703 22:48:41.310958   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0703 22:48:41.330028   17336 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0703 22:48:41.330058   17336 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0703 22:48:41.650425   17336 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0703 22:48:41.650447   17336 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0703 22:48:41.674597   17336 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0703 22:48:41.674617   17336 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0703 22:48:41.685464   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0703 22:48:41.686467   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.606717364s)
	I0703 22:48:41.686500   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:41.686510   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:41.686801   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:41.686822   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:41.686833   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:41.686843   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:41.687198   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:41.687246   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:41.687258   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:41.693490   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:41.693514   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:41.693823   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:41.693845   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:41.755645   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0703 22:48:41.891776   17336 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0703 22:48:41.891808   17336 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0703 22:48:41.893902   17336 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0703 22:48:41.893925   17336 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0703 22:48:41.954505   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.820231954s)
	I0703 22:48:41.954551   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:41.954559   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:41.954851   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:41.954870   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:41.954885   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:41.954893   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:41.954854   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:41.955224   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:41.955258   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:41.955274   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:42.090381   17336 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0703 22:48:42.090399   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0703 22:48:42.195193   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0703 22:48:42.333722   17336 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0703 22:48:42.333744   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0703 22:48:42.574233   17336 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0703 22:48:42.574256   17336 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0703 22:48:42.956006   17336 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0703 22:48:42.956027   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0703 22:48:43.409304   17336 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0703 22:48:43.409332   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0703 22:48:43.813891   17336 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0703 22:48:43.813922   17336 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0703 22:48:44.086864   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0703 22:48:45.073346   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.892093242s)
	I0703 22:48:45.073399   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:45.073414   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:45.073681   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:45.073780   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:45.073802   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:45.073813   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:45.073822   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:45.074137   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:45.074151   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:46.913386   17336 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0703 22:48:46.913429   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:46.916270   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:46.916674   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:46.916721   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:46.916826   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:46.917022   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:46.917203   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:46.917336   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:47.335079   17336 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0703 22:48:47.448710   17336 addons.go:234] Setting addon gcp-auth=true in "addons-224553"
	I0703 22:48:47.448768   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:47.449210   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:47.449246   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:47.464361   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39071
	I0703 22:48:47.464929   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:47.465485   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:47.465510   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:47.465889   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:47.466480   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:47.466509   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:47.482806   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0703 22:48:47.483221   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:47.483741   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:47.483763   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:47.484122   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:47.484368   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:47.486155   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:47.486403   17336 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0703 22:48:47.486435   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:47.489294   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:47.489781   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:47.489812   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:47.490022   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:47.490196   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:47.490352   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:47.490474   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:48.520052   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.337177521s)
	I0703 22:48:48.520098   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.263163243s)
	I0703 22:48:48.520132   17336 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.108089692s)
	I0703 22:48:48.520152   17336 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.108159677s)
	I0703 22:48:48.520159   17336 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0703 22:48:48.520181   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.000804671s)
	I0703 22:48:48.520202   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520215   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520136   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520281   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520318   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.770432201s)
	I0703 22:48:48.520107   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520339   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520346   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520355   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520422   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.744569261s)
	I0703 22:48:48.520440   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520448   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520503   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.438522753s)
	I0703 22:48:48.520517   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520525   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520580   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.520590   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.520599   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520607   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520609   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.244895008s)
	I0703 22:48:48.520625   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520648   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520675   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.835184153s)
	I0703 22:48:48.520701   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520707   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.520713   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520736   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.520763   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.520771   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520779   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520812   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.765134113s)
	W0703 22:48:48.520838   17336 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0703 22:48:48.520857   17336 retry.go:31] will retry after 344.93177ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0703 22:48:48.520942   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.325716721s)
	I0703 22:48:48.520958   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520966   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.521103   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.521121   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.521145   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.521151   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.521166   17336 addons.go:475] Verifying addon ingress=true in "addons-224553"
	I0703 22:48:48.521291   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.521305   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.521316   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.521321   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.521324   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.521375   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.521384   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.521393   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.521401   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.521402   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.521409   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.521410   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.521419   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.521425   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.521233   17336 node_ready.go:35] waiting up to 6m0s for node "addons-224553" to be "Ready" ...
	I0703 22:48:48.521597   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.521608   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.521687   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.521704   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.521714   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.521740   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.521757   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.521764   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.521726   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.521782   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.521790   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.521799   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.521857   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.521865   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.521926   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.521936   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.522121   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.522145   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.522151   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.522159   17336 addons.go:475] Verifying addon metrics-server=true in "addons-224553"
	I0703 22:48:48.523466   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.523497   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.523505   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.523512   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.523520   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.524653   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.524684   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.524692   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.524700   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.524707   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.524746   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.524752   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.524751   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.524775   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.524780   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.524782   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.524987   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.524995   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.525370   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.525416   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.525439   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.525445   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.525455   17336 addons.go:475] Verifying addon registry=true in "addons-224553"
	I0703 22:48:48.525976   17336 out.go:177] * Verifying ingress addon...
	I0703 22:48:48.526728   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.526744   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.527714   17336 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-224553 service yakd-dashboard -n yakd-dashboard
	
	I0703 22:48:48.528459   17336 out.go:177] * Verifying registry addon...
	I0703 22:48:48.529125   17336 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0703 22:48:48.530895   17336 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0703 22:48:48.544279   17336 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0703 22:48:48.544301   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:48.544581   17336 node_ready.go:49] node "addons-224553" has status "Ready":"True"
	I0703 22:48:48.544605   17336 node_ready.go:38] duration metric: took 23.04857ms for node "addons-224553" to be "Ready" ...
	I0703 22:48:48.544617   17336 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 22:48:48.555574   17336 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0703 22:48:48.555605   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:48.584978   17336 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4lgcj" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.593331   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.593354   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.593757   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.593773   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.593789   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.632188   17336 pod_ready.go:92] pod "coredns-7db6d8ff4d-4lgcj" in "kube-system" namespace has status "Ready":"True"
	I0703 22:48:48.632214   17336 pod_ready.go:81] duration metric: took 47.208519ms for pod "coredns-7db6d8ff4d-4lgcj" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.632223   17336 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-h6q2w" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.658073   17336 pod_ready.go:92] pod "coredns-7db6d8ff4d-h6q2w" in "kube-system" namespace has status "Ready":"True"
	I0703 22:48:48.658093   17336 pod_ready.go:81] duration metric: took 25.864268ms for pod "coredns-7db6d8ff4d-h6q2w" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.658103   17336 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-224553" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.718601   17336 pod_ready.go:92] pod "etcd-addons-224553" in "kube-system" namespace has status "Ready":"True"
	I0703 22:48:48.718635   17336 pod_ready.go:81] duration metric: took 60.524732ms for pod "etcd-addons-224553" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.718648   17336 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-224553" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.751864   17336 pod_ready.go:92] pod "kube-apiserver-addons-224553" in "kube-system" namespace has status "Ready":"True"
	I0703 22:48:48.751902   17336 pod_ready.go:81] duration metric: took 33.245273ms for pod "kube-apiserver-addons-224553" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.751916   17336 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-224553" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.866992   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0703 22:48:48.925435   17336 pod_ready.go:92] pod "kube-controller-manager-addons-224553" in "kube-system" namespace has status "Ready":"True"
	I0703 22:48:48.925468   17336 pod_ready.go:81] duration metric: took 173.544287ms for pod "kube-controller-manager-addons-224553" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.925481   17336 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ll2cf" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:49.024077   17336 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-224553" context rescaled to 1 replicas
	I0703 22:48:49.042161   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:49.042169   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:49.328058   17336 pod_ready.go:92] pod "kube-proxy-ll2cf" in "kube-system" namespace has status "Ready":"True"
	I0703 22:48:49.328086   17336 pod_ready.go:81] duration metric: took 402.597588ms for pod "kube-proxy-ll2cf" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:49.328100   17336 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-224553" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:49.536447   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:49.542914   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:49.725625   17336 pod_ready.go:92] pod "kube-scheduler-addons-224553" in "kube-system" namespace has status "Ready":"True"
	I0703 22:48:49.725649   17336 pod_ready.go:81] duration metric: took 397.540693ms for pod "kube-scheduler-addons-224553" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:49.725662   17336 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:50.033025   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:50.038628   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:50.549502   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:50.549633   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:50.935997   17336 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.449569037s)
	I0703 22:48:50.936002   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.849095813s)
	I0703 22:48:50.936124   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:50.936171   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:50.936465   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:50.936511   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:50.936524   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:50.936534   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:50.936573   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:50.936738   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:50.936750   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:50.936759   17336 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-224553"
	I0703 22:48:50.937948   17336 out.go:177] * Verifying csi-hostpath-driver addon...
	I0703 22:48:50.937957   17336 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0703 22:48:50.939835   17336 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0703 22:48:50.940608   17336 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0703 22:48:50.941066   17336 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0703 22:48:50.941086   17336 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0703 22:48:50.963735   17336 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0703 22:48:50.963762   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:51.034662   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:51.053982   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:51.080242   17336 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0703 22:48:51.080275   17336 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0703 22:48:51.187514   17336 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0703 22:48:51.187546   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0703 22:48:51.256967   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0703 22:48:51.306847   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.439800793s)
	I0703 22:48:51.306908   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:51.306921   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:51.307172   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:51.307189   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:51.307199   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:51.307206   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:51.307511   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:51.307524   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:51.447777   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:51.534170   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:51.535693   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:51.732472   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:48:51.947074   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:52.033833   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:52.036335   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:52.464729   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.207719585s)
	I0703 22:48:52.464777   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:52.464793   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:52.465141   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:52.465161   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:52.465171   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:52.465179   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:52.465431   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:52.465488   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:52.467494   17336 addons.go:475] Verifying addon gcp-auth=true in "addons-224553"
	I0703 22:48:52.469991   17336 out.go:177] * Verifying gcp-auth addon...
	I0703 22:48:52.471955   17336 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0703 22:48:52.511812   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:52.524049   17336 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0703 22:48:52.524075   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:52.549409   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:52.558898   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:52.954579   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:52.976683   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:53.034301   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:53.039609   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:53.446738   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:53.475920   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:53.534415   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:53.537046   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:53.737567   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:48:53.946922   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:53.975798   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:54.035208   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:54.039131   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:54.448422   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:54.480086   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:54.540614   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:54.542761   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:54.952362   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:54.975883   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:55.033643   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:55.036663   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:55.445733   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:55.475964   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:55.534669   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:55.536367   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:55.947156   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:55.975354   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:56.315523   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:56.316904   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:56.319217   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:48:56.446711   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:56.475734   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:56.534842   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:56.537034   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:56.946968   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:56.976812   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:57.033914   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:57.036923   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:57.447569   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:57.475558   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:57.535031   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:57.537556   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:57.946887   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:57.975865   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:58.036462   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:58.036725   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:58.450457   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:58.476884   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:58.534355   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:58.536376   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:58.732083   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:48:58.946476   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:58.976118   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:59.034397   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:59.035812   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:59.446117   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:59.475067   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:59.535852   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:59.536290   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:59.946866   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:59.976703   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:00.035325   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:00.035759   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:00.447836   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:00.476370   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:00.534143   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:00.535587   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:00.946022   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:00.976436   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:01.037386   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:01.037524   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:01.233904   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:01.446417   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:01.476267   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:01.534832   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:01.538034   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:01.945529   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:01.976100   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:02.034313   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:02.036866   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:02.446286   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:02.475848   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:02.534063   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:02.535646   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:02.946211   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:02.975934   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:03.033930   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:03.036585   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:03.445904   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:03.476723   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:03.533902   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:03.535870   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:03.734269   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:03.946463   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:03.975905   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:04.033777   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:04.036093   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:04.446347   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:04.475958   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:04.536504   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:04.536762   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:04.946139   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:04.975645   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:05.034287   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:05.036556   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:05.446268   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:05.476195   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:05.534370   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:05.536193   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:05.946998   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:05.975338   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:06.034151   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:06.037541   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:06.232501   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:06.447178   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:06.475600   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:06.533384   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:06.536522   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:06.946223   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:06.975690   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:07.036226   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:07.037453   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:07.445602   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:07.476630   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:07.534855   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:07.538596   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:07.946441   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:07.976340   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:08.035593   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:08.035630   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:08.447287   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:08.476081   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:08.533789   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:08.535812   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:08.733389   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:08.948672   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:08.975684   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:09.033613   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:09.036705   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:09.447700   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:09.476671   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:09.544067   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:09.548323   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:09.954454   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:09.983814   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:10.053608   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:10.054339   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:10.448236   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:10.475899   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:10.534114   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:10.537697   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:10.947166   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:10.975844   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:11.034131   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:11.035959   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:11.233283   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:11.446487   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:11.475795   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:11.533718   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:11.536721   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:11.949324   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:11.975899   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:12.036603   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:12.037095   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:12.446515   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:12.475965   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:12.534903   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:12.535119   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:12.946499   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:12.976088   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:13.036088   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:13.040087   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:13.447191   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:13.480160   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:13.534553   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:13.539300   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:13.732699   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:13.946196   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:13.975337   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:14.034568   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:14.035909   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:14.446113   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:14.475517   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:14.533501   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:14.536258   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:14.946659   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:14.976647   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:15.034066   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:15.036778   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:15.446179   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:15.475647   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:15.534007   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:15.538773   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:15.951919   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:15.976018   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:16.037605   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:16.040631   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:16.232727   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:16.448690   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:16.476342   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:16.537954   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:16.538241   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:16.946116   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:16.976135   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:17.035351   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:17.043002   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:17.447806   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:17.475898   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:17.535795   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:17.542072   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:17.946075   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:17.975643   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:18.033925   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:18.037796   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:18.233317   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:18.446683   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:18.476439   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:18.533504   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:18.537104   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:18.946659   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:18.976834   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:19.033707   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:19.036401   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:19.446173   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:19.475985   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:19.534442   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:19.535991   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:19.946732   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:19.975066   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:20.035931   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:20.036425   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:20.451587   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:20.476244   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:20.534800   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:20.537022   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:20.731511   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:20.946800   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:20.975248   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:21.035088   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:21.045919   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:21.448049   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:21.476410   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:21.535565   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:21.537220   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:21.946456   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:21.976041   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:22.034189   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:22.035713   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:22.446573   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:22.476945   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:22.534205   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:22.536609   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:22.732321   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:22.946372   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:22.975753   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:23.033690   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:23.037343   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:23.447557   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:23.476543   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:23.533427   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:23.535905   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:23.946261   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:23.975637   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:24.033989   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:24.036746   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:24.446929   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:24.476093   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:24.533914   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:24.535570   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:24.732586   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:24.947337   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:24.977052   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:25.034857   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:25.037248   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:25.449316   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:25.477348   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:25.535468   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:25.536454   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:25.945964   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:25.975486   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:26.033748   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:26.036154   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:26.446060   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:26.475489   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:26.533290   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:26.535994   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:26.951399   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:26.976632   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:27.033576   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:27.035970   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:27.232145   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:27.446322   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:27.476079   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:27.534079   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:27.535277   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:27.946898   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:27.975368   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:28.032973   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:28.036331   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:28.447113   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:28.475898   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:28.534077   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:28.535976   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:28.945899   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:28.975747   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:29.033865   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:29.036479   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:29.232696   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:29.447386   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:29.475967   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:29.534676   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:29.537882   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:29.946928   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:29.975412   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:30.034335   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:30.035372   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:30.447694   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:30.476425   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:30.533430   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:30.536366   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:30.946810   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:30.975695   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:31.034318   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:31.036520   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:31.448880   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:31.475861   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:31.533999   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:31.536745   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:31.732756   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:31.945711   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:31.976779   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:32.034077   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:32.036977   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:32.446773   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:32.475348   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:32.535781   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:32.536028   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:32.946161   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:32.976113   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:33.035997   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:33.036128   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:33.447484   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:33.475822   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:33.534590   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:33.536437   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:33.946974   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:33.975629   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:34.033809   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:34.037275   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:34.234546   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:34.448266   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:34.475187   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:34.536526   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:34.537012   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:34.946611   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:34.977720   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:35.033543   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:35.035677   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:35.446515   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:35.477400   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:35.533804   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:35.536118   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:35.946412   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:35.976472   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:36.034183   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:36.037147   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:36.448561   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:36.476021   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:36.534447   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:36.538060   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:36.731848   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:36.945571   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:36.976501   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:37.033632   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:37.035720   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:37.446613   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:37.476115   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:37.534211   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:37.535121   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:37.946320   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:37.977430   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:38.034254   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:38.035556   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:38.446481   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:38.476490   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:38.533498   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:38.538257   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:38.946579   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:38.976328   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:39.034213   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:39.036034   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:39.232261   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:39.446279   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:39.477472   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:39.533179   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:39.535556   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:39.946647   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:39.976369   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:40.033502   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:40.036611   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:40.447269   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:40.481153   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:40.536974   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:40.537793   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:40.946874   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:40.976419   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:41.033660   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:41.036024   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:41.447072   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:41.475747   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:41.533699   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:41.536582   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:41.732214   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:41.946569   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:41.977530   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:42.033882   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:42.036359   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:42.446636   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:42.476134   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:42.535009   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:42.535035   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:42.947367   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:42.978559   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:43.033702   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:43.036269   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:43.445893   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:43.475309   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:43.534629   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:43.536785   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:43.732881   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:43.945991   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:43.975455   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:44.033212   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:44.036873   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:44.446350   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:44.476861   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:44.533735   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:44.537007   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:44.947630   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:44.976215   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:45.034164   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:45.035251   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:45.447080   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:45.475946   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:45.534041   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:45.536618   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:45.733649   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:45.947663   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:45.976897   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:46.034278   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:46.037083   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:46.448350   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:46.574743   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:46.576323   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:46.576666   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:46.948135   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:46.976382   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:47.035707   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:47.037364   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:47.449718   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:47.476497   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:47.534110   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:47.536278   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:47.946725   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:47.976156   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:48.034268   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:48.035739   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:48.232160   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:48.447341   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:48.476223   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:48.535017   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:48.536661   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:48.946781   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:48.975324   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:49.034411   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:49.036729   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:49.446667   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:49.476440   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:49.533112   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:49.536362   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:49.946836   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:49.975675   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:50.033793   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:50.036206   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:50.232834   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:50.449637   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:50.476041   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:50.534163   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:50.535329   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:50.947259   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:50.976259   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:51.034564   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:51.036177   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:51.453056   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:51.475349   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:51.538321   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:51.540724   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:51.946750   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:51.976268   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:52.043662   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:52.045814   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:52.452119   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:52.478768   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:52.534079   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:52.538523   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:52.733155   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:52.948096   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:52.976095   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:53.034514   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:53.037687   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:53.447616   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:53.479498   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:53.537047   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:53.541794   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:53.946531   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:53.977036   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:54.035137   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:54.036088   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:54.447243   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:54.475566   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:54.533338   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:54.536239   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:54.947260   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:54.976113   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:55.034525   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:55.035340   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:55.232186   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:55.446922   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:55.476146   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:55.534566   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:55.536011   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:55.949684   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:55.976405   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:56.033422   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:56.035732   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:56.447913   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:56.475846   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:56.534097   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:56.536300   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:56.946774   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:56.975969   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:57.034272   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:57.036557   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:57.232638   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:57.446659   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:57.476822   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:57.534775   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:57.536440   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:57.946872   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:57.975791   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:58.033912   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:58.035866   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:58.446713   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:58.476711   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:58.533697   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:58.536301   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:58.947203   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:58.975817   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:59.034080   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:59.036425   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:59.233377   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:59.447137   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:59.475887   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:59.534042   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:59.537315   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:59.947181   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:59.975903   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:00.036054   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:00.040391   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:00.447209   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:00.475844   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:00.534580   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:00.537211   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:00.946644   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:00.976127   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:01.034186   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:01.035757   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:01.233578   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:01.449723   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:01.476914   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:01.534603   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:01.536352   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:01.948070   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:01.978615   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:02.035013   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:02.036480   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:02.446274   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:02.477591   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:02.536083   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:02.539408   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:02.947092   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:02.976206   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:03.034255   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:03.036097   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:03.448090   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:03.477395   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:03.533975   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:03.542745   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:03.732518   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:03.947913   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:03.978673   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:04.034308   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:04.036665   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:04.446732   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:04.475083   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:04.534999   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:04.537173   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:04.945887   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:04.975573   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:05.033524   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:05.036874   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:05.446929   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:05.476890   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:05.533879   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:05.536155   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:05.733511   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:05.950535   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:05.982352   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:06.034251   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:06.038084   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:06.449334   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:06.476130   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:06.535602   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:06.536748   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:06.946865   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:06.976450   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:07.033355   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:07.035780   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:07.446544   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:07.476224   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:07.534244   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:07.536093   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:07.734089   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:07.946695   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:07.976834   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:08.033905   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:08.037125   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:08.447795   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:08.475432   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:08.534682   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:08.536221   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:08.946382   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:08.976184   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:09.034577   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:09.036786   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:09.450317   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:09.476168   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:09.536946   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:09.537129   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:09.947721   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:09.976513   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:10.033232   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:10.037337   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:10.232624   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:10.447268   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:10.475809   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:10.533709   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:10.538473   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:10.948055   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:10.976016   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:11.034094   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:11.039029   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:11.453949   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:11.476566   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:11.534635   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:11.538193   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:11.946590   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:11.976396   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:12.034274   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:12.038487   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:12.233290   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:12.447051   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:12.476072   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:12.535450   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:12.537975   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:12.946943   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:12.975661   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:13.033775   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:13.035974   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:13.447387   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:13.476408   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:13.534044   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:13.535985   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:13.947107   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:13.975733   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:14.033864   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:14.036353   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:14.446766   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:14.476006   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:14.535898   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:14.536109   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:14.732758   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:14.946307   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:14.975669   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:15.033826   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:15.036161   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:15.445720   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:15.476500   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:15.534008   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:15.538083   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:15.946641   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:15.975953   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:16.033881   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:16.037482   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:16.446265   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:16.475730   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:16.533819   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:16.540204   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:16.945522   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:16.975965   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:17.033910   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:17.040060   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:17.232091   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:17.446470   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:17.476169   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:17.534228   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:17.536157   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:18.367410   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:18.374899   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:18.377384   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:18.377808   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:18.446925   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:18.475864   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:18.534452   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:18.536978   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:18.947019   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:18.976701   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:19.033434   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:19.036724   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:19.233372   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:19.446937   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:19.476156   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:19.534548   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:19.535920   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:19.946517   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:19.976831   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:20.034577   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:20.037550   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:20.447482   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:20.476522   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:20.533678   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:20.537696   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:20.953810   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:20.977223   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:21.037516   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:21.037526   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:21.448292   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:21.476324   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:21.536554   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:21.540559   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:21.737009   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:21.946715   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:21.975033   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:22.033973   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:22.035537   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:22.802113   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:22.809270   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:22.809939   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:22.812001   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:22.945873   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:22.976999   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:23.034065   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:23.036033   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:23.445836   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:23.475385   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:23.533306   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:23.536474   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:23.969135   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:23.980654   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:24.047282   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:24.053762   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:24.232389   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:24.460176   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:24.475938   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:24.533846   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:24.537723   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:24.731797   17336 pod_ready.go:92] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"True"
	I0703 22:50:24.731820   17336 pod_ready.go:81] duration metric: took 1m35.006150001s for pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace to be "Ready" ...
	I0703 22:50:24.731830   17336 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-sbhcl" in "kube-system" namespace to be "Ready" ...
	I0703 22:50:24.737848   17336 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-sbhcl" in "kube-system" namespace has status "Ready":"True"
	I0703 22:50:24.737866   17336 pod_ready.go:81] duration metric: took 6.029788ms for pod "nvidia-device-plugin-daemonset-sbhcl" in "kube-system" namespace to be "Ready" ...
	I0703 22:50:24.737885   17336 pod_ready.go:38] duration metric: took 1m36.193250311s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
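The kapi.go:96 and pod_ready.go entries above all record the same pattern: list pods by a label selector, report the current phase, and keep polling until every matching pod carries the Ready condition or a per-selector timeout expires. The following is a minimal client-go sketch of that pattern, not minikube's actual implementation; the namespace, selector, and timeout are illustrative values, and the kubeconfig path is the one that appears in the "describe nodes" invocations above.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForSelector polls the API server until every pod matching the label
    // selector reports the Ready condition, or the timeout expires.
    func waitForSelector(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
    		if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
    			return nil
    		}
    		fmt.Printf("waiting for pod %q ...\n", selector)
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for pods matching %q", selector)
    }

    func allReady(pods []corev1.Pod) bool {
    	for _, p := range pods {
    		ready := false
    		for _, c := range p.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		if !ready {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	// Kubeconfig path taken from the log above.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Namespace and timeout are placeholders; the selector is one polled above.
    	if err := waitForSelector(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 10*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }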
	I0703 22:50:24.737903   17336 api_server.go:52] waiting for apiserver process to appear ...
	I0703 22:50:24.737944   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0703 22:50:24.737993   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0703 22:50:24.822203   17336 cri.go:89] found id: "e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd"
	I0703 22:50:24.822224   17336 cri.go:89] found id: ""
	I0703 22:50:24.822232   17336 logs.go:276] 1 containers: [e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd]
	I0703 22:50:24.822277   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:24.829085   17336 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0703 22:50:24.829155   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0703 22:50:24.905207   17336 cri.go:89] found id: "4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986"
	I0703 22:50:24.905230   17336 cri.go:89] found id: ""
	I0703 22:50:24.905238   17336 logs.go:276] 1 containers: [4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986]
	I0703 22:50:24.905281   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:24.925589   17336 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0703 22:50:24.925643   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0703 22:50:25.005660   17336 cri.go:89] found id: "9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8"
	I0703 22:50:25.005683   17336 cri.go:89] found id: ""
	I0703 22:50:25.005692   17336 logs.go:276] 1 containers: [9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8]
	I0703 22:50:25.005746   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:25.010724   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0703 22:50:25.010773   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0703 22:50:25.061985   17336 cri.go:89] found id: "ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a"
	I0703 22:50:25.062009   17336 cri.go:89] found id: ""
	I0703 22:50:25.062019   17336 logs.go:276] 1 containers: [ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a]
	I0703 22:50:25.062093   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:25.068711   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0703 22:50:25.068780   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0703 22:50:25.130089   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:25.130447   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:25.138228   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:25.140040   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:25.180997   17336 cri.go:89] found id: "b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98"
	I0703 22:50:25.181032   17336 cri.go:89] found id: ""
	I0703 22:50:25.181044   17336 logs.go:276] 1 containers: [b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98]
	I0703 22:50:25.181102   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:25.213268   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0703 22:50:25.213331   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0703 22:50:25.325462   17336 cri.go:89] found id: "aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc"
	I0703 22:50:25.325488   17336 cri.go:89] found id: ""
	I0703 22:50:25.325504   17336 logs.go:276] 1 containers: [aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc]
	I0703 22:50:25.325550   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:25.347757   17336 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0703 22:50:25.347822   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0703 22:50:25.409473   17336 cri.go:89] found id: ""
	I0703 22:50:25.409500   17336 logs.go:276] 0 containers: []
	W0703 22:50:25.409512   17336 logs.go:278] No container was found matching "kindnet"
	I0703 22:50:25.409520   17336 logs.go:123] Gathering logs for kube-scheduler [ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a] ...
	I0703 22:50:25.409533   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a"
	I0703 22:50:25.446429   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:25.477265   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:25.494614   17336 logs.go:123] Gathering logs for kube-controller-manager [aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc] ...
	I0703 22:50:25.494644   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc"
	I0703 22:50:25.533949   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:25.536838   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:25.582956   17336 logs.go:123] Gathering logs for CRI-O ...
	I0703 22:50:25.582990   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0703 22:50:25.946523   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:25.975845   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:25.999960   17336 logs.go:123] Gathering logs for kubelet ...
	I0703 22:50:25.999995   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0703 22:50:26.035646   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:26.037334   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0703 22:50:26.083096   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.863888    1274 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:26.083267   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.863927    1274 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:26.083405   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.864006    1274 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:26.083554   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.864017    1274 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	I0703 22:50:26.103776   17336 logs.go:123] Gathering logs for describe nodes ...
	I0703 22:50:26.103801   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0703 22:50:26.285398   17336 logs.go:123] Gathering logs for kube-apiserver [e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd] ...
	I0703 22:50:26.285434   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd"
	I0703 22:50:26.357109   17336 logs.go:123] Gathering logs for etcd [4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986] ...
	I0703 22:50:26.357147   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986"
	I0703 22:50:26.416892   17336 logs.go:123] Gathering logs for dmesg ...
	I0703 22:50:26.416929   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0703 22:50:26.437323   17336 logs.go:123] Gathering logs for coredns [9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8] ...
	I0703 22:50:26.437356   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8"
	I0703 22:50:26.447047   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:26.476810   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:26.515490   17336 logs.go:123] Gathering logs for kube-proxy [b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98] ...
	I0703 22:50:26.515537   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98"
	I0703 22:50:26.534059   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:26.535531   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:26.625316   17336 logs.go:123] Gathering logs for container status ...
	I0703 22:50:26.625359   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0703 22:50:26.768987   17336 out.go:304] Setting ErrFile to fd 2...
	I0703 22:50:26.769028   17336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0703 22:50:26.769092   17336 out.go:239] X Problems detected in kubelet:
	W0703 22:50:26.769106   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.863888    1274 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:26.769120   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.863927    1274 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:26.769133   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.864006    1274 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:26.769144   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.864017    1274 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	I0703 22:50:26.769151   17336 out.go:304] Setting ErrFile to fd 2...
	I0703 22:50:26.769160   17336 out.go:338] TERM=,COLORTERM=, which probably does not support color
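Each log-gathering pass above follows the same two-step shape: resolve a component's container ID with "sudo crictl ps -a --quiet --name=<component>", then fetch its recent output with "sudo crictl logs --tail 400 <id>". A small os/exec sketch of that loop follows, reusing the exact crictl flags shown in the log; it is an illustration of the pattern, not minikube's own logs.go code.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // tailComponentLogs looks up a container by name with crictl and tails its logs,
    // mirroring the two commands recorded in the log entries above.
    func tailComponentLogs(name string, lines int) (string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return "", fmt.Errorf("listing %s containers: %w", name, err)
    	}
    	id := strings.TrimSpace(string(out))
    	if id == "" {
    		return "", fmt.Errorf("no container was found matching %q", name)
    	}
    	logs, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(lines), id).CombinedOutput()
    	return string(logs), err
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
    		logs, err := tailComponentLogs(c, 400)
    		if err != nil {
    			fmt.Println("WARN:", err)
    			continue
    		}
    		fmt.Printf("=== %s ===\n%s\n", c, logs)
    	}
    }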
	I0703 22:50:26.950107   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:26.975672   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:27.033694   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:27.036992   17336 kapi.go:107] duration metric: took 1m38.506095938s to wait for kubernetes.io/minikube-addons=registry ...
	I0703 22:50:27.446315   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:27.475719   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:27.534016   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:27.952654   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:27.976341   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:28.034054   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:28.446433   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:28.477485   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:28.533098   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:28.948017   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:28.975988   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:29.041303   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:29.447722   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:29.476001   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:29.533998   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:29.946581   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:29.976180   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:30.034273   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:30.447950   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:30.475934   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:30.533730   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:30.947483   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:30.977893   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:31.033760   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:31.449576   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:31.476609   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:31.534639   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:31.945714   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:31.976002   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:32.041907   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:32.447602   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:32.477082   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:32.534845   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:32.946335   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:32.975922   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:33.033916   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:33.446387   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:33.477893   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:33.533963   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:33.951636   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:33.975986   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:34.034002   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:34.446699   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:34.476160   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:34.534075   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:34.945577   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:34.977765   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:35.034215   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:35.450802   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:35.480165   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:35.533510   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:35.947233   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:35.977744   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:36.033632   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:36.455621   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:36.476845   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:36.533495   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:36.770422   17336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 22:50:36.803895   17336 api_server.go:72] duration metric: took 1m57.15967772s to wait for apiserver process to appear ...
	I0703 22:50:36.803925   17336 api_server.go:88] waiting for apiserver healthz status ...
	I0703 22:50:36.803953   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0703 22:50:36.804007   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0703 22:50:36.884791   17336 cri.go:89] found id: "e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd"
	I0703 22:50:36.884816   17336 cri.go:89] found id: ""
	I0703 22:50:36.884826   17336 logs.go:276] 1 containers: [e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd]
	I0703 22:50:36.884882   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:36.891213   17336 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0703 22:50:36.891273   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0703 22:50:36.947218   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:36.962408   17336 cri.go:89] found id: "4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986"
	I0703 22:50:36.962429   17336 cri.go:89] found id: ""
	I0703 22:50:36.962438   17336 logs.go:276] 1 containers: [4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986]
	I0703 22:50:36.962492   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:36.967563   17336 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0703 22:50:36.967625   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0703 22:50:36.977124   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:37.034376   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:37.039586   17336 cri.go:89] found id: "9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8"
	I0703 22:50:37.039604   17336 cri.go:89] found id: ""
	I0703 22:50:37.039611   17336 logs.go:276] 1 containers: [9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8]
	I0703 22:50:37.039669   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:37.051800   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0703 22:50:37.051899   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0703 22:50:37.134033   17336 cri.go:89] found id: "ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a"
	I0703 22:50:37.134052   17336 cri.go:89] found id: ""
	I0703 22:50:37.134061   17336 logs.go:276] 1 containers: [ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a]
	I0703 22:50:37.134118   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:37.141221   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0703 22:50:37.141293   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0703 22:50:37.214493   17336 cri.go:89] found id: "b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98"
	I0703 22:50:37.214516   17336 cri.go:89] found id: ""
	I0703 22:50:37.214523   17336 logs.go:276] 1 containers: [b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98]
	I0703 22:50:37.214585   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:37.220005   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0703 22:50:37.220065   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0703 22:50:37.263993   17336 cri.go:89] found id: "aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc"
	I0703 22:50:37.264018   17336 cri.go:89] found id: ""
	I0703 22:50:37.264027   17336 logs.go:276] 1 containers: [aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc]
	I0703 22:50:37.264089   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:37.268739   17336 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0703 22:50:37.268802   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0703 22:50:37.314334   17336 cri.go:89] found id: ""
	I0703 22:50:37.314359   17336 logs.go:276] 0 containers: []
	W0703 22:50:37.314366   17336 logs.go:278] No container was found matching "kindnet"
	I0703 22:50:37.314373   17336 logs.go:123] Gathering logs for kube-apiserver [e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd] ...
	I0703 22:50:37.314384   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd"
	I0703 22:50:37.373659   17336 logs.go:123] Gathering logs for coredns [9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8] ...
	I0703 22:50:37.373690   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8"
	I0703 22:50:37.418095   17336 logs.go:123] Gathering logs for kube-scheduler [ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a] ...
	I0703 22:50:37.418122   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a"
	I0703 22:50:37.469219   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:37.477491   17336 logs.go:123] Gathering logs for kube-proxy [b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98] ...
	I0703 22:50:37.477526   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98"
	I0703 22:50:37.487827   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:37.533419   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:37.565411   17336 logs.go:123] Gathering logs for kubelet ...
	I0703 22:50:37.565446   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0703 22:50:37.632687   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.863888    1274 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:37.632857   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.863927    1274 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:37.632991   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.864006    1274 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:37.633138   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.864017    1274 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	I0703 22:50:37.656527   17336 logs.go:123] Gathering logs for dmesg ...
	I0703 22:50:37.656573   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0703 22:50:37.680228   17336 logs.go:123] Gathering logs for kube-controller-manager [aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc] ...
	I0703 22:50:37.680258   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc"
	I0703 22:50:37.806676   17336 logs.go:123] Gathering logs for CRI-O ...
	I0703 22:50:37.806712   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0703 22:50:37.947583   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:37.975923   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:38.034422   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:38.114716   17336 logs.go:123] Gathering logs for container status ...
	I0703 22:50:38.114748   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0703 22:50:38.216429   17336 logs.go:123] Gathering logs for describe nodes ...
	I0703 22:50:38.216461   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0703 22:50:38.450576   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:38.480757   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:38.500584   17336 logs.go:123] Gathering logs for etcd [4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986] ...
	I0703 22:50:38.500613   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986"
	I0703 22:50:38.534194   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:38.590054   17336 out.go:304] Setting ErrFile to fd 2...
	I0703 22:50:38.590084   17336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0703 22:50:38.590134   17336 out.go:239] X Problems detected in kubelet:
	W0703 22:50:38.590146   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.863888    1274 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:38.590154   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.863927    1274 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:38.590161   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.864006    1274 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:38.590168   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.864017    1274 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	I0703 22:50:38.590174   17336 out.go:304] Setting ErrFile to fd 2...
	I0703 22:50:38.590180   17336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:50:38.946470   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:38.976079   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:39.034592   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:39.447744   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:39.478094   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:39.533853   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:39.946546   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:39.976325   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:40.039178   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:40.446653   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:40.476179   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:40.537066   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:40.963990   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:40.983324   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:41.036210   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:41.446275   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:41.475842   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:41.535534   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:41.947593   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:41.977331   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:42.034508   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:42.447100   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:42.482651   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:42.533659   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:42.948161   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:42.976651   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:43.033474   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:43.446558   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:43.476291   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:43.540326   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:43.948530   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:43.976744   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:44.033691   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:44.447528   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:44.477980   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:44.533899   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:44.946792   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:44.980206   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:45.033900   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:45.446622   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:45.476435   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:45.533761   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:45.945969   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:45.975981   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:46.036366   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:46.447386   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:46.476529   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:46.534608   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:46.947727   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:46.978134   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:47.033932   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:47.447113   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:47.476488   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:47.533446   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:47.946623   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:47.978537   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:48.034209   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:48.446625   17336 kapi.go:107] duration metric: took 1m57.506014102s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0703 22:50:48.476167   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:48.533970   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:48.591426   17336 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0703 22:50:48.596929   17336 api_server.go:279] https://192.168.39.226:8443/healthz returned 200:
	ok
	I0703 22:50:48.597909   17336 api_server.go:141] control plane version: v1.30.2
	I0703 22:50:48.597929   17336 api_server.go:131] duration metric: took 11.793998606s to wait for apiserver health ...
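The healthz probe recorded just above is a plain HTTPS GET against the apiserver's /healthz endpoint, with an HTTP 200 response and a body of "ok" treated as healthy. Below is a minimal sketch of that request using the endpoint from the log; TLS verification is skipped purely to keep the example short, whereas the real check trusts the cluster's CA certificate.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// InsecureSkipVerify is for illustration only; the real client uses the cluster CA.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.39.226:8443/healthz")
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }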
	I0703 22:50:48.597937   17336 system_pods.go:43] waiting for kube-system pods to appear ...
	I0703 22:50:48.597956   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0703 22:50:48.597998   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0703 22:50:48.642355   17336 cri.go:89] found id: "e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd"
	I0703 22:50:48.642378   17336 cri.go:89] found id: ""
	I0703 22:50:48.642387   17336 logs.go:276] 1 containers: [e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd]
	I0703 22:50:48.642442   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:48.647082   17336 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0703 22:50:48.647141   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0703 22:50:48.690501   17336 cri.go:89] found id: "4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986"
	I0703 22:50:48.690530   17336 cri.go:89] found id: ""
	I0703 22:50:48.690541   17336 logs.go:276] 1 containers: [4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986]
	I0703 22:50:48.690609   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:48.694861   17336 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0703 22:50:48.694919   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0703 22:50:48.738851   17336 cri.go:89] found id: "9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8"
	I0703 22:50:48.738877   17336 cri.go:89] found id: ""
	I0703 22:50:48.738887   17336 logs.go:276] 1 containers: [9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8]
	I0703 22:50:48.738945   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:48.743224   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0703 22:50:48.743298   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0703 22:50:48.787368   17336 cri.go:89] found id: "ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a"
	I0703 22:50:48.787392   17336 cri.go:89] found id: ""
	I0703 22:50:48.787400   17336 logs.go:276] 1 containers: [ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a]
	I0703 22:50:48.787448   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:48.792167   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0703 22:50:48.792241   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0703 22:50:48.842186   17336 cri.go:89] found id: "b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98"
	I0703 22:50:48.842213   17336 cri.go:89] found id: ""
	I0703 22:50:48.842221   17336 logs.go:276] 1 containers: [b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98]
	I0703 22:50:48.842277   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:48.846478   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0703 22:50:48.846549   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0703 22:50:48.889257   17336 cri.go:89] found id: "aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc"
	I0703 22:50:48.889284   17336 cri.go:89] found id: ""
	I0703 22:50:48.889295   17336 logs.go:276] 1 containers: [aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc]
	I0703 22:50:48.889359   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:48.894028   17336 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0703 22:50:48.894108   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0703 22:50:48.948768   17336 cri.go:89] found id: ""
	I0703 22:50:48.948793   17336 logs.go:276] 0 containers: []
	W0703 22:50:48.948801   17336 logs.go:278] No container was found matching "kindnet"
	I0703 22:50:48.948809   17336 logs.go:123] Gathering logs for kube-proxy [b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98] ...
	I0703 22:50:48.948821   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98"
	I0703 22:50:48.977663   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:48.989746   17336 logs.go:123] Gathering logs for CRI-O ...
	I0703 22:50:48.989773   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0703 22:50:49.035419   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:49.370921   17336 logs.go:123] Gathering logs for kubelet ...
	I0703 22:50:49.370958   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0703 22:50:49.431010   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.863888    1274 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:49.431178   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.863927    1274 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:49.431318   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.864006    1274 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:49.431465   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.864017    1274 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	I0703 22:50:49.459429   17336 logs.go:123] Gathering logs for describe nodes ...
	I0703 22:50:49.459451   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0703 22:50:49.479136   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:49.534954   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:49.582449   17336 logs.go:123] Gathering logs for kube-apiserver [e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd] ...
	I0703 22:50:49.582490   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd"
	I0703 22:50:49.632358   17336 logs.go:123] Gathering logs for etcd [4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986] ...
	I0703 22:50:49.632408   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986"
	I0703 22:50:49.699347   17336 logs.go:123] Gathering logs for container status ...
	I0703 22:50:49.699395   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0703 22:50:49.765145   17336 logs.go:123] Gathering logs for dmesg ...
	I0703 22:50:49.765187   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0703 22:50:49.780726   17336 logs.go:123] Gathering logs for coredns [9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8] ...
	I0703 22:50:49.780761   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8"
	I0703 22:50:49.827018   17336 logs.go:123] Gathering logs for kube-scheduler [ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a] ...
	I0703 22:50:49.827051   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a"
	I0703 22:50:49.877013   17336 logs.go:123] Gathering logs for kube-controller-manager [aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc] ...
	I0703 22:50:49.877056   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc"
	I0703 22:50:49.955986   17336 out.go:304] Setting ErrFile to fd 2...
	I0703 22:50:49.956015   17336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0703 22:50:49.956074   17336 out.go:239] X Problems detected in kubelet:
	W0703 22:50:49.956090   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.863888    1274 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:49.956105   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.863927    1274 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:49.956116   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.864006    1274 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:49.956128   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.864017    1274 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	I0703 22:50:49.956138   17336 out.go:304] Setting ErrFile to fd 2...
	I0703 22:50:49.956150   17336 out.go:338] TERM=,COLORTERM=, which probably does not support color
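	(Editor's note on the kubelet warnings summarized just above: all four end with the node-authorizer message "no relationship found between node 'addons-224553' and this object". The kubelet is only permitted to read a ConfigMap once a pod bound to that node actually references it, so entries like these typically clear on their own once the local-path-storage addon pods are scheduled. As a hedged illustration only, using the context name, namespace, and ConfigMap names taken from the log above and nothing else from this run, the relevant state could be inspected with:

	  kubectl --context addons-224553 -n local-path-storage get pods -o wide
	  kubectl --context addons-224553 -n local-path-storage get configmap local-path-config kube-root-ca.crt

	The first command shows whether the provisioner pods have been placed on the node yet; the second confirms the ConfigMaps the kubelet was denied exist.)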
	I0703 22:50:49.976460   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:50.033004   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:50.476466   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:50.534748   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:50.976152   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:51.033981   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:51.476198   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:51.534706   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:51.976132   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:52.034029   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:52.478578   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:52.533613   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:52.975749   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:53.033689   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:53.476228   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:53.534804   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:53.976264   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:54.034559   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:54.475823   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:54.534650   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:54.977064   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:55.034751   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:55.476624   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:55.533857   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:55.976303   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:56.034476   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:56.477654   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:56.533812   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:56.977119   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:57.034746   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:57.477477   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:57.533313   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:57.975675   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:58.033547   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:58.482286   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:58.541917   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:58.975943   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:59.034115   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:59.475571   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:59.533794   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:59.968951   17336 system_pods.go:59] 18 kube-system pods found
	I0703 22:50:59.968983   17336 system_pods.go:61] "coredns-7db6d8ff4d-4lgcj" [e61e787d-1169-403e-b844-fc0bbd9acd53] Running
	I0703 22:50:59.968988   17336 system_pods.go:61] "csi-hostpath-attacher-0" [a732ee6a-3989-47bc-8045-b0bff06ce3a8] Running
	I0703 22:50:59.968991   17336 system_pods.go:61] "csi-hostpath-resizer-0" [a87e8cb8-6d1c-4ea9-8ed5-f4b57047b25c] Running
	I0703 22:50:59.968994   17336 system_pods.go:61] "csi-hostpathplugin-7m9sj" [a4df336e-c37d-4791-aa35-c5c94fec899d] Running
	I0703 22:50:59.968998   17336 system_pods.go:61] "etcd-addons-224553" [30dfb5b9-60dc-48d6-a7cf-da22586e912f] Running
	I0703 22:50:59.969001   17336 system_pods.go:61] "kube-apiserver-addons-224553" [a41530ad-6337-409d-84af-c9448ccdb391] Running
	I0703 22:50:59.969004   17336 system_pods.go:61] "kube-controller-manager-addons-224553" [3338cf19-da7c-4a93-9a72-75fd5e3a4003] Running
	I0703 22:50:59.969007   17336 system_pods.go:61] "kube-ingress-dns-minikube" [a43e86c9-2281-41ce-a535-a1913563dd49] Running
	I0703 22:50:59.969010   17336 system_pods.go:61] "kube-proxy-ll2cf" [a5b82480-c0ed-4129-b570-a2f3d3a64d9e] Running
	I0703 22:50:59.969013   17336 system_pods.go:61] "kube-scheduler-addons-224553" [35b790ae-c539-416d-8644-8ac5a75be87d] Running
	I0703 22:50:59.969017   17336 system_pods.go:61] "metrics-server-c59844bb4-qv65x" [78c1c74d-f40a-4283-8091-ecace04f1283] Running
	I0703 22:50:59.969021   17336 system_pods.go:61] "nvidia-device-plugin-daemonset-sbhcl" [71040d78-0cef-4e87-863c-271f1ea0dc3f] Running
	I0703 22:50:59.969024   17336 system_pods.go:61] "registry-p9skr" [d68fdfd4-7879-4930-8113-149c5c04b06a] Running
	I0703 22:50:59.969027   17336 system_pods.go:61] "registry-proxy-zj8bk" [2cccffc8-167d-483e-81c9-bcb8a862200f] Running
	I0703 22:50:59.969030   17336 system_pods.go:61] "snapshot-controller-745499f584-jq4z5" [c9adb0c6-984a-498a-8703-b47979144b23] Running
	I0703 22:50:59.969034   17336 system_pods.go:61] "snapshot-controller-745499f584-l6f2b" [519dca42-0117-49cc-90ae-e3b4f43b2a38] Running
	I0703 22:50:59.969037   17336 system_pods.go:61] "storage-provisioner" [05e06fda-a0cf-4385-8cc1-55d7f00dbd4b] Running
	I0703 22:50:59.969041   17336 system_pods.go:61] "tiller-deploy-6677d64bcd-4g4h4" [2a14a1e3-ef96-40b2-b4ba-2790881ec44c] Running
	I0703 22:50:59.969047   17336 system_pods.go:74] duration metric: took 11.371104382s to wait for pod list to return data ...
	I0703 22:50:59.969057   17336 default_sa.go:34] waiting for default service account to be created ...
	I0703 22:50:59.971113   17336 default_sa.go:45] found service account: "default"
	I0703 22:50:59.971132   17336 default_sa.go:55] duration metric: took 2.07021ms for default service account to be created ...
	I0703 22:50:59.971139   17336 system_pods.go:116] waiting for k8s-apps to be running ...
	I0703 22:50:59.978059   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:59.985486   17336 system_pods.go:86] 18 kube-system pods found
	I0703 22:50:59.985523   17336 system_pods.go:89] "coredns-7db6d8ff4d-4lgcj" [e61e787d-1169-403e-b844-fc0bbd9acd53] Running
	I0703 22:50:59.985529   17336 system_pods.go:89] "csi-hostpath-attacher-0" [a732ee6a-3989-47bc-8045-b0bff06ce3a8] Running
	I0703 22:50:59.985533   17336 system_pods.go:89] "csi-hostpath-resizer-0" [a87e8cb8-6d1c-4ea9-8ed5-f4b57047b25c] Running
	I0703 22:50:59.985538   17336 system_pods.go:89] "csi-hostpathplugin-7m9sj" [a4df336e-c37d-4791-aa35-c5c94fec899d] Running
	I0703 22:50:59.985544   17336 system_pods.go:89] "etcd-addons-224553" [30dfb5b9-60dc-48d6-a7cf-da22586e912f] Running
	I0703 22:50:59.985551   17336 system_pods.go:89] "kube-apiserver-addons-224553" [a41530ad-6337-409d-84af-c9448ccdb391] Running
	I0703 22:50:59.985558   17336 system_pods.go:89] "kube-controller-manager-addons-224553" [3338cf19-da7c-4a93-9a72-75fd5e3a4003] Running
	I0703 22:50:59.985565   17336 system_pods.go:89] "kube-ingress-dns-minikube" [a43e86c9-2281-41ce-a535-a1913563dd49] Running
	I0703 22:50:59.985571   17336 system_pods.go:89] "kube-proxy-ll2cf" [a5b82480-c0ed-4129-b570-a2f3d3a64d9e] Running
	I0703 22:50:59.985577   17336 system_pods.go:89] "kube-scheduler-addons-224553" [35b790ae-c539-416d-8644-8ac5a75be87d] Running
	I0703 22:50:59.985585   17336 system_pods.go:89] "metrics-server-c59844bb4-qv65x" [78c1c74d-f40a-4283-8091-ecace04f1283] Running
	I0703 22:50:59.985590   17336 system_pods.go:89] "nvidia-device-plugin-daemonset-sbhcl" [71040d78-0cef-4e87-863c-271f1ea0dc3f] Running
	I0703 22:50:59.985595   17336 system_pods.go:89] "registry-p9skr" [d68fdfd4-7879-4930-8113-149c5c04b06a] Running
	I0703 22:50:59.985599   17336 system_pods.go:89] "registry-proxy-zj8bk" [2cccffc8-167d-483e-81c9-bcb8a862200f] Running
	I0703 22:50:59.985604   17336 system_pods.go:89] "snapshot-controller-745499f584-jq4z5" [c9adb0c6-984a-498a-8703-b47979144b23] Running
	I0703 22:50:59.985608   17336 system_pods.go:89] "snapshot-controller-745499f584-l6f2b" [519dca42-0117-49cc-90ae-e3b4f43b2a38] Running
	I0703 22:50:59.985614   17336 system_pods.go:89] "storage-provisioner" [05e06fda-a0cf-4385-8cc1-55d7f00dbd4b] Running
	I0703 22:50:59.985618   17336 system_pods.go:89] "tiller-deploy-6677d64bcd-4g4h4" [2a14a1e3-ef96-40b2-b4ba-2790881ec44c] Running
	I0703 22:50:59.985627   17336 system_pods.go:126] duration metric: took 14.483606ms to wait for k8s-apps to be running ...
	I0703 22:50:59.985636   17336 system_svc.go:44] waiting for kubelet service to be running ....
	I0703 22:50:59.985682   17336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 22:51:00.001367   17336 system_svc.go:56] duration metric: took 15.722387ms WaitForService to wait for kubelet
	I0703 22:51:00.001394   17336 kubeadm.go:576] duration metric: took 2m20.357181851s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 22:51:00.001419   17336 node_conditions.go:102] verifying NodePressure condition ...
	I0703 22:51:00.006620   17336 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 22:51:00.006648   17336 node_conditions.go:123] node cpu capacity is 2
	I0703 22:51:00.006660   17336 node_conditions.go:105] duration metric: took 5.236656ms to run NodePressure ...
	I0703 22:51:00.006673   17336 start.go:240] waiting for startup goroutines ...
	I0703 22:51:00.035655   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:00.479975   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:00.533866   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:00.976435   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:01.034426   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:01.475312   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:01.534000   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:01.976797   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:02.033723   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:02.476256   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:02.534882   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:02.977247   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:03.034422   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:03.476893   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:03.534639   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:03.976528   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:04.033786   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:04.475841   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:04.534031   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:04.978120   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:05.034313   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:05.475388   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:05.534367   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:05.976403   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:06.033423   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:06.476170   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:06.533926   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:06.975768   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:07.033807   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:07.476043   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:07.534225   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:07.977524   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:08.037803   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:08.497373   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:08.533751   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:08.976697   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:09.033700   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:09.896609   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:09.896763   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:09.976136   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:10.034019   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:10.476180   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:10.534170   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:10.975802   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:11.033823   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:11.475680   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:11.533740   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:12.421597   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:12.422395   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:12.482549   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:12.542004   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:12.976712   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:13.036568   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:13.475685   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:13.533943   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:13.976395   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:14.033182   17336 kapi.go:107] duration metric: took 2m25.504054698s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0703 22:51:14.476491   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:14.976551   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:15.476854   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:15.976438   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:16.475981   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:16.976612   17336 kapi.go:107] duration metric: took 2m24.504657336s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0703 22:51:16.978472   17336 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-224553 cluster.
	I0703 22:51:16.979716   17336 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0703 22:51:16.980847   17336 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0703 22:51:16.981963   17336 out.go:177] * Enabled addons: default-storageclass, nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, metrics-server, inspektor-gadget, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0703 22:51:16.983000   17336 addons.go:510] duration metric: took 2m37.338723052s for enable addons: enabled=[default-storageclass nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner metrics-server inspektor-gadget helm-tiller yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0703 22:51:16.983027   17336 start.go:245] waiting for cluster config update ...
	I0703 22:51:16.983043   17336 start.go:254] writing updated cluster config ...
	I0703 22:51:16.983270   17336 ssh_runner.go:195] Run: rm -f paused
	I0703 22:51:17.033658   17336 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0703 22:51:17.035567   17336 out.go:177] * Done! kubectl is now configured to use "addons-224553" cluster and "default" namespace by default
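	(Editor's note on the gcp-auth hints printed above: the opt-out is a label applied to the pod itself, and the refresh path is a re-run of the addon enable. A minimal sketch, assuming a hypothetical pod named my-pod and assuming the conventional "true" value for the label, since the log only names the `gcp-auth-skip-secret` key:

	  kubectl --context addons-224553 label pod my-pod gcp-auth-skip-secret=true
	  minikube -p addons-224553 addons enable gcp-auth --refresh

	The `--refresh` flag is the one the output itself refers to for re-mounting credentials into pods that already exist.)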
	
	
	==> CRI-O <==
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.445738776Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720047252445700999,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584678,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f997cb8-f882-4787-8a4b-eb721e7a98c2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.447508901Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f189d106-49ca-47c1-b584-3c283c75275f name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.447578580Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f189d106-49ca-47c1-b584-3c283c75275f name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.448000081Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f52235fa0a9613b937fdbbd17e19c5339d3894877bc2ce3ee756b8c4f3400a2b,PodSandboxId:68614950f41c59c3c8ba3242e854be0e4bf43cb4b7bb5cd7696de1c0df39d208,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1720047246339888860,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-bp4c7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 247bf170-0735-4073-ae3b-a13c60e4856e,},Annotations:map[string]string{io.kubernetes.container.hash: 2f85a108,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a22a23f9b9158ac0e97415e2d2d1c1480b547c2d2e6446fd46e866155b9ba88b,PodSandboxId:41c255833c60d2d68345a05c9cbd26bdc02e65b0e625217558b6f6b05fbf830c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1720047113845846349,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-jgcbc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 1b695b04-d5ab-420b-8a5e-b5b4d5061b10,},Annota
tions:map[string]string{io.kubernetes.container.hash: 291157fc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad572c678c9cfcffc7496c7f4eb5376076ce60091f4a611eacdd724e69dea207,PodSandboxId:e1acd9efaa5936887236ca9544ecbf9d75822dbb764a2594e24685cbb634f59d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1720047103552714987,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 899aaab3-f1d8-46f2-ae17-b22a85faa208,},Annotations:map[string]string{io.kubernetes.container.hash: 66e0cbc2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0145cba5261b4869d37c39b2c25416226a5534393a271fa297ce3bcdfbccc26d,PodSandboxId:abecec91949461e40de39cc9e86d2544aef37c45bed78393279e2b0f53bd883a,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1720047076542978051,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-r8pwn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 991d5fad-2189-401e-a80f-68d5d68c19a2,},Annotations:map[string]string{io.kubernetes.container.hash: 7ad102a3,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:767852826cab87e7a3c4a29c10a8cd1ddf6af9e537585131711a4748b8dd911b,PodSandboxId:e9a93233fdc223fc48d4eaf0a11052ffff898aa9fb1bd42966de890085974dff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1720047033798119047,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fnl5l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2ad25aa6-3a2f-4661-afbf-082bab0c83b8,},Annotations:map[string]string{io.kubernetes.container.hash: 1b2d0f6c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9faea61d4c984c36016a00c40a1509c309a4a453fb58ca1325ec5573b5545738,PodSandboxId:5aba474bb33da205f0eabc3fc03b9207f8f9513a647b6b427a555b8919c528a1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1720047033700560425,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qdj5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a6b99aa9-35bf-4de1-b466-3cfd07fa2791,},Annotations:map[string]string{io.kubernetes.container.hash: 791c3919,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c93eab29614ac567f68364e03dc0ac7d0da682b3cea65c8942687ed8d4b7b0,PodSandboxId:607376c1fc0a82aa086fde416a8f8bf51ce69b6855bc69f8fa8cef7d2f0892d5,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978f
bf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1720047019009017885,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-fwg4s,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d1102c91-2165-4a2c-adbf-945b4db26c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 17ccca4b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e7e7a13c49ee7980c64535894e8860a69e90361df2e27755455b4db34c89e1,PodSandboxId:4b3997824769e5d86ad97bc0ba7d23a2fc5a847667d6bba4917ef0ab5692cc9d,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172
e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1720046955995236518,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qv65x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c1c74d-f40a-4283-8091-ecace04f1283,},Annotations:map[string]string{io.kubernetes.container.hash: 64704d8e,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f8ea56161fc3f970558a77cb63a439a2d1a32385b056be0515c7eeed96f458,PodSandboxId:81dad2b1935bcd68efa151335245b03cd60ca40fce6ee4c03b2f2d5f06d40c3e,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720046927137249061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05e06fda-a0cf-4385-8cc1-55d7f00dbd4b,},Annotations:map[string]string{io.kubernetes.container.hash: 52766e41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8,PodSandboxId:344665b6c1ed8f298720baef2b2a7313d220512e7fcc393ede49ab3602639119,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720046924035087254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lgcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e61e787d-1169-403e-b844-fc0bbd9acd53,},Annotations:map[string]string{io.kubernetes.container.hash: 18abb899,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98,PodSandboxId:84559595da5fb80be845f50fafb25835ee132378011c2520c7e8a61f40e7fa5a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720046920041872821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll2cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b82480-c0ed-4129-b570-a2f3d3a64d9e,},Annotations:map[string]string{io.kubernetes.container.hash: f7dd78a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f21a3a13f52a7e93571d8a9a075625
a25a697fbe40c29683db366c04b701986,PodSandboxId:073d359ecbe7d6dccf2362b7590bcb46eb1b5caa557655c6ca67a1202e01b0a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720046899611606959,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff0448fe42247eb979c2fd89936b6fb,},Annotations:map[string]string{io.kubernetes.container.hash: bf6cf3c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a,PodSandboxId:
617b4d9325127ac353323f84d3444066b474ac032653d1141296db42c1ba2047,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720046899579190091,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090c0fd627e1381212e5d65203a04f22,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc,PodSandboxId:f49b94871cf9d37f57
cf6e13e98e2f9e1ff7dcc83e90aba44aecc395026fc43b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720046899613531948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de10e56abb835b85e60ca6ab00f4f6f6,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd,PodSandboxId:dc4e249b
cf07666e38bd4608c9ba90a1d80b56bfc980e5ccdae2fd57e2f58c36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720046899568725893,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37d97992a7cb908d598d3286e8564ec,},Annotations:map[string]string{io.kubernetes.container.hash: db5057a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f189d106-49ca-47c1-b584-3c283c75275f name=/runtime.v1.RuntimeService
/ListContainers
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.491571275Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb5842ac-6d55-410e-a6a8-6d272c300845 name=/runtime.v1.RuntimeService/Version
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.491651441Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb5842ac-6d55-410e-a6a8-6d272c300845 name=/runtime.v1.RuntimeService/Version
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.493092262Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58df1df0-41ef-4c74-9c2c-f3c5aa9e2be4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.494603924Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720047252494572665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584678,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58df1df0-41ef-4c74-9c2c-f3c5aa9e2be4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.495531619Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c270f7b-a744-46e8-8534-0442b79c6953 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.495597371Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c270f7b-a744-46e8-8534-0442b79c6953 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.495988924Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f52235fa0a9613b937fdbbd17e19c5339d3894877bc2ce3ee756b8c4f3400a2b,PodSandboxId:68614950f41c59c3c8ba3242e854be0e4bf43cb4b7bb5cd7696de1c0df39d208,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1720047246339888860,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-bp4c7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 247bf170-0735-4073-ae3b-a13c60e4856e,},Annotations:map[string]string{io.kubernetes.container.hash: 2f85a108,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a22a23f9b9158ac0e97415e2d2d1c1480b547c2d2e6446fd46e866155b9ba88b,PodSandboxId:41c255833c60d2d68345a05c9cbd26bdc02e65b0e625217558b6f6b05fbf830c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1720047113845846349,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-jgcbc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 1b695b04-d5ab-420b-8a5e-b5b4d5061b10,},Annota
tions:map[string]string{io.kubernetes.container.hash: 291157fc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad572c678c9cfcffc7496c7f4eb5376076ce60091f4a611eacdd724e69dea207,PodSandboxId:e1acd9efaa5936887236ca9544ecbf9d75822dbb764a2594e24685cbb634f59d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1720047103552714987,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 899aaab3-f1d8-46f2-ae17-b22a85faa208,},Annotations:map[string]string{io.kubernetes.container.hash: 66e0cbc2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0145cba5261b4869d37c39b2c25416226a5534393a271fa297ce3bcdfbccc26d,PodSandboxId:abecec91949461e40de39cc9e86d2544aef37c45bed78393279e2b0f53bd883a,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1720047076542978051,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-r8pwn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 991d5fad-2189-401e-a80f-68d5d68c19a2,},Annotations:map[string]string{io.kubernetes.container.hash: 7ad102a3,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:767852826cab87e7a3c4a29c10a8cd1ddf6af9e537585131711a4748b8dd911b,PodSandboxId:e9a93233fdc223fc48d4eaf0a11052ffff898aa9fb1bd42966de890085974dff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1720047033798119047,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fnl5l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2ad25aa6-3a2f-4661-afbf-082bab0c83b8,},Annotations:map[string]string{io.kubernetes.container.hash: 1b2d0f6c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9faea61d4c984c36016a00c40a1509c309a4a453fb58ca1325ec5573b5545738,PodSandboxId:5aba474bb33da205f0eabc3fc03b9207f8f9513a647b6b427a555b8919c528a1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1720047033700560425,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qdj5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a6b99aa9-35bf-4de1-b466-3cfd07fa2791,},Annotations:map[string]string{io.kubernetes.container.hash: 791c3919,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c93eab29614ac567f68364e03dc0ac7d0da682b3cea65c8942687ed8d4b7b0,PodSandboxId:607376c1fc0a82aa086fde416a8f8bf51ce69b6855bc69f8fa8cef7d2f0892d5,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978f
bf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1720047019009017885,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-fwg4s,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d1102c91-2165-4a2c-adbf-945b4db26c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 17ccca4b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e7e7a13c49ee7980c64535894e8860a69e90361df2e27755455b4db34c89e1,PodSandboxId:4b3997824769e5d86ad97bc0ba7d23a2fc5a847667d6bba4917ef0ab5692cc9d,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172
e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1720046955995236518,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qv65x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c1c74d-f40a-4283-8091-ecace04f1283,},Annotations:map[string]string{io.kubernetes.container.hash: 64704d8e,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f8ea56161fc3f970558a77cb63a439a2d1a32385b056be0515c7eeed96f458,PodSandboxId:81dad2b1935bcd68efa151335245b03cd60ca40fce6ee4c03b2f2d5f06d40c3e,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720046927137249061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05e06fda-a0cf-4385-8cc1-55d7f00dbd4b,},Annotations:map[string]string{io.kubernetes.container.hash: 52766e41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8,PodSandboxId:344665b6c1ed8f298720baef2b2a7313d220512e7fcc393ede49ab3602639119,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720046924035087254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lgcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e61e787d-1169-403e-b844-fc0bbd9acd53,},Annotations:map[string]string{io.kubernetes.container.hash: 18abb899,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98,PodSandboxId:84559595da5fb80be845f50fafb25835ee132378011c2520c7e8a61f40e7fa5a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720046920041872821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll2cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b82480-c0ed-4129-b570-a2f3d3a64d9e,},Annotations:map[string]string{io.kubernetes.container.hash: f7dd78a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f21a3a13f52a7e93571d8a9a075625
a25a697fbe40c29683db366c04b701986,PodSandboxId:073d359ecbe7d6dccf2362b7590bcb46eb1b5caa557655c6ca67a1202e01b0a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720046899611606959,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff0448fe42247eb979c2fd89936b6fb,},Annotations:map[string]string{io.kubernetes.container.hash: bf6cf3c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a,PodSandboxId:
617b4d9325127ac353323f84d3444066b474ac032653d1141296db42c1ba2047,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720046899579190091,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090c0fd627e1381212e5d65203a04f22,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc,PodSandboxId:f49b94871cf9d37f57
cf6e13e98e2f9e1ff7dcc83e90aba44aecc395026fc43b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720046899613531948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de10e56abb835b85e60ca6ab00f4f6f6,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd,PodSandboxId:dc4e249b
cf07666e38bd4608c9ba90a1d80b56bfc980e5ccdae2fd57e2f58c36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720046899568725893,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37d97992a7cb908d598d3286e8564ec,},Annotations:map[string]string{io.kubernetes.container.hash: db5057a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c270f7b-a744-46e8-8534-0442b79c6953 name=/runtime.v1.RuntimeService
/ListContainers
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.533365077Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=836ce1cd-689b-44f6-ae64-cc93d2ca35f2 name=/runtime.v1.RuntimeService/Version
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.533458530Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=836ce1cd-689b-44f6-ae64-cc93d2ca35f2 name=/runtime.v1.RuntimeService/Version
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.534970576Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=238c9bb2-35a6-4dd1-8845-9c612c0c6562 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.536587348Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720047252536554012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584678,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=238c9bb2-35a6-4dd1-8845-9c612c0c6562 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.537126489Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b4433dc-2441-4b16-8c7e-5de2d010c156 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.537181116Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b4433dc-2441-4b16-8c7e-5de2d010c156 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.537652684Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f52235fa0a9613b937fdbbd17e19c5339d3894877bc2ce3ee756b8c4f3400a2b,PodSandboxId:68614950f41c59c3c8ba3242e854be0e4bf43cb4b7bb5cd7696de1c0df39d208,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1720047246339888860,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-bp4c7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 247bf170-0735-4073-ae3b-a13c60e4856e,},Annotations:map[string]string{io.kubernetes.container.hash: 2f85a108,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a22a23f9b9158ac0e97415e2d2d1c1480b547c2d2e6446fd46e866155b9ba88b,PodSandboxId:41c255833c60d2d68345a05c9cbd26bdc02e65b0e625217558b6f6b05fbf830c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1720047113845846349,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-jgcbc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 1b695b04-d5ab-420b-8a5e-b5b4d5061b10,},Annota
tions:map[string]string{io.kubernetes.container.hash: 291157fc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad572c678c9cfcffc7496c7f4eb5376076ce60091f4a611eacdd724e69dea207,PodSandboxId:e1acd9efaa5936887236ca9544ecbf9d75822dbb764a2594e24685cbb634f59d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1720047103552714987,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 899aaab3-f1d8-46f2-ae17-b22a85faa208,},Annotations:map[string]string{io.kubernetes.container.hash: 66e0cbc2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0145cba5261b4869d37c39b2c25416226a5534393a271fa297ce3bcdfbccc26d,PodSandboxId:abecec91949461e40de39cc9e86d2544aef37c45bed78393279e2b0f53bd883a,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1720047076542978051,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-r8pwn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 991d5fad-2189-401e-a80f-68d5d68c19a2,},Annotations:map[string]string{io.kubernetes.container.hash: 7ad102a3,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:767852826cab87e7a3c4a29c10a8cd1ddf6af9e537585131711a4748b8dd911b,PodSandboxId:e9a93233fdc223fc48d4eaf0a11052ffff898aa9fb1bd42966de890085974dff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1720047033798119047,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fnl5l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2ad25aa6-3a2f-4661-afbf-082bab0c83b8,},Annotations:map[string]string{io.kubernetes.container.hash: 1b2d0f6c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9faea61d4c984c36016a00c40a1509c309a4a453fb58ca1325ec5573b5545738,PodSandboxId:5aba474bb33da205f0eabc3fc03b9207f8f9513a647b6b427a555b8919c528a1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1720047033700560425,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qdj5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a6b99aa9-35bf-4de1-b466-3cfd07fa2791,},Annotations:map[string]string{io.kubernetes.container.hash: 791c3919,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c93eab29614ac567f68364e03dc0ac7d0da682b3cea65c8942687ed8d4b7b0,PodSandboxId:607376c1fc0a82aa086fde416a8f8bf51ce69b6855bc69f8fa8cef7d2f0892d5,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978f
bf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1720047019009017885,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-fwg4s,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d1102c91-2165-4a2c-adbf-945b4db26c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 17ccca4b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e7e7a13c49ee7980c64535894e8860a69e90361df2e27755455b4db34c89e1,PodSandboxId:4b3997824769e5d86ad97bc0ba7d23a2fc5a847667d6bba4917ef0ab5692cc9d,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172
e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1720046955995236518,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qv65x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c1c74d-f40a-4283-8091-ecace04f1283,},Annotations:map[string]string{io.kubernetes.container.hash: 64704d8e,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f8ea56161fc3f970558a77cb63a439a2d1a32385b056be0515c7eeed96f458,PodSandboxId:81dad2b1935bcd68efa151335245b03cd60ca40fce6ee4c03b2f2d5f06d40c3e,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720046927137249061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05e06fda-a0cf-4385-8cc1-55d7f00dbd4b,},Annotations:map[string]string{io.kubernetes.container.hash: 52766e41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8,PodSandboxId:344665b6c1ed8f298720baef2b2a7313d220512e7fcc393ede49ab3602639119,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720046924035087254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lgcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e61e787d-1169-403e-b844-fc0bbd9acd53,},Annotations:map[string]string{io.kubernetes.container.hash: 18abb899,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98,PodSandboxId:84559595da5fb80be845f50fafb25835ee132378011c2520c7e8a61f40e7fa5a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720046920041872821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll2cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b82480-c0ed-4129-b570-a2f3d3a64d9e,},Annotations:map[string]string{io.kubernetes.container.hash: f7dd78a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f21a3a13f52a7e93571d8a9a075625
a25a697fbe40c29683db366c04b701986,PodSandboxId:073d359ecbe7d6dccf2362b7590bcb46eb1b5caa557655c6ca67a1202e01b0a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720046899611606959,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff0448fe42247eb979c2fd89936b6fb,},Annotations:map[string]string{io.kubernetes.container.hash: bf6cf3c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a,PodSandboxId:
617b4d9325127ac353323f84d3444066b474ac032653d1141296db42c1ba2047,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720046899579190091,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090c0fd627e1381212e5d65203a04f22,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc,PodSandboxId:f49b94871cf9d37f57
cf6e13e98e2f9e1ff7dcc83e90aba44aecc395026fc43b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720046899613531948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de10e56abb835b85e60ca6ab00f4f6f6,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd,PodSandboxId:dc4e249b
cf07666e38bd4608c9ba90a1d80b56bfc980e5ccdae2fd57e2f58c36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720046899568725893,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37d97992a7cb908d598d3286e8564ec,},Annotations:map[string]string{io.kubernetes.container.hash: db5057a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b4433dc-2441-4b16-8c7e-5de2d010c156 name=/runtime.v1.RuntimeService
/ListContainers
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.583063350Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b36faa2d-92de-472f-9024-210fbcdec3c1 name=/runtime.v1.RuntimeService/Version
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.583145924Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b36faa2d-92de-472f-9024-210fbcdec3c1 name=/runtime.v1.RuntimeService/Version
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.585071871Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7b43a45-95b2-4329-aa88-b50cafff1dc5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.586398433Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720047252586364481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584678,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7b43a45-95b2-4329-aa88-b50cafff1dc5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.586930746Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c43d207-49ee-4f96-80ea-318fdf3d002e name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.587001813Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c43d207-49ee-4f96-80ea-318fdf3d002e name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 22:54:12 addons-224553 crio[682]: time="2024-07-03 22:54:12.587619512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f52235fa0a9613b937fdbbd17e19c5339d3894877bc2ce3ee756b8c4f3400a2b,PodSandboxId:68614950f41c59c3c8ba3242e854be0e4bf43cb4b7bb5cd7696de1c0df39d208,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1720047246339888860,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-bp4c7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 247bf170-0735-4073-ae3b-a13c60e4856e,},Annotations:map[string]string{io.kubernetes.container.hash: 2f85a108,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a22a23f9b9158ac0e97415e2d2d1c1480b547c2d2e6446fd46e866155b9ba88b,PodSandboxId:41c255833c60d2d68345a05c9cbd26bdc02e65b0e625217558b6f6b05fbf830c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1720047113845846349,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-jgcbc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 1b695b04-d5ab-420b-8a5e-b5b4d5061b10,},Annota
tions:map[string]string{io.kubernetes.container.hash: 291157fc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad572c678c9cfcffc7496c7f4eb5376076ce60091f4a611eacdd724e69dea207,PodSandboxId:e1acd9efaa5936887236ca9544ecbf9d75822dbb764a2594e24685cbb634f59d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1720047103552714987,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 899aaab3-f1d8-46f2-ae17-b22a85faa208,},Annotations:map[string]string{io.kubernetes.container.hash: 66e0cbc2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0145cba5261b4869d37c39b2c25416226a5534393a271fa297ce3bcdfbccc26d,PodSandboxId:abecec91949461e40de39cc9e86d2544aef37c45bed78393279e2b0f53bd883a,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1720047076542978051,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-r8pwn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 991d5fad-2189-401e-a80f-68d5d68c19a2,},Annotations:map[string]string{io.kubernetes.container.hash: 7ad102a3,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:767852826cab87e7a3c4a29c10a8cd1ddf6af9e537585131711a4748b8dd911b,PodSandboxId:e9a93233fdc223fc48d4eaf0a11052ffff898aa9fb1bd42966de890085974dff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1720047033798119047,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fnl5l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2ad25aa6-3a2f-4661-afbf-082bab0c83b8,},Annotations:map[string]string{io.kubernetes.container.hash: 1b2d0f6c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9faea61d4c984c36016a00c40a1509c309a4a453fb58ca1325ec5573b5545738,PodSandboxId:5aba474bb33da205f0eabc3fc03b9207f8f9513a647b6b427a555b8919c528a1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1720047033700560425,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qdj5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a6b99aa9-35bf-4de1-b466-3cfd07fa2791,},Annotations:map[string]string{io.kubernetes.container.hash: 791c3919,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c93eab29614ac567f68364e03dc0ac7d0da682b3cea65c8942687ed8d4b7b0,PodSandboxId:607376c1fc0a82aa086fde416a8f8bf51ce69b6855bc69f8fa8cef7d2f0892d5,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978f
bf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1720047019009017885,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-fwg4s,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d1102c91-2165-4a2c-adbf-945b4db26c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 17ccca4b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e7e7a13c49ee7980c64535894e8860a69e90361df2e27755455b4db34c89e1,PodSandboxId:4b3997824769e5d86ad97bc0ba7d23a2fc5a847667d6bba4917ef0ab5692cc9d,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172
e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1720046955995236518,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qv65x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c1c74d-f40a-4283-8091-ecace04f1283,},Annotations:map[string]string{io.kubernetes.container.hash: 64704d8e,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f8ea56161fc3f970558a77cb63a439a2d1a32385b056be0515c7eeed96f458,PodSandboxId:81dad2b1935bcd68efa151335245b03cd60ca40fce6ee4c03b2f2d5f06d40c3e,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720046927137249061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05e06fda-a0cf-4385-8cc1-55d7f00dbd4b,},Annotations:map[string]string{io.kubernetes.container.hash: 52766e41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8,PodSandboxId:344665b6c1ed8f298720baef2b2a7313d220512e7fcc393ede49ab3602639119,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720046924035087254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lgcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e61e787d-1169-403e-b844-fc0bbd9acd53,},Annotations:map[string]string{io.kubernetes.container.hash: 18abb899,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98,PodSandboxId:84559595da5fb80be845f50fafb25835ee132378011c2520c7e8a61f40e7fa5a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720046920041872821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll2cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b82480-c0ed-4129-b570-a2f3d3a64d9e,},Annotations:map[string]string{io.kubernetes.container.hash: f7dd78a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f21a3a13f52a7e93571d8a9a075625
a25a697fbe40c29683db366c04b701986,PodSandboxId:073d359ecbe7d6dccf2362b7590bcb46eb1b5caa557655c6ca67a1202e01b0a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720046899611606959,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff0448fe42247eb979c2fd89936b6fb,},Annotations:map[string]string{io.kubernetes.container.hash: bf6cf3c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a,PodSandboxId:
617b4d9325127ac353323f84d3444066b474ac032653d1141296db42c1ba2047,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720046899579190091,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090c0fd627e1381212e5d65203a04f22,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc,PodSandboxId:f49b94871cf9d37f57
cf6e13e98e2f9e1ff7dcc83e90aba44aecc395026fc43b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720046899613531948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de10e56abb835b85e60ca6ab00f4f6f6,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd,PodSandboxId:dc4e249b
cf07666e38bd4608c9ba90a1d80b56bfc980e5ccdae2fd57e2f58c36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720046899568725893,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37d97992a7cb908d598d3286e8564ec,},Annotations:map[string]string{io.kubernetes.container.hash: db5057a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c43d207-49ee-4f96-80ea-318fdf3d002e name=/runtime.v1.RuntimeService
/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f52235fa0a961       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      6 seconds ago       Running             hello-world-app           0                   68614950f41c5       hello-world-app-86c47465fc-bp4c7
	a22a23f9b9158       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        2 minutes ago       Running             headlamp                  0                   41c255833c60d       headlamp-7867546754-jgcbc
	ad572c678c9cf       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                              2 minutes ago       Running             nginx                     0                   e1acd9efaa593       nginx
	0145cba5261b4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 2 minutes ago       Running             gcp-auth                  0                   abecec9194946       gcp-auth-5db96cd9b4-r8pwn
	767852826cab8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              patch                     0                   e9a93233fdc22       ingress-nginx-admission-patch-fnl5l
	9faea61d4c984       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              create                    0                   5aba474bb33da       ingress-nginx-admission-create-qdj5k
	04c93eab29614       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                              3 minutes ago       Running             yakd                      0                   607376c1fc0a8       yakd-dashboard-799879c74f-fwg4s
	82e7e7a13c49e       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   4b3997824769e       metrics-server-c59844bb4-qv65x
	89f8ea56161fc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   81dad2b1935bc       storage-provisioner
	9c8f870aa5bc4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   344665b6c1ed8       coredns-7db6d8ff4d-4lgcj
	b081398a9d47a       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                                             5 minutes ago       Running             kube-proxy                0                   84559595da5fb       kube-proxy-ll2cf
	aaf27a803ff5d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                                             5 minutes ago       Running             kube-controller-manager   0                   f49b94871cf9d       kube-controller-manager-addons-224553
	4f21a3a13f52a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago       Running             etcd                      0                   073d359ecbe7d       etcd-addons-224553
	ca14b1cc58451       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                                             5 minutes ago       Running             kube-scheduler            0                   617b4d9325127       kube-scheduler-addons-224553
	e547072b66d6f       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                                             5 minutes ago       Running             kube-apiserver            0                   dc4e249bcf076       kube-apiserver-addons-224553
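
	The table above is the same container inventory that the /runtime.v1.RuntimeService/ListContainers responses in the crio debug log carry. As a point of reference only (this is not part of minikube or the test harness), a minimal Go sketch that issues the same RPC is shown below, assuming the k8s.io/cri-api bindings, grpc-go, and the unix:///var/run/crio/crio.sock endpoint from the node's cri-socket annotation; `sudo crictl ps -a` on the node prints an equivalent listing.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		cri "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumption: CRI-O is listening on the socket advertised by the node's
		// kubeadm.alpha.kubernetes.io/cri-socket annotation shown above.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		client := cri.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Empty filter: "No filters were applied, returning full container list",
		// exactly as the crio debug log reports.
		resp, err := client.ListContainers(ctx, &cri.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			// Print the short ID, name, and state, similar to the status table above.
			fmt.Printf("%-13s %-25s %s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}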
	
	
	==> coredns [9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8] <==
	[INFO] 10.244.0.7:49913 - 42337 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000234321s
	[INFO] 10.244.0.7:54572 - 7516 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000147321s
	[INFO] 10.244.0.7:54572 - 20305 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000123031s
	[INFO] 10.244.0.7:44451 - 39667 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000367579s
	[INFO] 10.244.0.7:44451 - 65526 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000423709s
	[INFO] 10.244.0.7:34244 - 36742 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000190089s
	[INFO] 10.244.0.7:34244 - 63616 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000426108s
	[INFO] 10.244.0.7:60229 - 32658 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00013172s
	[INFO] 10.244.0.7:60229 - 4255 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000144353s
	[INFO] 10.244.0.7:48420 - 13213 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000051056s
	[INFO] 10.244.0.7:48420 - 23195 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000044906s
	[INFO] 10.244.0.7:34551 - 33947 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000058016s
	[INFO] 10.244.0.7:34551 - 43165 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000076825s
	[INFO] 10.244.0.7:43797 - 4299 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000056715s
	[INFO] 10.244.0.7:43797 - 30925 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000086942s
	[INFO] 10.244.0.22:37776 - 32030 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000443647s
	[INFO] 10.244.0.22:48999 - 16478 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0023847s
	[INFO] 10.244.0.22:40808 - 37514 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00017725s
	[INFO] 10.244.0.22:44420 - 21317 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135901s
	[INFO] 10.244.0.22:56411 - 8575 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118728s
	[INFO] 10.244.0.22:51551 - 65329 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000058754s
	[INFO] 10.244.0.22:37559 - 22003 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001961516s
	[INFO] 10.244.0.22:41883 - 32017 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002113422s
	[INFO] 10.244.0.26:55107 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000362618s
	[INFO] 10.244.0.26:40518 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149917s
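
	The run of NXDOMAIN answers above is ordinary search-path expansion rather than a lookup failure: with the usual ndots setting in the pod's resolv.conf, a name such as registry.kube-system.svc.cluster.local is first tried with each search suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local), all of which correctly return NXDOMAIN, before the fully qualified name resolves with NOERROR.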
	
	
	==> describe nodes <==
	Name:               addons-224553
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-224553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=addons-224553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_03T22_48_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-224553
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 22:48:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-224553
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 22:54:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 22:53:02 +0000   Wed, 03 Jul 2024 22:48:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 22:53:02 +0000   Wed, 03 Jul 2024 22:48:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 22:53:02 +0000   Wed, 03 Jul 2024 22:48:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 22:53:02 +0000   Wed, 03 Jul 2024 22:48:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.226
	  Hostname:    addons-224553
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b4c35a1e2054f838c54e7ae0a0c423a
	  System UUID:                6b4c35a1-e205-4f83-8c54-e7ae0a0c423a
	  Boot ID:                    9c5f331b-d918-4ede-b228-99b4a7bc0ad8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-bp4c7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-5db96cd9b4-r8pwn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  headlamp                    headlamp-7867546754-jgcbc                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 coredns-7db6d8ff4d-4lgcj                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m33s
	  kube-system                 etcd-addons-224553                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m47s
	  kube-system                 kube-apiserver-addons-224553             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-controller-manager-addons-224553    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-proxy-ll2cf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-scheduler-addons-224553             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 metrics-server-c59844bb4-qv65x           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m27s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  yakd-dashboard              yakd-dashboard-799879c74f-fwg4s          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m32s  kube-proxy       
	  Normal  Starting                 5m47s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m47s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m47s  kubelet          Node addons-224553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m47s  kubelet          Node addons-224553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m47s  kubelet          Node addons-224553 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m46s  kubelet          Node addons-224553 status is now: NodeReady
	  Normal  RegisteredNode           5m33s  node-controller  Node addons-224553 event: Registered Node addons-224553 in Controller
	
	
	==> dmesg <==
	[ +14.743361] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.062479] systemd-fstab-generator[1500]: Ignoring "noauto" option for root device
	[  +5.157176] kauditd_printk_skb: 108 callbacks suppressed
	[  +5.390538] kauditd_printk_skb: 143 callbacks suppressed
	[  +9.350073] kauditd_printk_skb: 78 callbacks suppressed
	[Jul 3 22:50] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.884276] kauditd_printk_skb: 30 callbacks suppressed
	[ +18.515783] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.847190] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.085747] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.007967] kauditd_printk_skb: 18 callbacks suppressed
	[Jul 3 22:51] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.188809] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.212263] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.158047] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.059286] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.094660] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.006485] kauditd_printk_skb: 86 callbacks suppressed
	[  +5.235463] kauditd_printk_skb: 3 callbacks suppressed
	[  +9.634146] kauditd_printk_skb: 8 callbacks suppressed
	[Jul 3 22:52] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.790821] kauditd_printk_skb: 2 callbacks suppressed
	[Jul 3 22:53] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.496823] kauditd_printk_skb: 33 callbacks suppressed
	[Jul 3 22:54] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986] <==
	{"level":"warn","ts":"2024-07-03T22:51:12.404121Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.596267ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-03T22:51:12.405514Z","caller":"traceutil/trace.go:171","msg":"trace[1022240753] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1225; }","duration":"302.018515ms","start":"2024-07-03T22:51:12.103485Z","end":"2024-07-03T22:51:12.405504Z","steps":["trace[1022240753] 'range keys from in-memory index tree'  (duration: 300.549807ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T22:51:12.40554Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-03T22:51:12.103473Z","time spent":"302.058721ms","remote":"127.0.0.1:48534","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-07-03T22:51:12.405886Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"383.992825ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-03T22:51:12.405989Z","caller":"traceutil/trace.go:171","msg":"trace[1178731318] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1225; }","duration":"384.118289ms","start":"2024-07-03T22:51:12.021859Z","end":"2024-07-03T22:51:12.405977Z","steps":["trace[1178731318] 'range keys from in-memory index tree'  (duration: 383.83316ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T22:51:12.406021Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-03T22:51:12.02184Z","time spent":"384.173292ms","remote":"127.0.0.1:51216","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14386,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2024-07-03T22:51:43.450919Z","caller":"traceutil/trace.go:171","msg":"trace[1600560405] transaction","detail":"{read_only:false; response_revision:1484; number_of_response:1; }","duration":"206.785947ms","start":"2024-07-03T22:51:43.244116Z","end":"2024-07-03T22:51:43.450902Z","steps":["trace[1600560405] 'process raft request'  (duration: 206.66574ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T22:51:43.451285Z","caller":"traceutil/trace.go:171","msg":"trace[1120301592] linearizableReadLoop","detail":"{readStateIndex:1544; appliedIndex:1544; }","duration":"107.117578ms","start":"2024-07-03T22:51:43.344154Z","end":"2024-07-03T22:51:43.451272Z","steps":["trace[1120301592] 'read index received'  (duration: 107.114704ms)","trace[1120301592] 'applied index is now lower than readState.Index'  (duration: 2.248µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-03T22:51:43.451526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.345068ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-07-03T22:51:43.451571Z","caller":"traceutil/trace.go:171","msg":"trace[628866255] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1484; }","duration":"107.433621ms","start":"2024-07-03T22:51:43.344128Z","end":"2024-07-03T22:51:43.451561Z","steps":["trace[628866255] 'agreement among raft nodes before linearized reading'  (duration: 107.303572ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T22:51:53.72846Z","caller":"traceutil/trace.go:171","msg":"trace[1921893865] linearizableReadLoop","detail":"{readStateIndex:1611; appliedIndex:1610; }","duration":"149.291025ms","start":"2024-07-03T22:51:53.579144Z","end":"2024-07-03T22:51:53.728435Z","steps":["trace[1921893865] 'read index received'  (duration: 149.063003ms)","trace[1921893865] 'applied index is now lower than readState.Index'  (duration: 227.36µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-03T22:51:53.728646Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.469237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-03T22:51:53.728677Z","caller":"traceutil/trace.go:171","msg":"trace[36344324] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotclasses0; response_count:0; response_revision:1549; }","duration":"149.543312ms","start":"2024-07-03T22:51:53.579119Z","end":"2024-07-03T22:51:53.728663Z","steps":["trace[36344324] 'agreement among raft nodes before linearized reading'  (duration: 149.433415ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T22:51:53.728908Z","caller":"traceutil/trace.go:171","msg":"trace[1328688854] transaction","detail":"{read_only:false; response_revision:1549; number_of_response:1; }","duration":"228.251489ms","start":"2024-07-03T22:51:53.50065Z","end":"2024-07-03T22:51:53.728901Z","steps":["trace[1328688854] 'process raft request'  (duration: 227.600807ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T22:52:24.068616Z","caller":"traceutil/trace.go:171","msg":"trace[968524882] linearizableReadLoop","detail":"{readStateIndex:1709; appliedIndex:1708; }","duration":"100.99671ms","start":"2024-07-03T22:52:23.967577Z","end":"2024-07-03T22:52:24.068574Z","steps":["trace[968524882] 'read index received'  (duration: 100.903086ms)","trace[968524882] 'applied index is now lower than readState.Index'  (duration: 93.119µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-03T22:52:24.068842Z","caller":"traceutil/trace.go:171","msg":"trace[1750869291] transaction","detail":"{read_only:false; response_revision:1640; number_of_response:1; }","duration":"275.978375ms","start":"2024-07-03T22:52:23.792848Z","end":"2024-07-03T22:52:24.068827Z","steps":["trace[1750869291] 'process raft request'  (duration: 275.615725ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T22:52:24.068883Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.26031ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6032"}
	{"level":"info","ts":"2024-07-03T22:52:24.06891Z","caller":"traceutil/trace.go:171","msg":"trace[12657442] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1640; }","duration":"101.394159ms","start":"2024-07-03T22:52:23.967507Z","end":"2024-07-03T22:52:24.068902Z","steps":["trace[12657442] 'agreement among raft nodes before linearized reading'  (duration: 101.218394ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T22:52:56.928757Z","caller":"traceutil/trace.go:171","msg":"trace[503711177] linearizableReadLoop","detail":"{readStateIndex:1814; appliedIndex:1813; }","duration":"215.833758ms","start":"2024-07-03T22:52:56.712909Z","end":"2024-07-03T22:52:56.928743Z","steps":["trace[503711177] 'read index received'  (duration: 215.686327ms)","trace[503711177] 'applied index is now lower than readState.Index'  (duration: 146.927µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-03T22:52:56.928932Z","caller":"traceutil/trace.go:171","msg":"trace[1054765434] transaction","detail":"{read_only:false; response_revision:1736; number_of_response:1; }","duration":"344.218674ms","start":"2024-07-03T22:52:56.584705Z","end":"2024-07-03T22:52:56.928924Z","steps":["trace[1054765434] 'process raft request'  (duration: 343.927855ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T22:52:56.929069Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-03T22:52:56.584687Z","time spent":"344.28785ms","remote":"127.0.0.1:51194","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1734 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-03T22:52:56.929137Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.264717ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:497"}
	{"level":"info","ts":"2024-07-03T22:52:56.929185Z","caller":"traceutil/trace.go:171","msg":"trace[797791451] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1736; }","duration":"216.343962ms","start":"2024-07-03T22:52:56.712832Z","end":"2024-07-03T22:52:56.929176Z","steps":["trace[797791451] 'agreement among raft nodes before linearized reading'  (duration: 216.276552ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T22:52:56.929072Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.101657ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6126"}
	{"level":"info","ts":"2024-07-03T22:52:56.929414Z","caller":"traceutil/trace.go:171","msg":"trace[207312996] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1736; }","duration":"169.467353ms","start":"2024-07-03T22:52:56.759937Z","end":"2024-07-03T22:52:56.929404Z","steps":["trace[207312996] 'agreement among raft nodes before linearized reading'  (duration: 169.051631ms)"],"step_count":1}
	
	
	==> gcp-auth [0145cba5261b4869d37c39b2c25416226a5534393a271fa297ce3bcdfbccc26d] <==
	2024/07/03 22:51:16 GCP Auth Webhook started!
	2024/07/03 22:51:17 Ready to marshal response ...
	2024/07/03 22:51:17 Ready to write response ...
	2024/07/03 22:51:17 Ready to marshal response ...
	2024/07/03 22:51:17 Ready to write response ...
	2024/07/03 22:51:28 Ready to marshal response ...
	2024/07/03 22:51:28 Ready to write response ...
	2024/07/03 22:51:28 Ready to marshal response ...
	2024/07/03 22:51:28 Ready to write response ...
	2024/07/03 22:51:28 Ready to marshal response ...
	2024/07/03 22:51:28 Ready to write response ...
	2024/07/03 22:51:38 Ready to marshal response ...
	2024/07/03 22:51:38 Ready to write response ...
	2024/07/03 22:51:47 Ready to marshal response ...
	2024/07/03 22:51:47 Ready to write response ...
	2024/07/03 22:51:48 Ready to marshal response ...
	2024/07/03 22:51:48 Ready to write response ...
	2024/07/03 22:51:48 Ready to marshal response ...
	2024/07/03 22:51:48 Ready to write response ...
	2024/07/03 22:52:18 Ready to marshal response ...
	2024/07/03 22:52:18 Ready to write response ...
	2024/07/03 22:52:51 Ready to marshal response ...
	2024/07/03 22:52:51 Ready to write response ...
	2024/07/03 22:54:02 Ready to marshal response ...
	2024/07/03 22:54:02 Ready to write response ...
	
	
	==> kernel <==
	 22:54:12 up 6 min,  0 users,  load average: 0.74, 0.98, 0.53
	Linux addons-224553 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd] <==
	I0703 22:51:09.872367       1 trace.go:236] Trace[909029693]: "Update" accept:application/json, */*,audit-id:dbcdf4e7-380b-4ea5-803f-a6c49bed38a6,client:192.168.39.226,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (03-Jul-2024 22:51:09.364) (total time: 508ms):
	Trace[909029693]: ["GuaranteedUpdate etcd3" audit-id:dbcdf4e7-380b-4ea5-803f-a6c49bed38a6,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 507ms (22:51:09.364)
	Trace[909029693]:  ---"Txn call completed" 506ms (22:51:09.872)]
	Trace[909029693]: [508.186357ms] [508.186357ms] END
	I0703 22:51:38.379257       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0703 22:51:38.579647       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.146.183"}
	I0703 22:51:41.880458       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0703 22:51:42.974487       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0703 22:51:44.897202       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0703 22:51:47.969178       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.34.150"}
	I0703 22:52:34.038883       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0703 22:53:09.139676       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0703 22:53:09.139806       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0703 22:53:09.170282       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0703 22:53:09.170463       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0703 22:53:09.194099       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0703 22:53:09.194370       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0703 22:53:09.198844       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0703 22:53:09.198901       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0703 22:53:09.234585       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0703 22:53:09.234635       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0703 22:53:10.199506       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0703 22:53:10.235667       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0703 22:53:10.244531       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0703 22:54:02.736525       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.0.197"}
	
	
	==> kube-controller-manager [aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc] <==
	W0703 22:53:18.635453       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:53:18.635504       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0703 22:53:25.698910       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:53:25.699009       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0703 22:53:25.796975       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:53:25.797079       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0703 22:53:25.900843       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:53:25.900889       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0703 22:53:38.810757       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:53:38.810886       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0703 22:53:40.408581       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:53:40.408635       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0703 22:53:41.056967       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:53:41.057079       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0703 22:53:49.684497       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:53:49.684713       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0703 22:54:02.487090       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="35.108242ms"
	I0703 22:54:02.514096       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="26.929356ms"
	I0703 22:54:02.517030       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="70.982µs"
	I0703 22:54:02.527522       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="36.437µs"
	I0703 22:54:04.573572       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0703 22:54:04.580002       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="13.705µs"
	I0703 22:54:04.585851       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0703 22:54:06.845465       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="7.324942ms"
	I0703 22:54:06.846060       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="147.93µs"
	
	
	==> kube-proxy [b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98] <==
	I0703 22:48:40.636097       1 server_linux.go:69] "Using iptables proxy"
	I0703 22:48:40.667869       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.226"]
	I0703 22:48:40.758169       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0703 22:48:40.758206       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0703 22:48:40.758220       1 server_linux.go:165] "Using iptables Proxier"
	I0703 22:48:40.764443       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0703 22:48:40.764664       1 server.go:872] "Version info" version="v1.30.2"
	I0703 22:48:40.764683       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 22:48:40.767686       1 config.go:192] "Starting service config controller"
	I0703 22:48:40.767736       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0703 22:48:40.767767       1 config.go:101] "Starting endpoint slice config controller"
	I0703 22:48:40.767770       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0703 22:48:40.770878       1 config.go:319] "Starting node config controller"
	I0703 22:48:40.770887       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0703 22:48:40.868065       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0703 22:48:40.868110       1 shared_informer.go:320] Caches are synced for service config
	I0703 22:48:40.871123       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a] <==
	W0703 22:48:22.642539       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0703 22:48:22.643592       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0703 22:48:22.642807       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0703 22:48:22.643659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0703 22:48:22.642923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0703 22:48:22.643716       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0703 22:48:22.643210       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0703 22:48:22.643342       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0703 22:48:22.643352       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0703 22:48:22.643467       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0703 22:48:23.622098       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0703 22:48:23.622198       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0703 22:48:23.797875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0703 22:48:23.797932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0703 22:48:23.804137       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0703 22:48:23.804207       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0703 22:48:23.827032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0703 22:48:23.827082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0703 22:48:23.908069       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0703 22:48:23.908115       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0703 22:48:23.938356       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0703 22:48:23.938474       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0703 22:48:23.943264       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0703 22:48:23.943981       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0703 22:48:26.733461       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 03 22:54:02 addons-224553 kubelet[1274]: I0703 22:54:02.496782    1274 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4df336e-c37d-4791-aa35-c5c94fec899d" containerName="hostpath"
	Jul 03 22:54:02 addons-224553 kubelet[1274]: I0703 22:54:02.496788    1274 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4df336e-c37d-4791-aa35-c5c94fec899d" containerName="csi-provisioner"
	Jul 03 22:54:02 addons-224553 kubelet[1274]: I0703 22:54:02.546635    1274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2kxm\" (UniqueName: \"kubernetes.io/projected/247bf170-0735-4073-ae3b-a13c60e4856e-kube-api-access-j2kxm\") pod \"hello-world-app-86c47465fc-bp4c7\" (UID: \"247bf170-0735-4073-ae3b-a13c60e4856e\") " pod="default/hello-world-app-86c47465fc-bp4c7"
	Jul 03 22:54:02 addons-224553 kubelet[1274]: I0703 22:54:02.546714    1274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/247bf170-0735-4073-ae3b-a13c60e4856e-gcp-creds\") pod \"hello-world-app-86c47465fc-bp4c7\" (UID: \"247bf170-0735-4073-ae3b-a13c60e4856e\") " pod="default/hello-world-app-86c47465fc-bp4c7"
	Jul 03 22:54:03 addons-224553 kubelet[1274]: I0703 22:54:03.759209    1274 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm967\" (UniqueName: \"kubernetes.io/projected/a43e86c9-2281-41ce-a535-a1913563dd49-kube-api-access-pm967\") pod \"a43e86c9-2281-41ce-a535-a1913563dd49\" (UID: \"a43e86c9-2281-41ce-a535-a1913563dd49\") "
	Jul 03 22:54:03 addons-224553 kubelet[1274]: I0703 22:54:03.761419    1274 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a43e86c9-2281-41ce-a535-a1913563dd49-kube-api-access-pm967" (OuterVolumeSpecName: "kube-api-access-pm967") pod "a43e86c9-2281-41ce-a535-a1913563dd49" (UID: "a43e86c9-2281-41ce-a535-a1913563dd49"). InnerVolumeSpecName "kube-api-access-pm967". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 03 22:54:03 addons-224553 kubelet[1274]: I0703 22:54:03.803659    1274 scope.go:117] "RemoveContainer" containerID="66761fbbb2fae64eefb6c2d7716ad9e25d45fd50136ee29d6f9d4b7b29faedf4"
	Jul 03 22:54:03 addons-224553 kubelet[1274]: I0703 22:54:03.836699    1274 scope.go:117] "RemoveContainer" containerID="66761fbbb2fae64eefb6c2d7716ad9e25d45fd50136ee29d6f9d4b7b29faedf4"
	Jul 03 22:54:03 addons-224553 kubelet[1274]: E0703 22:54:03.837745    1274 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66761fbbb2fae64eefb6c2d7716ad9e25d45fd50136ee29d6f9d4b7b29faedf4\": container with ID starting with 66761fbbb2fae64eefb6c2d7716ad9e25d45fd50136ee29d6f9d4b7b29faedf4 not found: ID does not exist" containerID="66761fbbb2fae64eefb6c2d7716ad9e25d45fd50136ee29d6f9d4b7b29faedf4"
	Jul 03 22:54:03 addons-224553 kubelet[1274]: I0703 22:54:03.837783    1274 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66761fbbb2fae64eefb6c2d7716ad9e25d45fd50136ee29d6f9d4b7b29faedf4"} err="failed to get container status \"66761fbbb2fae64eefb6c2d7716ad9e25d45fd50136ee29d6f9d4b7b29faedf4\": rpc error: code = NotFound desc = could not find container \"66761fbbb2fae64eefb6c2d7716ad9e25d45fd50136ee29d6f9d4b7b29faedf4\": container with ID starting with 66761fbbb2fae64eefb6c2d7716ad9e25d45fd50136ee29d6f9d4b7b29faedf4 not found: ID does not exist"
	Jul 03 22:54:03 addons-224553 kubelet[1274]: I0703 22:54:03.860355    1274 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pm967\" (UniqueName: \"kubernetes.io/projected/a43e86c9-2281-41ce-a535-a1913563dd49-kube-api-access-pm967\") on node \"addons-224553\" DevicePath \"\""
	Jul 03 22:54:05 addons-224553 kubelet[1274]: I0703 22:54:05.273380    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ad25aa6-3a2f-4661-afbf-082bab0c83b8" path="/var/lib/kubelet/pods/2ad25aa6-3a2f-4661-afbf-082bab0c83b8/volumes"
	Jul 03 22:54:05 addons-224553 kubelet[1274]: I0703 22:54:05.273769    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a43e86c9-2281-41ce-a535-a1913563dd49" path="/var/lib/kubelet/pods/a43e86c9-2281-41ce-a535-a1913563dd49/volumes"
	Jul 03 22:54:05 addons-224553 kubelet[1274]: I0703 22:54:05.274830    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6b99aa9-35bf-4de1-b466-3cfd07fa2791" path="/var/lib/kubelet/pods/a6b99aa9-35bf-4de1-b466-3cfd07fa2791/volumes"
	Jul 03 22:54:07 addons-224553 kubelet[1274]: I0703 22:54:07.790063    1274 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vww7\" (UniqueName: \"kubernetes.io/projected/6034c1ae-111a-424f-b9df-4e5c4d7e133c-kube-api-access-7vww7\") pod \"6034c1ae-111a-424f-b9df-4e5c4d7e133c\" (UID: \"6034c1ae-111a-424f-b9df-4e5c4d7e133c\") "
	Jul 03 22:54:07 addons-224553 kubelet[1274]: I0703 22:54:07.790126    1274 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6034c1ae-111a-424f-b9df-4e5c4d7e133c-webhook-cert\") pod \"6034c1ae-111a-424f-b9df-4e5c4d7e133c\" (UID: \"6034c1ae-111a-424f-b9df-4e5c4d7e133c\") "
	Jul 03 22:54:07 addons-224553 kubelet[1274]: I0703 22:54:07.798172    1274 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6034c1ae-111a-424f-b9df-4e5c4d7e133c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "6034c1ae-111a-424f-b9df-4e5c4d7e133c" (UID: "6034c1ae-111a-424f-b9df-4e5c4d7e133c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 03 22:54:07 addons-224553 kubelet[1274]: I0703 22:54:07.799919    1274 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6034c1ae-111a-424f-b9df-4e5c4d7e133c-kube-api-access-7vww7" (OuterVolumeSpecName: "kube-api-access-7vww7") pod "6034c1ae-111a-424f-b9df-4e5c4d7e133c" (UID: "6034c1ae-111a-424f-b9df-4e5c4d7e133c"). InnerVolumeSpecName "kube-api-access-7vww7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 03 22:54:07 addons-224553 kubelet[1274]: I0703 22:54:07.833528    1274 scope.go:117] "RemoveContainer" containerID="7982d271cd990903b931d4459d72ea65f44a2785d5a8ce575ce6a00a791ac44e"
	Jul 03 22:54:07 addons-224553 kubelet[1274]: I0703 22:54:07.856972    1274 scope.go:117] "RemoveContainer" containerID="7982d271cd990903b931d4459d72ea65f44a2785d5a8ce575ce6a00a791ac44e"
	Jul 03 22:54:07 addons-224553 kubelet[1274]: E0703 22:54:07.857876    1274 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7982d271cd990903b931d4459d72ea65f44a2785d5a8ce575ce6a00a791ac44e\": container with ID starting with 7982d271cd990903b931d4459d72ea65f44a2785d5a8ce575ce6a00a791ac44e not found: ID does not exist" containerID="7982d271cd990903b931d4459d72ea65f44a2785d5a8ce575ce6a00a791ac44e"
	Jul 03 22:54:07 addons-224553 kubelet[1274]: I0703 22:54:07.858038    1274 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7982d271cd990903b931d4459d72ea65f44a2785d5a8ce575ce6a00a791ac44e"} err="failed to get container status \"7982d271cd990903b931d4459d72ea65f44a2785d5a8ce575ce6a00a791ac44e\": rpc error: code = NotFound desc = could not find container \"7982d271cd990903b931d4459d72ea65f44a2785d5a8ce575ce6a00a791ac44e\": container with ID starting with 7982d271cd990903b931d4459d72ea65f44a2785d5a8ce575ce6a00a791ac44e not found: ID does not exist"
	Jul 03 22:54:07 addons-224553 kubelet[1274]: I0703 22:54:07.890989    1274 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6034c1ae-111a-424f-b9df-4e5c4d7e133c-webhook-cert\") on node \"addons-224553\" DevicePath \"\""
	Jul 03 22:54:07 addons-224553 kubelet[1274]: I0703 22:54:07.891025    1274 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-7vww7\" (UniqueName: \"kubernetes.io/projected/6034c1ae-111a-424f-b9df-4e5c4d7e133c-kube-api-access-7vww7\") on node \"addons-224553\" DevicePath \"\""
	Jul 03 22:54:09 addons-224553 kubelet[1274]: I0703 22:54:09.265499    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6034c1ae-111a-424f-b9df-4e5c4d7e133c" path="/var/lib/kubelet/pods/6034c1ae-111a-424f-b9df-4e5c4d7e133c/volumes"
	
	
	==> storage-provisioner [89f8ea56161fc3f970558a77cb63a439a2d1a32385b056be0515c7eeed96f458] <==
	I0703 22:48:48.950151       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0703 22:48:49.035195       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0703 22:48:49.037525       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0703 22:48:49.060955       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0703 22:48:49.069240       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40cae642-3d03-41f1-8256-6b1ca176ed1d", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-224553_4a02f5d1-94c9-498e-8e4e-466734425f26 became leader
	I0703 22:48:49.070883       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-224553_4a02f5d1-94c9-498e-8e4e-466734425f26!
	I0703 22:48:49.171578       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-224553_4a02f5d1-94c9-498e-8e4e-466734425f26!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-224553 -n addons-224553
helpers_test.go:261: (dbg) Run:  kubectl --context addons-224553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.76s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (342.31s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.458797ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-qv65x" [78c1c74d-f40a-4283-8091-ecace04f1283] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.01216999s
addons_test.go:417: (dbg) Run:  kubectl --context addons-224553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-224553 top pods -n kube-system: exit status 1 (68.19082ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4lgcj, age: 2m50.7092167s

                                                
                                                
** /stderr **
I0703 22:51:29.710607   16574 retry.go:31] will retry after 2.442431515s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-224553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-224553 top pods -n kube-system: exit status 1 (74.944143ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4lgcj, age: 2m53.228749752s

                                                
                                                
** /stderr **
I0703 22:51:32.230300   16574 retry.go:31] will retry after 4.394680103s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-224553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-224553 top pods -n kube-system: exit status 1 (66.358741ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4lgcj, age: 2m57.690436681s

                                                
                                                
** /stderr **
I0703 22:51:36.691932   16574 retry.go:31] will retry after 6.454748001s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-224553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-224553 top pods -n kube-system: exit status 1 (84.682091ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4lgcj, age: 3m4.230818981s

                                                
                                                
** /stderr **
I0703 22:51:43.232405   16574 retry.go:31] will retry after 11.494171593s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-224553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-224553 top pods -n kube-system: exit status 1 (98.156572ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4lgcj, age: 3m15.823432986s

                                                
                                                
** /stderr **
I0703 22:51:54.824988   16574 retry.go:31] will retry after 15.944086882s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-224553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-224553 top pods -n kube-system: exit status 1 (72.939486ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4lgcj, age: 3m31.841442029s

                                                
                                                
** /stderr **
I0703 22:52:10.843174   16574 retry.go:31] will retry after 13.840066567s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-224553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-224553 top pods -n kube-system: exit status 1 (68.448164ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4lgcj, age: 3m45.750845827s

                                                
                                                
** /stderr **
I0703 22:52:24.752820   16574 retry.go:31] will retry after 24.882825979s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-224553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-224553 top pods -n kube-system: exit status 1 (70.223378ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4lgcj, age: 4m10.70533803s

                                                
                                                
** /stderr **
I0703 22:52:49.707098   16574 retry.go:31] will retry after 39.861551857s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-224553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-224553 top pods -n kube-system: exit status 1 (62.234477ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4lgcj, age: 4m50.632049942s

                                                
                                                
** /stderr **
I0703 22:53:29.633571   16574 retry.go:31] will retry after 49.190753751s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-224553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-224553 top pods -n kube-system: exit status 1 (62.967608ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4lgcj, age: 5m39.887588119s

                                                
                                                
** /stderr **
I0703 22:54:18.889167   16574 retry.go:31] will retry after 1m0.592402315s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-224553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-224553 top pods -n kube-system: exit status 1 (67.419866ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4lgcj, age: 6m40.549423155s

                                                
                                                
** /stderr **
I0703 22:55:19.551008   16574 retry.go:31] will retry after 52.993140035s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-224553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-224553 top pods -n kube-system: exit status 1 (70.395863ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4lgcj, age: 7m33.618427217s

                                                
                                                
** /stderr **
I0703 22:56:12.619970   16574 retry.go:31] will retry after 50.213038454s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-224553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-224553 top pods -n kube-system: exit status 1 (66.700247ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4lgcj, age: 8m23.902654201s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
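Every retry above fails with the same "Metrics not available" error, meaning the metrics API never started serving pod metrics before the test gave up. As a rough manual check (not part of the test harness; it assumes the standard metrics-server APIService name and the k8s-app=metrics-server label that the test itself waits on), one could confirm whether the metrics API ever registered and what the metrics-server pod logged:

	kubectl --context addons-224553 get apiservices v1beta1.metrics.k8s.io
	kubectl --context addons-224553 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-224553 -n kube-system logs -l k8s-app=metrics-server --tail=50
	kubectl --context addons-224553 top nodes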
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-224553 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-224553 -n addons-224553
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-224553 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-224553 logs -n 25: (1.532309148s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC | 03 Jul 24 22:47 UTC |
	| delete  | -p download-only-240360                                                                     | download-only-240360 | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC | 03 Jul 24 22:47 UTC |
	| delete  | -p download-only-666511                                                                     | download-only-666511 | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC | 03 Jul 24 22:47 UTC |
	| delete  | -p download-only-240360                                                                     | download-only-240360 | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC | 03 Jul 24 22:47 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-921043 | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC |                     |
	|         | binary-mirror-921043                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39145                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-921043                                                                     | binary-mirror-921043 | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC | 03 Jul 24 22:47 UTC |
	| addons  | disable dashboard -p                                                                        | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC |                     |
	|         | addons-224553                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC |                     |
	|         | addons-224553                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-224553 --wait=true                                                                | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC | 03 Jul 24 22:51 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:51 UTC | 03 Jul 24 22:51 UTC |
	|         | -p addons-224553                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-224553 ssh cat                                                                       | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:51 UTC | 03 Jul 24 22:51 UTC |
	|         | /opt/local-path-provisioner/pvc-3109b72f-6268-4949-88ee-62863ae03b8a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-224553 addons disable                                                                | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:51 UTC | 03 Jul 24 22:52 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-224553 addons disable                                                                | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:51 UTC | 03 Jul 24 22:51 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-224553 ip                                                                            | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:51 UTC | 03 Jul 24 22:51 UTC |
	| addons  | addons-224553 addons disable                                                                | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:51 UTC | 03 Jul 24 22:51 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:51 UTC | 03 Jul 24 22:51 UTC |
	|         | addons-224553                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:51 UTC | 03 Jul 24 22:51 UTC |
	|         | -p addons-224553                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-224553 ssh curl -s                                                                   | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:51 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:52 UTC | 03 Jul 24 22:52 UTC |
	|         | addons-224553                                                                               |                      |         |         |                     |                     |
	| addons  | addons-224553 addons                                                                        | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:53 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-224553 addons                                                                        | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:53 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-224553 ip                                                                            | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:54 UTC | 03 Jul 24 22:54 UTC |
	| addons  | addons-224553 addons disable                                                                | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:54 UTC | 03 Jul 24 22:54 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-224553 addons disable                                                                | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:54 UTC | 03 Jul 24 22:54 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-224553 addons                                                                        | addons-224553        | jenkins | v1.33.1 | 03 Jul 24 22:57 UTC | 03 Jul 24 22:57 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 22:47:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 22:47:44.239130   17336 out.go:291] Setting OutFile to fd 1 ...
	I0703 22:47:44.239267   17336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:47:44.239277   17336 out.go:304] Setting ErrFile to fd 2...
	I0703 22:47:44.239284   17336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:47:44.239490   17336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 22:47:44.240113   17336 out.go:298] Setting JSON to false
	I0703 22:47:44.240903   17336 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1804,"bootTime":1720045060,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 22:47:44.240963   17336 start.go:139] virtualization: kvm guest
	I0703 22:47:44.243247   17336 out.go:177] * [addons-224553] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 22:47:44.244809   17336 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 22:47:44.244809   17336 notify.go:220] Checking for updates...
	I0703 22:47:44.246262   17336 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 22:47:44.247628   17336 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 22:47:44.249031   17336 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 22:47:44.250339   17336 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 22:47:44.251461   17336 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 22:47:44.252714   17336 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 22:47:44.285181   17336 out.go:177] * Using the kvm2 driver based on user configuration
	I0703 22:47:44.286416   17336 start.go:297] selected driver: kvm2
	I0703 22:47:44.286452   17336 start.go:901] validating driver "kvm2" against <nil>
	I0703 22:47:44.286469   17336 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 22:47:44.287156   17336 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 22:47:44.287225   17336 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 22:47:44.302745   17336 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 22:47:44.302792   17336 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0703 22:47:44.303119   17336 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 22:47:44.303157   17336 cni.go:84] Creating CNI manager for ""
	I0703 22:47:44.303171   17336 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 22:47:44.303184   17336 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0703 22:47:44.303248   17336 start.go:340] cluster config:
	{Name:addons-224553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-224553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 22:47:44.303378   17336 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 22:47:44.305200   17336 out.go:177] * Starting "addons-224553" primary control-plane node in "addons-224553" cluster
	I0703 22:47:44.306321   17336 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 22:47:44.306351   17336 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0703 22:47:44.306360   17336 cache.go:56] Caching tarball of preloaded images
	I0703 22:47:44.306437   17336 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 22:47:44.306448   17336 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 22:47:44.306780   17336 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/config.json ...
	I0703 22:47:44.306805   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/config.json: {Name:mkffec6b993c5054368f9460bbad4774d4ef1599 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:47:44.306928   17336 start.go:360] acquireMachinesLock for addons-224553: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 22:47:44.306969   17336 start.go:364] duration metric: took 29.595µs to acquireMachinesLock for "addons-224553"
	I0703 22:47:44.306985   17336 start.go:93] Provisioning new machine with config: &{Name:addons-224553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-224553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 22:47:44.307039   17336 start.go:125] createHost starting for "" (driver="kvm2")
	I0703 22:47:44.308656   17336 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0703 22:47:44.308808   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:47:44.308862   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:47:44.323825   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39543
	I0703 22:47:44.324290   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:47:44.324903   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:47:44.324927   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:47:44.325353   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:47:44.325608   17336 main.go:141] libmachine: (addons-224553) Calling .GetMachineName
	I0703 22:47:44.325809   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:47:44.326002   17336 start.go:159] libmachine.API.Create for "addons-224553" (driver="kvm2")
	I0703 22:47:44.326030   17336 client.go:168] LocalClient.Create starting
	I0703 22:47:44.326068   17336 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem
	I0703 22:47:44.490412   17336 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem
	I0703 22:47:44.658147   17336 main.go:141] libmachine: Running pre-create checks...
	I0703 22:47:44.658169   17336 main.go:141] libmachine: (addons-224553) Calling .PreCreateCheck
	I0703 22:47:44.660381   17336 main.go:141] libmachine: (addons-224553) Calling .GetConfigRaw
	I0703 22:47:44.660915   17336 main.go:141] libmachine: Creating machine...
	I0703 22:47:44.660932   17336 main.go:141] libmachine: (addons-224553) Calling .Create
	I0703 22:47:44.661114   17336 main.go:141] libmachine: (addons-224553) Creating KVM machine...
	I0703 22:47:44.662189   17336 main.go:141] libmachine: (addons-224553) DBG | found existing default KVM network
	I0703 22:47:44.662890   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:44.662755   17358 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0703 22:47:44.662913   17336 main.go:141] libmachine: (addons-224553) DBG | created network xml: 
	I0703 22:47:44.662924   17336 main.go:141] libmachine: (addons-224553) DBG | <network>
	I0703 22:47:44.662933   17336 main.go:141] libmachine: (addons-224553) DBG |   <name>mk-addons-224553</name>
	I0703 22:47:44.662939   17336 main.go:141] libmachine: (addons-224553) DBG |   <dns enable='no'/>
	I0703 22:47:44.662948   17336 main.go:141] libmachine: (addons-224553) DBG |   
	I0703 22:47:44.662959   17336 main.go:141] libmachine: (addons-224553) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0703 22:47:44.662971   17336 main.go:141] libmachine: (addons-224553) DBG |     <dhcp>
	I0703 22:47:44.662987   17336 main.go:141] libmachine: (addons-224553) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0703 22:47:44.663001   17336 main.go:141] libmachine: (addons-224553) DBG |     </dhcp>
	I0703 22:47:44.663051   17336 main.go:141] libmachine: (addons-224553) DBG |   </ip>
	I0703 22:47:44.663079   17336 main.go:141] libmachine: (addons-224553) DBG |   
	I0703 22:47:44.663092   17336 main.go:141] libmachine: (addons-224553) DBG | </network>
	I0703 22:47:44.663105   17336 main.go:141] libmachine: (addons-224553) DBG | 
	I0703 22:47:44.668342   17336 main.go:141] libmachine: (addons-224553) DBG | trying to create private KVM network mk-addons-224553 192.168.39.0/24...
	I0703 22:47:44.734801   17336 main.go:141] libmachine: (addons-224553) DBG | private KVM network mk-addons-224553 192.168.39.0/24 created
	I0703 22:47:44.734828   17336 main.go:141] libmachine: (addons-224553) Setting up store path in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553 ...
	I0703 22:47:44.734850   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:44.734793   17358 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 22:47:44.734872   17336 main.go:141] libmachine: (addons-224553) Building disk image from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0703 22:47:44.734968   17336 main.go:141] libmachine: (addons-224553) Downloading /home/jenkins/minikube-integration/18998-9396/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso...
	I0703 22:47:44.968526   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:44.968399   17358 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa...
	I0703 22:47:45.084020   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:45.083868   17358 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/addons-224553.rawdisk...
	I0703 22:47:45.084052   17336 main.go:141] libmachine: (addons-224553) DBG | Writing magic tar header
	I0703 22:47:45.084095   17336 main.go:141] libmachine: (addons-224553) DBG | Writing SSH key tar header
	I0703 22:47:45.084116   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:45.083988   17358 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553 ...
	I0703 22:47:45.084136   17336 main.go:141] libmachine: (addons-224553) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553 (perms=drwx------)
	I0703 22:47:45.084158   17336 main.go:141] libmachine: (addons-224553) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines (perms=drwxr-xr-x)
	I0703 22:47:45.084169   17336 main.go:141] libmachine: (addons-224553) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube (perms=drwxr-xr-x)
	I0703 22:47:45.084182   17336 main.go:141] libmachine: (addons-224553) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396 (perms=drwxrwxr-x)
	I0703 22:47:45.084193   17336 main.go:141] libmachine: (addons-224553) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0703 22:47:45.084208   17336 main.go:141] libmachine: (addons-224553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553
	I0703 22:47:45.084226   17336 main.go:141] libmachine: (addons-224553) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0703 22:47:45.084244   17336 main.go:141] libmachine: (addons-224553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines
	I0703 22:47:45.084252   17336 main.go:141] libmachine: (addons-224553) Creating domain...
	I0703 22:47:45.084268   17336 main.go:141] libmachine: (addons-224553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 22:47:45.084281   17336 main.go:141] libmachine: (addons-224553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396
	I0703 22:47:45.084293   17336 main.go:141] libmachine: (addons-224553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0703 22:47:45.084303   17336 main.go:141] libmachine: (addons-224553) DBG | Checking permissions on dir: /home/jenkins
	I0703 22:47:45.084315   17336 main.go:141] libmachine: (addons-224553) DBG | Checking permissions on dir: /home
	I0703 22:47:45.084328   17336 main.go:141] libmachine: (addons-224553) DBG | Skipping /home - not owner
	I0703 22:47:45.085333   17336 main.go:141] libmachine: (addons-224553) define libvirt domain using xml: 
	I0703 22:47:45.085355   17336 main.go:141] libmachine: (addons-224553) <domain type='kvm'>
	I0703 22:47:45.085362   17336 main.go:141] libmachine: (addons-224553)   <name>addons-224553</name>
	I0703 22:47:45.085367   17336 main.go:141] libmachine: (addons-224553)   <memory unit='MiB'>4000</memory>
	I0703 22:47:45.085372   17336 main.go:141] libmachine: (addons-224553)   <vcpu>2</vcpu>
	I0703 22:47:45.085376   17336 main.go:141] libmachine: (addons-224553)   <features>
	I0703 22:47:45.085381   17336 main.go:141] libmachine: (addons-224553)     <acpi/>
	I0703 22:47:45.085385   17336 main.go:141] libmachine: (addons-224553)     <apic/>
	I0703 22:47:45.085390   17336 main.go:141] libmachine: (addons-224553)     <pae/>
	I0703 22:47:45.085397   17336 main.go:141] libmachine: (addons-224553)     
	I0703 22:47:45.085402   17336 main.go:141] libmachine: (addons-224553)   </features>
	I0703 22:47:45.085417   17336 main.go:141] libmachine: (addons-224553)   <cpu mode='host-passthrough'>
	I0703 22:47:45.085432   17336 main.go:141] libmachine: (addons-224553)   
	I0703 22:47:45.085451   17336 main.go:141] libmachine: (addons-224553)   </cpu>
	I0703 22:47:45.085459   17336 main.go:141] libmachine: (addons-224553)   <os>
	I0703 22:47:45.085464   17336 main.go:141] libmachine: (addons-224553)     <type>hvm</type>
	I0703 22:47:45.085501   17336 main.go:141] libmachine: (addons-224553)     <boot dev='cdrom'/>
	I0703 22:47:45.085517   17336 main.go:141] libmachine: (addons-224553)     <boot dev='hd'/>
	I0703 22:47:45.085528   17336 main.go:141] libmachine: (addons-224553)     <bootmenu enable='no'/>
	I0703 22:47:45.085539   17336 main.go:141] libmachine: (addons-224553)   </os>
	I0703 22:47:45.085549   17336 main.go:141] libmachine: (addons-224553)   <devices>
	I0703 22:47:45.085559   17336 main.go:141] libmachine: (addons-224553)     <disk type='file' device='cdrom'>
	I0703 22:47:45.085574   17336 main.go:141] libmachine: (addons-224553)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/boot2docker.iso'/>
	I0703 22:47:45.085594   17336 main.go:141] libmachine: (addons-224553)       <target dev='hdc' bus='scsi'/>
	I0703 22:47:45.085605   17336 main.go:141] libmachine: (addons-224553)       <readonly/>
	I0703 22:47:45.085616   17336 main.go:141] libmachine: (addons-224553)     </disk>
	I0703 22:47:45.085629   17336 main.go:141] libmachine: (addons-224553)     <disk type='file' device='disk'>
	I0703 22:47:45.085643   17336 main.go:141] libmachine: (addons-224553)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0703 22:47:45.085660   17336 main.go:141] libmachine: (addons-224553)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/addons-224553.rawdisk'/>
	I0703 22:47:45.085671   17336 main.go:141] libmachine: (addons-224553)       <target dev='hda' bus='virtio'/>
	I0703 22:47:45.085683   17336 main.go:141] libmachine: (addons-224553)     </disk>
	I0703 22:47:45.085694   17336 main.go:141] libmachine: (addons-224553)     <interface type='network'>
	I0703 22:47:45.085708   17336 main.go:141] libmachine: (addons-224553)       <source network='mk-addons-224553'/>
	I0703 22:47:45.085723   17336 main.go:141] libmachine: (addons-224553)       <model type='virtio'/>
	I0703 22:47:45.085735   17336 main.go:141] libmachine: (addons-224553)     </interface>
	I0703 22:47:45.085749   17336 main.go:141] libmachine: (addons-224553)     <interface type='network'>
	I0703 22:47:45.085762   17336 main.go:141] libmachine: (addons-224553)       <source network='default'/>
	I0703 22:47:45.085774   17336 main.go:141] libmachine: (addons-224553)       <model type='virtio'/>
	I0703 22:47:45.085802   17336 main.go:141] libmachine: (addons-224553)     </interface>
	I0703 22:47:45.085822   17336 main.go:141] libmachine: (addons-224553)     <serial type='pty'>
	I0703 22:47:45.085829   17336 main.go:141] libmachine: (addons-224553)       <target port='0'/>
	I0703 22:47:45.085837   17336 main.go:141] libmachine: (addons-224553)     </serial>
	I0703 22:47:45.085845   17336 main.go:141] libmachine: (addons-224553)     <console type='pty'>
	I0703 22:47:45.085858   17336 main.go:141] libmachine: (addons-224553)       <target type='serial' port='0'/>
	I0703 22:47:45.085866   17336 main.go:141] libmachine: (addons-224553)     </console>
	I0703 22:47:45.085871   17336 main.go:141] libmachine: (addons-224553)     <rng model='virtio'>
	I0703 22:47:45.085880   17336 main.go:141] libmachine: (addons-224553)       <backend model='random'>/dev/random</backend>
	I0703 22:47:45.085887   17336 main.go:141] libmachine: (addons-224553)     </rng>
	I0703 22:47:45.085892   17336 main.go:141] libmachine: (addons-224553)     
	I0703 22:47:45.085898   17336 main.go:141] libmachine: (addons-224553)     
	I0703 22:47:45.085903   17336 main.go:141] libmachine: (addons-224553)   </devices>
	I0703 22:47:45.085909   17336 main.go:141] libmachine: (addons-224553) </domain>
	I0703 22:47:45.085917   17336 main.go:141] libmachine: (addons-224553) 
	I0703 22:47:45.091970   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:76:64:42 in network default
	I0703 22:47:45.092495   17336 main.go:141] libmachine: (addons-224553) Ensuring networks are active...
	I0703 22:47:45.092511   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:45.093263   17336 main.go:141] libmachine: (addons-224553) Ensuring network default is active
	I0703 22:47:45.093560   17336 main.go:141] libmachine: (addons-224553) Ensuring network mk-addons-224553 is active
	I0703 22:47:45.094003   17336 main.go:141] libmachine: (addons-224553) Getting domain xml...
	I0703 22:47:45.094686   17336 main.go:141] libmachine: (addons-224553) Creating domain...
	I0703 22:47:46.479393   17336 main.go:141] libmachine: (addons-224553) Waiting to get IP...
	I0703 22:47:46.480260   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:46.480769   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:46.480800   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:46.480681   17358 retry.go:31] will retry after 205.766911ms: waiting for machine to come up
	I0703 22:47:46.688327   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:46.688780   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:46.688802   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:46.688738   17358 retry.go:31] will retry after 315.450273ms: waiting for machine to come up
	I0703 22:47:47.006469   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:47.006855   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:47.006889   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:47.006809   17358 retry.go:31] will retry after 409.3055ms: waiting for machine to come up
	I0703 22:47:47.417165   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:47.417574   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:47.417603   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:47.417525   17358 retry.go:31] will retry after 508.405078ms: waiting for machine to come up
	I0703 22:47:47.927118   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:47.927513   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:47.927548   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:47.927477   17358 retry.go:31] will retry after 608.324614ms: waiting for machine to come up
	I0703 22:47:48.537296   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:48.537727   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:48.537749   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:48.537698   17358 retry.go:31] will retry after 719.08655ms: waiting for machine to come up
	I0703 22:47:49.258560   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:49.259075   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:49.259098   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:49.259040   17358 retry.go:31] will retry after 983.818223ms: waiting for machine to come up
	I0703 22:47:50.244600   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:50.244993   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:50.245017   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:50.244936   17358 retry.go:31] will retry after 1.342762679s: waiting for machine to come up
	I0703 22:47:51.589590   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:51.590049   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:51.590077   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:51.589994   17358 retry.go:31] will retry after 1.251250163s: waiting for machine to come up
	I0703 22:47:52.842419   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:52.842746   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:52.842771   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:52.842707   17358 retry.go:31] will retry after 1.810121664s: waiting for machine to come up
	I0703 22:47:54.654863   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:54.655376   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:54.655403   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:54.655343   17358 retry.go:31] will retry after 2.106483987s: waiting for machine to come up
	I0703 22:47:56.765766   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:56.766201   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:56.766230   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:56.766170   17358 retry.go:31] will retry after 2.398145191s: waiting for machine to come up
	I0703 22:47:59.167619   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:47:59.168038   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:47:59.168129   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:47:59.168021   17358 retry.go:31] will retry after 3.976178413s: waiting for machine to come up
	I0703 22:48:03.148808   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:03.149310   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find current IP address of domain addons-224553 in network mk-addons-224553
	I0703 22:48:03.149344   17336 main.go:141] libmachine: (addons-224553) DBG | I0703 22:48:03.149245   17358 retry.go:31] will retry after 3.742210847s: waiting for machine to come up
	I0703 22:48:06.894985   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:06.895436   17336 main.go:141] libmachine: (addons-224553) Found IP for machine: 192.168.39.226
	I0703 22:48:06.895462   17336 main.go:141] libmachine: (addons-224553) Reserving static IP address...
	I0703 22:48:06.895479   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has current primary IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:06.895908   17336 main.go:141] libmachine: (addons-224553) DBG | unable to find host DHCP lease matching {name: "addons-224553", mac: "52:54:00:d2:17:a3", ip: "192.168.39.226"} in network mk-addons-224553
	I0703 22:48:06.974820   17336 main.go:141] libmachine: (addons-224553) DBG | Getting to WaitForSSH function...
	I0703 22:48:06.974847   17336 main.go:141] libmachine: (addons-224553) Reserved static IP address: 192.168.39.226
	I0703 22:48:06.974860   17336 main.go:141] libmachine: (addons-224553) Waiting for SSH to be available...
	I0703 22:48:06.977405   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:06.977734   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:06.977782   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:06.977836   17336 main.go:141] libmachine: (addons-224553) DBG | Using SSH client type: external
	I0703 22:48:06.977865   17336 main.go:141] libmachine: (addons-224553) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa (-rw-------)
	I0703 22:48:06.977914   17336 main.go:141] libmachine: (addons-224553) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 22:48:06.977934   17336 main.go:141] libmachine: (addons-224553) DBG | About to run SSH command:
	I0703 22:48:06.977947   17336 main.go:141] libmachine: (addons-224553) DBG | exit 0
	I0703 22:48:07.116257   17336 main.go:141] libmachine: (addons-224553) DBG | SSH cmd err, output: <nil>: 
	I0703 22:48:07.116532   17336 main.go:141] libmachine: (addons-224553) KVM machine creation complete!
	I0703 22:48:07.116801   17336 main.go:141] libmachine: (addons-224553) Calling .GetConfigRaw
	I0703 22:48:07.117289   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:07.117501   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:07.117670   17336 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0703 22:48:07.117682   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:07.118847   17336 main.go:141] libmachine: Detecting operating system of created instance...
	I0703 22:48:07.118860   17336 main.go:141] libmachine: Waiting for SSH to be available...
	I0703 22:48:07.118865   17336 main.go:141] libmachine: Getting to WaitForSSH function...
	I0703 22:48:07.118870   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:07.121123   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.121520   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:07.121562   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.121694   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:07.121895   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:07.122050   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:07.122183   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:07.122348   17336 main.go:141] libmachine: Using SSH client type: native
	I0703 22:48:07.122595   17336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0703 22:48:07.122608   17336 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0703 22:48:07.235346   17336 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 22:48:07.235373   17336 main.go:141] libmachine: Detecting the provisioner...
	I0703 22:48:07.235385   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:07.238253   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.238712   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:07.238735   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.238940   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:07.239141   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:07.239323   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:07.239497   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:07.239679   17336 main.go:141] libmachine: Using SSH client type: native
	I0703 22:48:07.239901   17336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0703 22:48:07.239915   17336 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0703 22:48:07.352770   17336 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0703 22:48:07.352860   17336 main.go:141] libmachine: found compatible host: buildroot
	I0703 22:48:07.352872   17336 main.go:141] libmachine: Provisioning with buildroot...
	I0703 22:48:07.352883   17336 main.go:141] libmachine: (addons-224553) Calling .GetMachineName
	I0703 22:48:07.353161   17336 buildroot.go:166] provisioning hostname "addons-224553"
	I0703 22:48:07.353189   17336 main.go:141] libmachine: (addons-224553) Calling .GetMachineName
	I0703 22:48:07.353396   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:07.356110   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.356467   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:07.356503   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.356561   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:07.356745   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:07.356882   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:07.357042   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:07.357276   17336 main.go:141] libmachine: Using SSH client type: native
	I0703 22:48:07.357475   17336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0703 22:48:07.357488   17336 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-224553 && echo "addons-224553" | sudo tee /etc/hostname
	I0703 22:48:07.483466   17336 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-224553
	
	I0703 22:48:07.483502   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:07.486162   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.486530   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:07.486556   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.486720   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:07.486912   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:07.487064   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:07.487152   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:07.487265   17336 main.go:141] libmachine: Using SSH client type: native
	I0703 22:48:07.487421   17336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0703 22:48:07.487436   17336 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-224553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-224553/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-224553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 22:48:07.609924   17336 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 22:48:07.609956   17336 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0703 22:48:07.610012   17336 buildroot.go:174] setting up certificates
	I0703 22:48:07.610034   17336 provision.go:84] configureAuth start
	I0703 22:48:07.610052   17336 main.go:141] libmachine: (addons-224553) Calling .GetMachineName
	I0703 22:48:07.610376   17336 main.go:141] libmachine: (addons-224553) Calling .GetIP
	I0703 22:48:07.613087   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.613410   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:07.613439   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.613616   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:07.615445   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.615817   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:07.615838   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.616028   17336 provision.go:143] copyHostCerts
	I0703 22:48:07.616094   17336 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0703 22:48:07.616206   17336 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0703 22:48:07.616268   17336 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0703 22:48:07.616313   17336 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.addons-224553 san=[127.0.0.1 192.168.39.226 addons-224553 localhost minikube]
	I0703 22:48:07.900637   17336 provision.go:177] copyRemoteCerts
	I0703 22:48:07.900692   17336 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 22:48:07.900712   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:07.903599   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.903948   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:07.903979   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:07.904116   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:07.904332   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:07.904497   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:07.904649   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:07.990895   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0703 22:48:08.015932   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0703 22:48:08.041859   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0703 22:48:08.066848   17336 provision.go:87] duration metric: took 456.795917ms to configureAuth
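For reference, the provisioning step above generates a server certificate with SANs [127.0.0.1 192.168.39.226 addons-224553 localhost minikube] and copies it to /etc/docker on the guest. A minimal way to confirm the SANs landed as expected (a sketch, assuming OpenSSL is present in the guest image and reusing the SSH key path shown above; not part of the test itself):

    ssh -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa \
        docker@192.168.39.226 'sudo openssl x509 -in /etc/docker/server.pem -noout -text' \
      | grep -A1 'Subject Alternative Name'
    # expect DNS:addons-224553, DNS:localhost, DNS:minikube plus IPs 127.0.0.1 and 192.168.39.226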
	I0703 22:48:08.066878   17336 buildroot.go:189] setting minikube options for container-runtime
	I0703 22:48:08.067066   17336 config.go:182] Loaded profile config "addons-224553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 22:48:08.067155   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:08.069855   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.070188   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:08.070221   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.070344   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:08.070539   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:08.070692   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:08.070828   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:08.070960   17336 main.go:141] libmachine: Using SSH client type: native
	I0703 22:48:08.071116   17336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0703 22:48:08.071129   17336 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0703 22:48:08.349251   17336 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0703 22:48:08.349285   17336 main.go:141] libmachine: Checking connection to Docker...
	I0703 22:48:08.349293   17336 main.go:141] libmachine: (addons-224553) Calling .GetURL
	I0703 22:48:08.350789   17336 main.go:141] libmachine: (addons-224553) DBG | Using libvirt version 6000000
	I0703 22:48:08.352930   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.353254   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:08.353283   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.353481   17336 main.go:141] libmachine: Docker is up and running!
	I0703 22:48:08.353502   17336 main.go:141] libmachine: Reticulating splines...
	I0703 22:48:08.353510   17336 client.go:171] duration metric: took 24.027472431s to LocalClient.Create
	I0703 22:48:08.353533   17336 start.go:167] duration metric: took 24.027532716s to libmachine.API.Create "addons-224553"
	I0703 22:48:08.353550   17336 start.go:293] postStartSetup for "addons-224553" (driver="kvm2")
	I0703 22:48:08.353559   17336 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 22:48:08.353576   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:08.353809   17336 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 22:48:08.353835   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:08.356217   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.356541   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:08.356568   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.356734   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:08.356906   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:08.357062   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:08.357213   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:08.443397   17336 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 22:48:08.447973   17336 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 22:48:08.448006   17336 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0703 22:48:08.448101   17336 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0703 22:48:08.448132   17336 start.go:296] duration metric: took 94.575779ms for postStartSetup
	I0703 22:48:08.448168   17336 main.go:141] libmachine: (addons-224553) Calling .GetConfigRaw
	I0703 22:48:08.448683   17336 main.go:141] libmachine: (addons-224553) Calling .GetIP
	I0703 22:48:08.451317   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.451627   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:08.451653   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.451920   17336 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/config.json ...
	I0703 22:48:08.452148   17336 start.go:128] duration metric: took 24.145099273s to createHost
	I0703 22:48:08.452171   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:08.454411   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.454673   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:08.454706   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.454864   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:08.455014   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:08.455304   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:08.455504   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:08.455654   17336 main.go:141] libmachine: Using SSH client type: native
	I0703 22:48:08.455825   17336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0703 22:48:08.455838   17336 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0703 22:48:08.569131   17336 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720046888.544149608
	
	I0703 22:48:08.569156   17336 fix.go:216] guest clock: 1720046888.544149608
	I0703 22:48:08.569164   17336 fix.go:229] Guest: 2024-07-03 22:48:08.544149608 +0000 UTC Remote: 2024-07-03 22:48:08.452160548 +0000 UTC m=+24.245255484 (delta=91.98906ms)
	I0703 22:48:08.569200   17336 fix.go:200] guest clock delta is within tolerance: 91.98906ms
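The %!s(MISSING) fragments in the command above are minikube's own log formatting; judging by the returned value (1720046888.544149608), the command run on the guest is 'date +%s.%N', which fix.go compares against the host clock. A rough standalone equivalent (an assumed illustration, not minikube's code; the SSH round-trip slightly inflates the apparent delta):

    host_ts=$(date +%s.%N)
    guest_ts=$(ssh -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa \
        docker@192.168.39.226 'date +%s.%N')
    awk -v h="$host_ts" -v g="$guest_ts" 'BEGIN { printf "guest-host clock delta: %.6fs\n", g - h }'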
	I0703 22:48:08.569205   17336 start.go:83] releasing machines lock for "addons-224553", held for 24.262227321s
	I0703 22:48:08.569237   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:08.569551   17336 main.go:141] libmachine: (addons-224553) Calling .GetIP
	I0703 22:48:08.572518   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.572882   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:08.572902   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.573136   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:08.573607   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:08.573770   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:08.573874   17336 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 22:48:08.573916   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:08.574187   17336 ssh_runner.go:195] Run: cat /version.json
	I0703 22:48:08.574210   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:08.576446   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.576756   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:08.576785   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.576807   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.577052   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:08.577244   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:08.577294   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:08.577319   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:08.577432   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:08.577580   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:08.577597   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:08.577717   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:08.577832   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:08.577943   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:08.657409   17336 ssh_runner.go:195] Run: systemctl --version
	I0703 22:48:08.688879   17336 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0703 22:48:08.848798   17336 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0703 22:48:08.855035   17336 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 22:48:08.855097   17336 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 22:48:08.872890   17336 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0703 22:48:08.872916   17336 start.go:494] detecting cgroup driver to use...
	I0703 22:48:08.872990   17336 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 22:48:08.890622   17336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 22:48:08.905426   17336 docker.go:217] disabling cri-docker service (if available) ...
	I0703 22:48:08.905498   17336 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0703 22:48:08.920142   17336 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0703 22:48:08.934911   17336 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0703 22:48:09.058952   17336 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0703 22:48:09.205853   17336 docker.go:233] disabling docker service ...
	I0703 22:48:09.205984   17336 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0703 22:48:09.221246   17336 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0703 22:48:09.235716   17336 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0703 22:48:09.377464   17336 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0703 22:48:09.506073   17336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0703 22:48:09.520805   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 22:48:09.540410   17336 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0703 22:48:09.540481   17336 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 22:48:09.551473   17336 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0703 22:48:09.551536   17336 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 22:48:09.562641   17336 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 22:48:09.574009   17336 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 22:48:09.584830   17336 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 22:48:09.596084   17336 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 22:48:09.606890   17336 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 22:48:09.625183   17336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
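Taken together, the sed edits above pin the pause image to registry.k8s.io/pause:3.9, switch the cgroup manager to cgroupfs, set conmon_cgroup to "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A quick way to confirm the resulting drop-in on the guest (a sketch based on the commands above, not a capture from the VM):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected lines after the edits:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",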
	I0703 22:48:09.635698   17336 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 22:48:09.645165   17336 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0703 22:48:09.645223   17336 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0703 22:48:09.657629   17336 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
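The failed sysctl probe above is expected on a fresh guest: the net.bridge.* keys only exist once the br_netfilter module is loaded, which is why minikube falls back to modprobe and then enables IPv4 forwarding. Consolidated, the remediation amounts to (a sketch of the commands already logged):

    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo sysctl net.bridge.bridge-nf-call-iptables   # resolvable (and typically 1) once the module is loaded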
	I0703 22:48:09.668281   17336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 22:48:09.781027   17336 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0703 22:48:09.919472   17336 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0703 22:48:09.919562   17336 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0703 22:48:09.924453   17336 start.go:562] Will wait 60s for crictl version
	I0703 22:48:09.924519   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:48:09.928576   17336 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 22:48:09.977626   17336 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0703 22:48:09.977751   17336 ssh_runner.go:195] Run: crio --version
	I0703 22:48:10.011910   17336 ssh_runner.go:195] Run: crio --version
	I0703 22:48:10.047630   17336 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0703 22:48:10.048920   17336 main.go:141] libmachine: (addons-224553) Calling .GetIP
	I0703 22:48:10.051376   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:10.051751   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:10.051775   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:10.052026   17336 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0703 22:48:10.056721   17336 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 22:48:10.070759   17336 kubeadm.go:877] updating cluster {Name:addons-224553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-224553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0703 22:48:10.070895   17336 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 22:48:10.070945   17336 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 22:48:10.106524   17336 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0703 22:48:10.106593   17336 ssh_runner.go:195] Run: which lz4
	I0703 22:48:10.110836   17336 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0703 22:48:10.115233   17336 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0703 22:48:10.115274   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0703 22:48:11.509161   17336 crio.go:462] duration metric: took 1.398368216s to copy over tarball
	I0703 22:48:11.509254   17336 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0703 22:48:13.866860   17336 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.357576208s)
	I0703 22:48:13.866893   17336 crio.go:469] duration metric: took 2.35770323s to extract the tarball
	I0703 22:48:13.866902   17336 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0703 22:48:13.904432   17336 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 22:48:13.946723   17336 crio.go:514] all images are preloaded for cri-o runtime.
	I0703 22:48:13.946742   17336 cache_images.go:84] Images are preloaded, skipping loading
	I0703 22:48:13.946751   17336 kubeadm.go:928] updating node { 192.168.39.226 8443 v1.30.2 crio true true} ...
	I0703 22:48:13.946874   17336 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-224553 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-224553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
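The kubelet flags above are written as a systemd drop-in rather than by editing the unit itself; the scp steps a few lines below place them in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside /lib/systemd/system/kubelet.service. To see the merged unit on the guest (a sketch using standard systemd tooling):

    sudo systemctl cat kubelet   # prints kubelet.service followed by the 10-kubeadm.conf drop-in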
	I0703 22:48:13.946936   17336 ssh_runner.go:195] Run: crio config
	I0703 22:48:13.992450   17336 cni.go:84] Creating CNI manager for ""
	I0703 22:48:13.992468   17336 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 22:48:13.992477   17336 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0703 22:48:13.992497   17336 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.226 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-224553 NodeName:addons-224553 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0703 22:48:13.993083   17336 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.226
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-224553"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
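The rendered kubeadm config above is shipped to the guest as /var/tmp/minikube/kubeadm.yaml (see the scp and cp steps below). If needed, it can be checked with kubeadm's dry-run mode before the real init runs (a sketch using the binaries path inspected in the next step):

    sudo /var/lib/minikube/binaries/v1.30.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run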
	
	I0703 22:48:13.993138   17336 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0703 22:48:14.003290   17336 binaries.go:44] Found k8s binaries, skipping transfer
	I0703 22:48:14.003358   17336 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0703 22:48:14.013110   17336 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0703 22:48:14.030435   17336 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 22:48:14.047726   17336 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0703 22:48:14.064914   17336 ssh_runner.go:195] Run: grep 192.168.39.226	control-plane.minikube.internal$ /etc/hosts
	I0703 22:48:14.069291   17336 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 22:48:14.082770   17336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 22:48:14.203545   17336 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 22:48:14.221435   17336 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553 for IP: 192.168.39.226
	I0703 22:48:14.221461   17336 certs.go:194] generating shared ca certs ...
	I0703 22:48:14.221478   17336 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.221620   17336 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0703 22:48:14.367891   17336 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt ...
	I0703 22:48:14.367920   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt: {Name:mk44cd94bcae977347c648f7581bc4eb639e6e69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.368172   17336 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key ...
	I0703 22:48:14.368204   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key: {Name:mk588f0e29902079d3d139aaf98632aab9ca8ac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.368322   17336 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0703 22:48:14.430890   17336 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt ...
	I0703 22:48:14.430920   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt: {Name:mk7c70f9ef666e5494d5b280d30b8c3aa9020f31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.431089   17336 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key ...
	I0703 22:48:14.431102   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key: {Name:mk5eedc42a2d3889f265a1577b7d508df68e95e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.431200   17336 certs.go:256] generating profile certs ...
	I0703 22:48:14.431254   17336 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.key
	I0703 22:48:14.431269   17336 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt with IP's: []
	I0703 22:48:14.658106   17336 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt ...
	I0703 22:48:14.658138   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: {Name:mk760625e26a9f70ddadef95c5849449332ef189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.658322   17336 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.key ...
	I0703 22:48:14.658336   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.key: {Name:mk687ec39961b5eecf09a22b78c6b0a026328208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.658441   17336 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.key.074e4785
	I0703 22:48:14.658463   17336 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.crt.074e4785 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.226]
	I0703 22:48:14.748918   17336 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.crt.074e4785 ...
	I0703 22:48:14.748950   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.crt.074e4785: {Name:mkb23c6c675fae3f20c7c032aeade4dff7e80d6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.749115   17336 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.key.074e4785 ...
	I0703 22:48:14.749130   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.key.074e4785: {Name:mk9f6660627d2c7616e692c4373a94c3a5262e14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.749216   17336 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.crt.074e4785 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.crt
	I0703 22:48:14.749293   17336 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.key.074e4785 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.key
	I0703 22:48:14.749346   17336 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/proxy-client.key
	I0703 22:48:14.749389   17336 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/proxy-client.crt with IP's: []
	I0703 22:48:14.872491   17336 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/proxy-client.crt ...
	I0703 22:48:14.872519   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/proxy-client.crt: {Name:mk6bef5e0380657d2d4b606024ec41f1a0380b69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.872672   17336 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/proxy-client.key ...
	I0703 22:48:14.872682   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/proxy-client.key: {Name:mk21e3c14b9e64aa3d0c956246001f57519c26ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:14.872838   17336 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0703 22:48:14.872871   17336 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0703 22:48:14.872894   17336 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0703 22:48:14.872916   17336 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0703 22:48:14.873490   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 22:48:14.911108   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 22:48:14.943323   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 22:48:14.976483   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0703 22:48:15.004167   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0703 22:48:15.030815   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0703 22:48:15.058735   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 22:48:15.085824   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0703 22:48:15.112839   17336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 22:48:15.139748   17336 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0703 22:48:15.158980   17336 ssh_runner.go:195] Run: openssl version
	I0703 22:48:15.165615   17336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 22:48:15.178437   17336 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 22:48:15.184027   17336 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 22:48:15.184096   17336 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 22:48:15.190536   17336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
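The two openssl steps above follow the standard CApath convention: the subject-name hash of minikubeCA.pem (b5213941) becomes the symlink name that OpenSSL-based clients use to find the CA under /etc/ssl/certs. A quick check (a sketch repeating what was just logged):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem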
	I0703 22:48:15.202900   17336 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 22:48:15.207605   17336 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0703 22:48:15.207663   17336 kubeadm.go:391] StartCluster: {Name:addons-224553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-224553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 22:48:15.207748   17336 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0703 22:48:15.207812   17336 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0703 22:48:15.247710   17336 cri.go:89] found id: ""
	I0703 22:48:15.247784   17336 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0703 22:48:15.258745   17336 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0703 22:48:15.270139   17336 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0703 22:48:15.281463   17336 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0703 22:48:15.281487   17336 kubeadm.go:156] found existing configuration files:
	
	I0703 22:48:15.281542   17336 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0703 22:48:15.291568   17336 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0703 22:48:15.291636   17336 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0703 22:48:15.302447   17336 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0703 22:48:15.312964   17336 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0703 22:48:15.313015   17336 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0703 22:48:15.323858   17336 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0703 22:48:15.334567   17336 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0703 22:48:15.334623   17336 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0703 22:48:15.345341   17336 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0703 22:48:15.355789   17336 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0703 22:48:15.355848   17336 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0703 22:48:15.366807   17336 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0703 22:48:15.569319   17336 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0703 22:48:25.849286   17336 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0703 22:48:25.849351   17336 kubeadm.go:309] [preflight] Running pre-flight checks
	I0703 22:48:25.849437   17336 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0703 22:48:25.849588   17336 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0703 22:48:25.849765   17336 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0703 22:48:25.849876   17336 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0703 22:48:25.851491   17336 out.go:204]   - Generating certificates and keys ...
	I0703 22:48:25.851590   17336 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0703 22:48:25.851676   17336 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0703 22:48:25.851774   17336 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0703 22:48:25.851854   17336 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0703 22:48:25.851940   17336 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0703 22:48:25.852015   17336 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0703 22:48:25.852092   17336 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0703 22:48:25.852235   17336 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-224553 localhost] and IPs [192.168.39.226 127.0.0.1 ::1]
	I0703 22:48:25.852294   17336 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0703 22:48:25.852427   17336 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-224553 localhost] and IPs [192.168.39.226 127.0.0.1 ::1]
	I0703 22:48:25.852485   17336 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0703 22:48:25.852561   17336 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0703 22:48:25.852602   17336 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0703 22:48:25.852654   17336 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0703 22:48:25.852705   17336 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0703 22:48:25.852757   17336 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0703 22:48:25.852803   17336 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0703 22:48:25.852904   17336 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0703 22:48:25.852963   17336 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0703 22:48:25.853038   17336 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0703 22:48:25.853114   17336 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0703 22:48:25.854692   17336 out.go:204]   - Booting up control plane ...
	I0703 22:48:25.854805   17336 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0703 22:48:25.854910   17336 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0703 22:48:25.854984   17336 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0703 22:48:25.855073   17336 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0703 22:48:25.855156   17336 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0703 22:48:25.855225   17336 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0703 22:48:25.855388   17336 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0703 22:48:25.855510   17336 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0703 22:48:25.855602   17336 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.740984ms
	I0703 22:48:25.855697   17336 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0703 22:48:25.855780   17336 kubeadm.go:309] [api-check] The API server is healthy after 5.502236524s
	I0703 22:48:25.855953   17336 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0703 22:48:25.856086   17336 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0703 22:48:25.856138   17336 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0703 22:48:25.856342   17336 kubeadm.go:309] [mark-control-plane] Marking the node addons-224553 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0703 22:48:25.856426   17336 kubeadm.go:309] [bootstrap-token] Using token: x971cj.9c58key722wzoeyj
	I0703 22:48:25.857924   17336 out.go:204]   - Configuring RBAC rules ...
	I0703 22:48:25.858019   17336 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0703 22:48:25.858089   17336 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0703 22:48:25.858215   17336 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0703 22:48:25.858336   17336 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0703 22:48:25.858444   17336 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0703 22:48:25.858525   17336 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0703 22:48:25.858642   17336 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0703 22:48:25.858701   17336 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0703 22:48:25.858745   17336 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0703 22:48:25.858751   17336 kubeadm.go:309] 
	I0703 22:48:25.858800   17336 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0703 22:48:25.858805   17336 kubeadm.go:309] 
	I0703 22:48:25.858877   17336 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0703 22:48:25.858885   17336 kubeadm.go:309] 
	I0703 22:48:25.858921   17336 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0703 22:48:25.858992   17336 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0703 22:48:25.859065   17336 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0703 22:48:25.859074   17336 kubeadm.go:309] 
	I0703 22:48:25.859125   17336 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0703 22:48:25.859131   17336 kubeadm.go:309] 
	I0703 22:48:25.859171   17336 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0703 22:48:25.859177   17336 kubeadm.go:309] 
	I0703 22:48:25.859236   17336 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0703 22:48:25.859311   17336 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0703 22:48:25.859383   17336 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0703 22:48:25.859391   17336 kubeadm.go:309] 
	I0703 22:48:25.859474   17336 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0703 22:48:25.859548   17336 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0703 22:48:25.859555   17336 kubeadm.go:309] 
	I0703 22:48:25.859631   17336 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token x971cj.9c58key722wzoeyj \
	I0703 22:48:25.859725   17336 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 \
	I0703 22:48:25.859745   17336 kubeadm.go:309] 	--control-plane 
	I0703 22:48:25.859749   17336 kubeadm.go:309] 
	I0703 22:48:25.859817   17336 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0703 22:48:25.859823   17336 kubeadm.go:309] 
	I0703 22:48:25.859906   17336 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token x971cj.9c58key722wzoeyj \
	I0703 22:48:25.860008   17336 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 
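The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch to recompute it on the control-plane node (the ca.crt path assumes the standard kubeadm layout; this is an illustration, not part of the test harness):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path assumed from the standard kubeadm layout on the control-plane node.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The discovery hash is the SHA-256 of the DER-encoded
	// SubjectPublicKeyInfo of the cluster CA certificate.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum)
}

Comparing this output against the hash in the join command confirms a joining node is talking to the intended cluster CA.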
	I0703 22:48:25.860019   17336 cni.go:84] Creating CNI manager for ""
	I0703 22:48:25.860025   17336 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 22:48:25.861503   17336 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0703 22:48:25.862807   17336 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0703 22:48:25.874755   17336 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
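The 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist configures the bridge CNI selected above. An illustrative sketch of a minimal bridge conflist of this kind and how it could be written out; the exact content and pod subnet minikube generates will differ:

package main

import (
	"fmt"
	"os"
)

// A minimal bridge CNI conflist of the kind written to
// /etc/cni/net.d/1-k8s.conflist. The subnet and plugin options here are
// illustrative assumptions, not the exact file minikube generates.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// Writing to /etc/cni/net.d requires root on the node; path taken from the log above.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}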
	I0703 22:48:25.898038   17336 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0703 22:48:25.898167   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:25.898179   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-224553 minikube.k8s.io/updated_at=2024_07_03T22_48_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e minikube.k8s.io/name=addons-224553 minikube.k8s.io/primary=true
	I0703 22:48:25.933488   17336 ops.go:34] apiserver oom_adj: -16
	I0703 22:48:26.048446   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:26.549223   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:27.049228   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:27.548887   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:28.048734   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:28.548593   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:29.048734   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:29.548782   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:30.048764   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:30.549072   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:31.048936   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:31.548495   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:32.048493   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:32.549477   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:33.049265   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:33.549489   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:34.049091   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:34.549080   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:35.048475   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:35.549107   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:36.049204   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:36.549330   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:37.049178   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:37.548838   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:38.049511   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:38.548935   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:39.049031   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:39.548814   17336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 22:48:39.643398   17336 kubeadm.go:1107] duration metric: took 13.745302371s to wait for elevateKubeSystemPrivileges
	W0703 22:48:39.643441   17336 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0703 22:48:39.643449   17336 kubeadm.go:393] duration metric: took 24.435790735s to StartCluster
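The 13.7s spent in elevateKubeSystemPrivileges above comes from the retry loop visible in the repeated "kubectl get sa default" runs: the command is re-run roughly every 500ms until the default service account exists. A sketch of that poll-until-ready pattern (not minikube's actual helper; the kubeconfig path is taken from the log):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` about every 500ms until it
// succeeds or the context expires, mirroring the retry loop in the log above.
func waitForDefaultSA(ctx context.Context, kubeconfig string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		cmd := exec.CommandContext(ctx, "kubectl", "get", "sa", "default",
			"--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for default service account: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForDefaultSA(ctx, "/var/lib/minikube/kubeconfig"); err != nil {
		panic(err)
	}
}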
	I0703 22:48:39.643465   17336 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:39.643594   17336 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 22:48:39.643972   17336 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:48:39.644169   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0703 22:48:39.644183   17336 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 22:48:39.644278   17336 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0703 22:48:39.644352   17336 config.go:182] Loaded profile config "addons-224553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
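The interleaved "Setting addon ...=true" lines that follow come from fanning the toEnable map out across goroutines, one per enabled addon, which is why their timestamps overlap. A simplified sketch of that fan-out (illustrative only; the enable callback and error collection are stand-ins, not minikube's addons package):

package main

import (
	"fmt"
	"sync"
)

// enableAddons walks an addon->enabled map like toEnable above and enables
// each selected addon concurrently, collecting any errors.
func enableAddons(toEnable map[string]bool, enable func(name string) error) []error {
	var (
		wg   sync.WaitGroup
		mu   sync.Mutex
		errs []error
	)
	for name, on := range toEnable {
		if !on {
			continue
		}
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			if err := enable(name); err != nil {
				mu.Lock()
				errs = append(errs, fmt.Errorf("enable %s: %w", name, err))
				mu.Unlock()
			}
		}(name)
	}
	wg.Wait()
	return errs
}

func main() {
	toEnable := map[string]bool{"ingress": true, "metrics-server": true, "ambassador": false}
	errs := enableAddons(toEnable, func(name string) error {
		fmt.Println("enabling", name)
		return nil
	})
	fmt.Println("errors:", errs)
}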
	I0703 22:48:39.644386   17336 addons.go:69] Setting yakd=true in profile "addons-224553"
	I0703 22:48:39.644397   17336 addons.go:69] Setting metrics-server=true in profile "addons-224553"
	I0703 22:48:39.644413   17336 addons.go:69] Setting gcp-auth=true in profile "addons-224553"
	I0703 22:48:39.644427   17336 addons.go:234] Setting addon yakd=true in "addons-224553"
	I0703 22:48:39.644420   17336 addons.go:69] Setting inspektor-gadget=true in profile "addons-224553"
	I0703 22:48:39.644442   17336 addons.go:69] Setting volcano=true in profile "addons-224553"
	I0703 22:48:39.644448   17336 mustload.go:65] Loading cluster: addons-224553
	I0703 22:48:39.644460   17336 addons.go:234] Setting addon volcano=true in "addons-224553"
	I0703 22:48:39.644468   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.644470   17336 addons.go:69] Setting helm-tiller=true in profile "addons-224553"
	I0703 22:48:39.644510   17336 addons.go:69] Setting ingress-dns=true in profile "addons-224553"
	I0703 22:48:39.644538   17336 addons.go:234] Setting addon ingress-dns=true in "addons-224553"
	I0703 22:48:39.644551   17336 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-224553"
	I0703 22:48:39.644568   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.644596   17336 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-224553"
	I0703 22:48:39.644593   17336 addons.go:69] Setting cloud-spanner=true in profile "addons-224553"
	I0703 22:48:39.644620   17336 addons.go:234] Setting addon cloud-spanner=true in "addons-224553"
	I0703 22:48:39.644628   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.644646   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.644656   17336 config.go:182] Loaded profile config "addons-224553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 22:48:39.644904   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.644935   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.644964   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.644986   17336 addons.go:69] Setting default-storageclass=true in profile "addons-224553"
	I0703 22:48:39.645033   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.644433   17336 addons.go:69] Setting registry=true in profile "addons-224553"
	I0703 22:48:39.645087   17336 addons.go:234] Setting addon registry=true in "addons-224553"
	I0703 22:48:39.645034   17336 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-224553"
	I0703 22:48:39.644429   17336 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-224553"
	I0703 22:48:39.645114   17336 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-224553"
	I0703 22:48:39.645137   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.644424   17336 addons.go:234] Setting addon metrics-server=true in "addons-224553"
	I0703 22:48:39.645227   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.644973   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.645337   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.644541   17336 addons.go:234] Setting addon helm-tiller=true in "addons-224553"
	I0703 22:48:39.644491   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.645400   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.645445   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.645465   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.645610   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.645636   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.645759   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.645780   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.644462   17336 addons.go:234] Setting addon inspektor-gadget=true in "addons-224553"
	I0703 22:48:39.644496   17336 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-224553"
	I0703 22:48:39.645904   17336 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-224553"
	I0703 22:48:39.644503   17336 addons.go:69] Setting ingress=true in profile "addons-224553"
	I0703 22:48:39.645998   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.646011   17336 addons.go:234] Setting addon ingress=true in "addons-224553"
	I0703 22:48:39.646021   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.646047   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.646078   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.646094   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.646150   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.644978   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.646251   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.644392   17336 addons.go:69] Setting storage-provisioner=true in profile "addons-224553"
	I0703 22:48:39.646311   17336 addons.go:234] Setting addon storage-provisioner=true in "addons-224553"
	I0703 22:48:39.646370   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.646388   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.644985   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.646452   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.646460   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.646475   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.646496   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.646520   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.644990   17336 addons.go:69] Setting volumesnapshots=true in profile "addons-224553"
	I0703 22:48:39.646586   17336 addons.go:234] Setting addon volumesnapshots=true in "addons-224553"
	I0703 22:48:39.646696   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.647021   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.647043   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.647045   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.647022   17336 out.go:177] * Verifying Kubernetes components...
	I0703 22:48:39.647446   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.647475   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.647661   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.648367   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.648399   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.660051   17336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 22:48:39.667037   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44937
	I0703 22:48:39.667489   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.668020   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.668048   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.668432   17336 main.go:141] libmachine: () Calling .GetMachineName
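Each "Plugin server listening at address 127.0.0.1:..." line corresponds to a per-driver plugin process serving an RPC API on a loopback port, which the main process then calls (GetVersion, SetConfigRaw, GetMachineName, GetState, ...). A toy net/rpc sketch of that pattern; this is not libmachine's actual wire protocol or method set:

package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// Driver stands in for a machine driver plugin; the real method set differs.
type Driver struct{}

// GetVersion reports the plugin API version (the log shows version 1).
func (d *Driver) GetVersion(_ int, reply *int) error {
	*reply = 1
	return nil
}

func main() {
	srv := rpc.NewServer()
	if err := srv.Register(&Driver{}); err != nil {
		panic(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0") // pick any free loopback port
	if err != nil {
		panic(err)
	}
	fmt.Println("plugin server listening at address", ln.Addr())
	go srv.Accept(ln)

	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	defer client.Close()
	var version int
	if err := client.Call("Driver.GetVersion", 0, &version); err != nil {
		panic(err)
	}
	fmt.Println("using API version", version)
}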
	I0703 22:48:39.668963   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.668999   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.670986   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42681
	I0703 22:48:39.671433   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.671916   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.671932   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.672318   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.672869   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.672904   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.673919   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40209
	I0703 22:48:39.674346   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.674424   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34825
	I0703 22:48:39.674704   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.675129   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.675145   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.675249   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.675258   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.675559   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.675676   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.675720   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.676608   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.676643   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.682597   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0703 22:48:39.682795   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43201
	I0703 22:48:39.684950   17336 addons.go:234] Setting addon default-storageclass=true in "addons-224553"
	I0703 22:48:39.684994   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.685380   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.685415   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.686314   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.686893   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.687605   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.687615   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.687624   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.687632   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.687996   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.688592   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.688631   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.690232   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33419
	I0703 22:48:39.690272   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.691079   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.691119   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.691517   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.692093   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.692114   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.692987   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.693433   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.693472   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.698244   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45759
	I0703 22:48:39.698327   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35927
	I0703 22:48:39.698260   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I0703 22:48:39.698869   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.698990   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.699435   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.699461   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.699762   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.699826   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.699846   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.700278   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.700314   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.700400   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.700832   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.700860   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.700916   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.701450   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.704195   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.708556   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.709502   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.709530   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.714514   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36373
	I0703 22:48:39.716244   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.716777   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.716797   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.717168   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.722969   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45917
	I0703 22:48:39.724242   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0703 22:48:39.724790   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.724826   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.725140   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.725355   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46019
	I0703 22:48:39.725889   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.725960   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.725981   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.726310   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.726410   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33491
	I0703 22:48:39.726555   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.726805   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.726828   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.726884   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.727228   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.727357   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.727371   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.727419   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.727651   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.727768   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.727814   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.727947   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.728542   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.728562   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.729055   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.729774   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.729943   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.730457   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.732691   17336 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0703 22:48:39.732739   17336 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0703 22:48:39.733074   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35503
	I0703 22:48:39.733343   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.734412   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.734698   17336 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0703 22:48:39.734742   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0703 22:48:39.734774   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.734750   17336 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0703 22:48:39.734853   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0703 22:48:39.734877   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.734925   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.734946   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.735288   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.735361   17336 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0703 22:48:39.735522   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.737841   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44669
	I0703 22:48:39.737985   17336 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0703 22:48:39.738252   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.738854   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.738870   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.739242   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.739288   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.739482   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.739826   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.739847   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.739982   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.740125   17336 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0703 22:48:39.740339   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.740525   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.740690   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.740731   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.740972   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
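The sshutil lines above and below each open an SSH session to the VM so the addon manifests can be copied over scp and commands run as the docker user. A minimal client sketch using golang.org/x/crypto/ssh, with the key path, user, and address taken from the log; this is not minikube's sshutil implementation, and skipping host-key verification is acceptable only for throwaway test VMs:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // only for disposable test VMs
	}
	client, err := ssh.Dial("tcp", "192.168.39.226:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("sudo mkdir -p /etc/cni/net.d")
	fmt.Println(string(out), err)
}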
	I0703 22:48:39.741070   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.741173   17336 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0703 22:48:39.741253   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.741294   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.741508   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.741820   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.742009   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.742271   17336 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0703 22:48:39.742707   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.743793   17336 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0703 22:48:39.744500   17336 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0703 22:48:39.744519   17336 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0703 22:48:39.745643   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34881
	I0703 22:48:39.746057   17336 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0703 22:48:39.746076   17336 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0703 22:48:39.746095   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.746154   17336 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0703 22:48:39.746850   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.747277   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40113
	I0703 22:48:39.747608   17336 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0703 22:48:39.747623   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.748108   17336 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0703 22:48:39.748124   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0703 22:48:39.748139   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.748144   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.748164   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.749106   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.749124   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.749511   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.750016   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.750329   17336 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0703 22:48:39.750625   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.750656   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.750881   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.751436   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.751476   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.751491   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.751762   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.751958   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.752143   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.752330   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.752576   17336 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0703 22:48:39.752884   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.752905   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45029
	I0703 22:48:39.753529   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.753560   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.753562   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.753696   17336 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0703 22:48:39.753713   17336 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0703 22:48:39.753730   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.753810   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.754089   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.754112   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.754169   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.754296   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.755040   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.756245   17336 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-224553"
	I0703 22:48:39.756280   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.756634   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.756667   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.756904   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.757372   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.757470   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.757801   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.757833   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.757979   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.758127   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.758254   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.758386   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.759298   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.761022   17336 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0703 22:48:39.762114   17336 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0703 22:48:39.762133   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0703 22:48:39.762152   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.764710   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44407
	I0703 22:48:39.765406   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.765452   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43841
	I0703 22:48:39.766205   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.766233   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.766355   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.766383   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.766385   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.766476   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.766631   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.766927   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.767054   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.767084   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.767496   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.767723   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.767811   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.767831   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.767976   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39717
	I0703 22:48:39.768725   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.768905   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.769114   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.769451   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34839
	I0703 22:48:39.769601   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.770346   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.770886   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.770906   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.771269   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.771838   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.771893   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.771871   17336 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.1
	I0703 22:48:39.772076   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44975
	I0703 22:48:39.772380   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.772403   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.772869   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.773098   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.773245   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.773366   17336 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0703 22:48:39.773380   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0703 22:48:39.773397   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.773581   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.773593   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.774022   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.774263   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.774220   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.774223   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40819
	I0703 22:48:39.774478   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.774929   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.775214   17336 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0703 22:48:39.775477   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.775497   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.775834   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.776030   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.776380   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.776628   17336 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0703 22:48:39.776730   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0703 22:48:39.776755   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.777994   17336 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0703 22:48:39.778501   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.778600   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.778786   17336 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0703 22:48:39.778800   17336 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0703 22:48:39.778825   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.779101   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.779125   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.779278   17336 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0703 22:48:39.779293   17336 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0703 22:48:39.779319   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.779569   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.779757   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.779909   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.780254   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.782486   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.782910   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.782928   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.782962   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.783145   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.783349   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.783522   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.783547   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.783551   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.783703   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.783760   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.784002   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.784043   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.784173   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.784195   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.784239   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.784388   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.784623   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.784802   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.784959   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.785108   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.788173   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36487
	I0703 22:48:39.788722   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.789271   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.789295   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.789696   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.790050   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34157
	I0703 22:48:39.790325   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.790370   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.790461   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.790951   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.790976   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.791329   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.791592   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.793317   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:39.793694   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:39.793732   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:39.799382   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38129
	I0703 22:48:39.799845   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46079
	I0703 22:48:39.799907   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.800415   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.800435   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.800459   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.800846   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.800863   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.800918   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.801174   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.801239   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.801337   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.803307   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.803589   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.805456   17336 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0703 22:48:39.805468   17336 out.go:177]   - Using image docker.io/registry:2.8.3
	I0703 22:48:39.806684   17336 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0703 22:48:39.806703   17336 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0703 22:48:39.806726   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.807941   17336 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0703 22:48:39.809185   17336 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0703 22:48:39.809203   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0703 22:48:39.809224   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.810211   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39191
	I0703 22:48:39.810378   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.810587   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.810650   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.810680   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.810816   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.810986   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.811101   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.811120   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.811203   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.811246   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41463
	I0703 22:48:39.811386   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.811441   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.811507   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.811814   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.811999   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.812022   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.812992   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.813152   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.813238   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.813429   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.813935   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.813971   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.814587   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.814798   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.814845   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.814953   17336 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0703 22:48:39.814973   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.815016   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:39.815029   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:39.815143   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.815173   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:39.815151   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:39.815188   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:39.815195   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:39.815201   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:39.815368   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:39.815382   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	W0703 22:48:39.815450   17336 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0703 22:48:39.817615   17336 out.go:177]   - Using image docker.io/busybox:stable
	I0703 22:48:39.817963   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36393
	I0703 22:48:39.818263   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.818722   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.818748   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.818842   17336 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0703 22:48:39.818857   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0703 22:48:39.818869   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:39.819026   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.819179   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:39.821216   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:39.822559   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.822807   17336 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0703 22:48:39.822992   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.823011   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.823182   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.823386   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.823558   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.823666   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.823864   17336 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0703 22:48:39.823909   17336 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0703 22:48:39.823926   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	W0703 22:48:39.825879   17336 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:42352->192.168.39.226:22: read: connection reset by peer
	I0703 22:48:39.825904   17336 retry.go:31] will retry after 287.956133ms: ssh: handshake failed: read tcp 192.168.39.1:42352->192.168.39.226:22: read: connection reset by peer
	I0703 22:48:39.826655   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.827146   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:39.827165   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:39.827369   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:39.827564   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:39.827672   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:39.827764   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:39.829595   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34383
	I0703 22:48:39.829917   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:39.830431   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:39.830446   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:39.830705   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:39.830926   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:40.079721   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0703 22:48:40.134233   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0703 22:48:40.181206   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0703 22:48:40.182841   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0703 22:48:40.204402   17336 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0703 22:48:40.204423   17336 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0703 22:48:40.256900   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0703 22:48:40.266734   17336 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0703 22:48:40.266755   17336 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0703 22:48:40.282768   17336 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0703 22:48:40.282796   17336 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0703 22:48:40.294893   17336 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0703 22:48:40.294914   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0703 22:48:40.327400   17336 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0703 22:48:40.327433   17336 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0703 22:48:40.357620   17336 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0703 22:48:40.357643   17336 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0703 22:48:40.411966   17336 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 22:48:40.412017   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
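(For readability: the one-liner above rewrites CoreDNS so that host.minikube.internal resolves to the host address. A sketch of the same pipeline with the sudo/kubeconfig plumbing stripped; the test runs only the logged command, not this one:)

    # Edit the kube-system/coredns ConfigMap:
    #  - insert a hosts{} stanza mapping host.minikube.internal -> 192.168.39.1
    #    ahead of the "forward . /etc/resolv.conf" plugin
    #  - insert a "log" directive ahead of "errors"
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | kubectl replace -f -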
	I0703 22:48:40.421659   17336 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0703 22:48:40.421681   17336 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0703 22:48:40.447851   17336 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0703 22:48:40.447895   17336 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0703 22:48:40.516621   17336 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0703 22:48:40.516648   17336 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0703 22:48:40.519358   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0703 22:48:40.530210   17336 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0703 22:48:40.530233   17336 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0703 22:48:40.559518   17336 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0703 22:48:40.559548   17336 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0703 22:48:40.623727   17336 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0703 22:48:40.623754   17336 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0703 22:48:40.688492   17336 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0703 22:48:40.688523   17336 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0703 22:48:40.743066   17336 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0703 22:48:40.743093   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0703 22:48:40.749857   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0703 22:48:40.775816   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0703 22:48:40.878348   17336 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0703 22:48:40.878377   17336 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0703 22:48:40.904986   17336 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0703 22:48:40.905005   17336 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0703 22:48:40.986224   17336 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0703 22:48:40.986250   17336 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0703 22:48:40.988423   17336 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0703 22:48:40.988440   17336 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0703 22:48:41.025646   17336 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0703 22:48:41.025662   17336 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0703 22:48:41.081952   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0703 22:48:41.202145   17336 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0703 22:48:41.202180   17336 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0703 22:48:41.258861   17336 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0703 22:48:41.258885   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0703 22:48:41.275684   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0703 22:48:41.310924   17336 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0703 22:48:41.310958   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0703 22:48:41.330028   17336 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0703 22:48:41.330058   17336 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0703 22:48:41.650425   17336 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0703 22:48:41.650447   17336 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0703 22:48:41.674597   17336 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0703 22:48:41.674617   17336 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0703 22:48:41.685464   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0703 22:48:41.686467   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.606717364s)
	I0703 22:48:41.686500   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:41.686510   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:41.686801   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:41.686822   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:41.686833   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:41.686843   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:41.687198   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:41.687246   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:41.687258   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:41.693490   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:41.693514   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:41.693823   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:41.693845   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:41.755645   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0703 22:48:41.891776   17336 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0703 22:48:41.891808   17336 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0703 22:48:41.893902   17336 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0703 22:48:41.893925   17336 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0703 22:48:41.954505   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.820231954s)
	I0703 22:48:41.954551   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:41.954559   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:41.954851   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:41.954870   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:41.954885   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:41.954893   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:41.954854   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:41.955224   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:41.955258   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:41.955274   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:42.090381   17336 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0703 22:48:42.090399   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0703 22:48:42.195193   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0703 22:48:42.333722   17336 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0703 22:48:42.333744   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0703 22:48:42.574233   17336 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0703 22:48:42.574256   17336 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0703 22:48:42.956006   17336 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0703 22:48:42.956027   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0703 22:48:43.409304   17336 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0703 22:48:43.409332   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0703 22:48:43.813891   17336 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0703 22:48:43.813922   17336 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0703 22:48:44.086864   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0703 22:48:45.073346   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.892093242s)
	I0703 22:48:45.073399   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:45.073414   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:45.073681   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:45.073780   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:45.073802   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:45.073813   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:45.073822   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:45.074137   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:45.074151   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:46.913386   17336 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0703 22:48:46.913429   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:46.916270   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:46.916674   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:46.916721   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:46.916826   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:46.917022   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:46.917203   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:46.917336   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:47.335079   17336 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0703 22:48:47.448710   17336 addons.go:234] Setting addon gcp-auth=true in "addons-224553"
	I0703 22:48:47.448768   17336 host.go:66] Checking if "addons-224553" exists ...
	I0703 22:48:47.449210   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:47.449246   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:47.464361   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39071
	I0703 22:48:47.464929   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:47.465485   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:47.465510   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:47.465889   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:47.466480   17336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 22:48:47.466509   17336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:48:47.482806   17336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0703 22:48:47.483221   17336 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:48:47.483741   17336 main.go:141] libmachine: Using API Version  1
	I0703 22:48:47.483763   17336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:48:47.484122   17336 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:48:47.484368   17336 main.go:141] libmachine: (addons-224553) Calling .GetState
	I0703 22:48:47.486155   17336 main.go:141] libmachine: (addons-224553) Calling .DriverName
	I0703 22:48:47.486403   17336 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0703 22:48:47.486435   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHHostname
	I0703 22:48:47.489294   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:47.489781   17336 main.go:141] libmachine: (addons-224553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:17:a3", ip: ""} in network mk-addons-224553: {Iface:virbr1 ExpiryTime:2024-07-03 23:47:59 +0000 UTC Type:0 Mac:52:54:00:d2:17:a3 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:addons-224553 Clientid:01:52:54:00:d2:17:a3}
	I0703 22:48:47.489812   17336 main.go:141] libmachine: (addons-224553) DBG | domain addons-224553 has defined IP address 192.168.39.226 and MAC address 52:54:00:d2:17:a3 in network mk-addons-224553
	I0703 22:48:47.490022   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHPort
	I0703 22:48:47.490196   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHKeyPath
	I0703 22:48:47.490352   17336 main.go:141] libmachine: (addons-224553) Calling .GetSSHUsername
	I0703 22:48:47.490474   17336 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/addons-224553/id_rsa Username:docker}
	I0703 22:48:48.520052   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.337177521s)
	I0703 22:48:48.520098   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.263163243s)
	I0703 22:48:48.520132   17336 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.108089692s)
	I0703 22:48:48.520152   17336 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.108159677s)
	I0703 22:48:48.520159   17336 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0703 22:48:48.520181   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.000804671s)
	I0703 22:48:48.520202   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520215   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520136   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520281   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520318   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.770432201s)
	I0703 22:48:48.520107   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520339   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520346   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520355   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520422   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.744569261s)
	I0703 22:48:48.520440   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520448   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520503   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.438522753s)
	I0703 22:48:48.520517   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520525   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520580   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.520590   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.520599   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520607   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520609   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.244895008s)
	I0703 22:48:48.520625   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520648   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520675   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.835184153s)
	I0703 22:48:48.520701   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520707   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.520713   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520736   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.520763   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.520771   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520779   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.520812   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.765134113s)
	W0703 22:48:48.520838   17336 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0703 22:48:48.520857   17336 retry.go:31] will retry after 344.93177ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
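(The failure above is the usual CRD-establishment race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but the CRD introducing that kind was created in the same apply batch and is not yet established, hence "ensure CRDs are installed first". minikube handles it by retrying, and the later log lines show the batch being re-applied with kubectl apply --force. Outside of a retry loop the race is typically avoided by waiting for the CRD first; a minimal sketch, assuming the file and CRD names that appear in the output above:)

    # Illustrative only; not commands run by this test.
    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    # Once the CRD is established, the VolumeSnapshotClass object applies cleanly.
    kubectl apply -f csi-hostpath-snapshotclass.yaml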
	I0703 22:48:48.520942   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.325716721s)
	I0703 22:48:48.520958   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.520966   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.521103   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.521121   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.521145   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.521151   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.521166   17336 addons.go:475] Verifying addon ingress=true in "addons-224553"
	I0703 22:48:48.521291   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.521305   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.521316   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.521321   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.521324   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.521375   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.521384   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.521393   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.521401   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.521402   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.521409   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.521410   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.521419   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.521425   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.521233   17336 node_ready.go:35] waiting up to 6m0s for node "addons-224553" to be "Ready" ...
	I0703 22:48:48.521597   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.521608   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.521687   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.521704   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.521714   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.521740   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.521757   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.521764   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.521726   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.521782   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.521790   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.521799   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.521857   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.521865   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.521926   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.521936   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.522121   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.522145   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.522151   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.522159   17336 addons.go:475] Verifying addon metrics-server=true in "addons-224553"
	I0703 22:48:48.523466   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.523497   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.523505   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.523512   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.523520   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.524653   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.524684   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.524692   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.524700   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.524707   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.524746   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.524752   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.524751   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.524775   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.524780   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.524782   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.524987   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.524995   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.525370   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.525416   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.525439   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.525445   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.525455   17336 addons.go:475] Verifying addon registry=true in "addons-224553"
	I0703 22:48:48.525976   17336 out.go:177] * Verifying ingress addon...
	I0703 22:48:48.526728   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.526744   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.527714   17336 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-224553 service yakd-dashboard -n yakd-dashboard
	
	I0703 22:48:48.528459   17336 out.go:177] * Verifying registry addon...
	I0703 22:48:48.529125   17336 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0703 22:48:48.530895   17336 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0703 22:48:48.544279   17336 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0703 22:48:48.544301   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:48.544581   17336 node_ready.go:49] node "addons-224553" has status "Ready":"True"
	I0703 22:48:48.544605   17336 node_ready.go:38] duration metric: took 23.04857ms for node "addons-224553" to be "Ready" ...
	I0703 22:48:48.544617   17336 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 22:48:48.555574   17336 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0703 22:48:48.555605   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:48.584978   17336 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4lgcj" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.593331   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:48.593354   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:48.593757   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:48.593773   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:48.593789   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:48.632188   17336 pod_ready.go:92] pod "coredns-7db6d8ff4d-4lgcj" in "kube-system" namespace has status "Ready":"True"
	I0703 22:48:48.632214   17336 pod_ready.go:81] duration metric: took 47.208519ms for pod "coredns-7db6d8ff4d-4lgcj" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.632223   17336 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-h6q2w" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.658073   17336 pod_ready.go:92] pod "coredns-7db6d8ff4d-h6q2w" in "kube-system" namespace has status "Ready":"True"
	I0703 22:48:48.658093   17336 pod_ready.go:81] duration metric: took 25.864268ms for pod "coredns-7db6d8ff4d-h6q2w" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.658103   17336 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-224553" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.718601   17336 pod_ready.go:92] pod "etcd-addons-224553" in "kube-system" namespace has status "Ready":"True"
	I0703 22:48:48.718635   17336 pod_ready.go:81] duration metric: took 60.524732ms for pod "etcd-addons-224553" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.718648   17336 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-224553" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.751864   17336 pod_ready.go:92] pod "kube-apiserver-addons-224553" in "kube-system" namespace has status "Ready":"True"
	I0703 22:48:48.751902   17336 pod_ready.go:81] duration metric: took 33.245273ms for pod "kube-apiserver-addons-224553" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.751916   17336 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-224553" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.866992   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0703 22:48:48.925435   17336 pod_ready.go:92] pod "kube-controller-manager-addons-224553" in "kube-system" namespace has status "Ready":"True"
	I0703 22:48:48.925468   17336 pod_ready.go:81] duration metric: took 173.544287ms for pod "kube-controller-manager-addons-224553" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:48.925481   17336 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ll2cf" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:49.024077   17336 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-224553" context rescaled to 1 replicas
	I0703 22:48:49.042161   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:49.042169   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:49.328058   17336 pod_ready.go:92] pod "kube-proxy-ll2cf" in "kube-system" namespace has status "Ready":"True"
	I0703 22:48:49.328086   17336 pod_ready.go:81] duration metric: took 402.597588ms for pod "kube-proxy-ll2cf" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:49.328100   17336 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-224553" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:49.536447   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:49.542914   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:49.725625   17336 pod_ready.go:92] pod "kube-scheduler-addons-224553" in "kube-system" namespace has status "Ready":"True"
	I0703 22:48:49.725649   17336 pod_ready.go:81] duration metric: took 397.540693ms for pod "kube-scheduler-addons-224553" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:49.725662   17336 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace to be "Ready" ...
	I0703 22:48:50.033025   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:50.038628   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:50.549502   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:50.549633   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:50.935997   17336 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.449569037s)
	I0703 22:48:50.936002   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.849095813s)
	I0703 22:48:50.936124   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:50.936171   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:50.936465   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:50.936511   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:50.936524   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:50.936534   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:50.936573   17336 main.go:141] libmachine: (addons-224553) DBG | Closing plugin on server side
	I0703 22:48:50.936738   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:50.936750   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:50.936759   17336 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-224553"
	I0703 22:48:50.937948   17336 out.go:177] * Verifying csi-hostpath-driver addon...
	I0703 22:48:50.937957   17336 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0703 22:48:50.939835   17336 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0703 22:48:50.940608   17336 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0703 22:48:50.941066   17336 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0703 22:48:50.941086   17336 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0703 22:48:50.963735   17336 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0703 22:48:50.963762   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:51.034662   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:51.053982   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:51.080242   17336 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0703 22:48:51.080275   17336 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0703 22:48:51.187514   17336 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0703 22:48:51.187546   17336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0703 22:48:51.256967   17336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0703 22:48:51.306847   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.439800793s)
	I0703 22:48:51.306908   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:51.306921   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:51.307172   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:51.307189   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:51.307199   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:51.307206   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:51.307511   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:51.307524   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:51.447777   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:51.534170   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:51.535693   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:51.732472   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:48:51.947074   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:52.033833   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:52.036335   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:52.464729   17336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.207719585s)
	I0703 22:48:52.464777   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:52.464793   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:52.465141   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:52.465161   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:52.465171   17336 main.go:141] libmachine: Making call to close driver server
	I0703 22:48:52.465179   17336 main.go:141] libmachine: (addons-224553) Calling .Close
	I0703 22:48:52.465431   17336 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:48:52.465488   17336 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:48:52.467494   17336 addons.go:475] Verifying addon gcp-auth=true in "addons-224553"
	I0703 22:48:52.469991   17336 out.go:177] * Verifying gcp-auth addon...
	I0703 22:48:52.471955   17336 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0703 22:48:52.511812   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:52.524049   17336 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0703 22:48:52.524075   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:52.549409   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:52.558898   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:52.954579   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:52.976683   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:53.034301   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:53.039609   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:53.446738   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:53.475920   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:53.534415   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:53.537046   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:53.737567   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:48:53.946922   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:53.975798   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:54.035208   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:54.039131   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:54.448422   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:54.480086   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:54.540614   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:54.542761   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:54.952362   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:54.975883   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:55.033643   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:55.036663   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:55.445733   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:55.475964   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:55.534669   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:55.536367   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:55.947156   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:55.975354   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:56.315523   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:56.316904   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:56.319217   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:48:56.446711   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:56.475734   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:56.534842   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:56.537034   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:56.946968   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:56.976812   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:57.033914   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:57.036923   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:57.447569   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:57.475558   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:57.535031   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:57.537556   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:57.946887   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:57.975865   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:58.036462   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:58.036725   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:58.450457   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:58.476884   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:58.534355   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:58.536376   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:58.732083   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:48:58.946476   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:58.976118   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:59.034397   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:59.035812   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:59.446117   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:59.475067   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:48:59.535852   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:48:59.536290   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:48:59.946866   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:48:59.976703   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:00.035325   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:00.035759   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:00.447836   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:00.476370   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:00.534143   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:00.535587   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:00.946022   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:00.976436   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:01.037386   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:01.037524   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:01.233904   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:01.446417   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:01.476267   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:01.534832   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:01.538034   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:01.945529   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:01.976100   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:02.034313   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:02.036866   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:02.446286   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:02.475848   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:02.534063   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:02.535646   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:02.946211   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:02.975934   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:03.033930   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:03.036585   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:03.445904   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:03.476723   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:03.533902   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:03.535870   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:03.734269   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:03.946463   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:03.975905   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:04.033777   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:04.036093   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:04.446347   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:04.475958   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:04.536504   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:04.536762   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:04.946139   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:04.975645   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:05.034287   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:05.036556   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:05.446268   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:05.476195   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:05.534370   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:05.536193   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:05.946998   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:05.975338   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:06.034151   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:06.037541   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:06.232501   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:06.447178   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:06.475600   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:06.533384   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:06.536522   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:06.946223   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:06.975690   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:07.036226   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:07.037453   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:07.445602   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:07.476630   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:07.534855   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:07.538596   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:07.946441   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:07.976340   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:08.035593   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:08.035630   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:08.447287   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:08.476081   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:08.533789   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:08.535812   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:08.733389   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:08.948672   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:08.975684   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:09.033613   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:09.036705   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:09.447700   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:09.476671   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:09.544067   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:09.548323   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:09.954454   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:09.983814   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:10.053608   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:10.054339   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:10.448236   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:10.475899   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:10.534114   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:10.537697   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:10.947166   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:10.975844   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:11.034131   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:11.035959   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:11.233283   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:11.446487   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:11.475795   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:11.533718   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:11.536721   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:11.949324   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:11.975899   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:12.036603   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:12.037095   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:12.446515   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:12.475965   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:12.534903   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:12.535119   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:12.946499   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:12.976088   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:13.036088   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:13.040087   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:13.447191   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:13.480160   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:13.534553   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:13.539300   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:13.732699   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:13.946196   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:13.975337   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:14.034568   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:14.035909   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:14.446113   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:14.475517   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:14.533501   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:14.536258   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:14.946659   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:14.976647   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:15.034066   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:15.036778   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:15.446179   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:15.475647   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:15.534007   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:15.538773   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:15.951919   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:15.976018   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:16.037605   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:16.040631   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:16.232727   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:16.448690   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:16.476342   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:16.537954   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:16.538241   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:16.946116   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:16.976135   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:17.035351   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:17.043002   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:17.447806   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:17.475898   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:17.535795   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:17.542072   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:17.946075   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:17.975643   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:18.033925   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:18.037796   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:18.233317   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:18.446683   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:18.476439   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:18.533504   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:18.537104   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:18.946659   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:18.976834   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:19.033707   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:19.036401   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:19.446173   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:19.475985   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:19.534442   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:19.535991   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:19.946732   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:19.975066   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:20.035931   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:20.036425   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:20.451587   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:20.476244   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:20.534800   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:20.537022   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:20.731511   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:20.946800   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:20.975248   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:21.035088   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:21.045919   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:21.448049   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:21.476410   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:21.535565   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:21.537220   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:21.946456   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:21.976041   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:22.034189   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:22.035713   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:22.446573   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:22.476945   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:22.534205   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:22.536609   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:22.732321   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:22.946372   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:22.975753   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:23.033690   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:23.037343   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:23.447557   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:23.476543   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:23.533427   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:23.535905   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:23.946261   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:23.975637   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:24.033989   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:24.036746   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:24.446929   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:24.476093   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:24.533914   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:24.535570   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:24.732586   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:24.947337   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:24.977052   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:25.034857   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:25.037248   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:25.449316   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:25.477348   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:25.535468   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:25.536454   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:25.945964   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:25.975486   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:26.033748   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:26.036154   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:26.446060   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:26.475489   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:26.533290   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:26.535994   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:26.951399   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:26.976632   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:27.033576   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:27.035970   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:27.232145   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:27.446322   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:27.476079   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:27.534079   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:27.535277   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:27.946898   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:27.975368   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:28.032973   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:28.036331   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:28.447113   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:28.475898   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:28.534077   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:28.535976   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:28.945899   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:28.975747   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:29.033865   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:29.036479   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:29.232696   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:29.447386   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:29.475967   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:29.534676   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:29.537882   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:29.946928   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:29.975412   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:30.034335   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:30.035372   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:30.447694   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:30.476425   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:30.533430   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:30.536366   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:30.946810   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:30.975695   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:31.034318   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:31.036520   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:31.448880   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:31.475861   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:31.533999   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:31.536745   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:31.732756   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:31.945711   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:31.976779   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:32.034077   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:32.036977   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:32.446773   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:32.475348   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:32.535781   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:32.536028   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:32.946161   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:32.976113   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:33.035997   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:33.036128   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:33.447484   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:33.475822   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:33.534590   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:33.536437   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:33.946974   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:33.975629   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:34.033809   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:34.037275   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:34.234546   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:34.448266   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:34.475187   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:34.536526   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:34.537012   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:34.946611   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:34.977720   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:35.033543   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:35.035677   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:35.446515   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:35.477400   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:35.533804   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:35.536118   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:35.946412   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:35.976472   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:36.034183   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:36.037147   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:36.448561   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:36.476021   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:36.534447   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:36.538060   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:36.731848   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:36.945571   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:36.976501   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:37.033632   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:37.035720   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:37.446613   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:37.476115   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:37.534211   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:37.535121   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:37.946320   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:37.977430   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:38.034254   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:38.035556   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:38.446481   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:38.476490   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:38.533498   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:38.538257   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:38.946579   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:38.976328   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:39.034213   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:39.036034   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:39.232261   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:39.446279   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:39.477472   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:39.533179   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:39.535556   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:39.946647   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:39.976369   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:40.033502   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:40.036611   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:40.447269   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:40.481153   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:40.536974   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:40.537793   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:40.946874   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:40.976419   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:41.033660   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:41.036024   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:41.447072   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:41.475747   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:41.533699   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:41.536582   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:41.732214   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:41.946569   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:41.977530   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:42.033882   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:42.036359   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:42.446636   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:42.476134   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:42.535009   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:42.535035   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:42.947367   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:42.978559   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:43.033702   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:43.036269   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:43.445893   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:43.475309   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:43.534629   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:43.536785   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:43.732881   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:43.945991   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:43.975455   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:44.033212   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:44.036873   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:44.446350   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:44.476861   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:44.533735   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:44.537007   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:44.947630   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:44.976215   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:45.034164   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:45.035251   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:45.447080   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:45.475946   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:45.534041   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:45.536618   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:45.733649   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:45.947663   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:45.976897   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:46.034278   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:46.037083   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:46.448350   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:46.574743   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:46.576323   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:46.576666   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:46.948135   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:46.976382   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:47.035707   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:47.037364   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:47.449718   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:47.476497   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:47.534110   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:47.536278   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:47.946725   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:47.976156   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:48.034268   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:48.035739   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:48.232160   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:48.447341   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:48.476223   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:48.535017   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:48.536661   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:48.946781   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:48.975324   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:49.034411   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:49.036729   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:49.446667   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:49.476440   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:49.533112   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:49.536362   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:49.946836   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:49.975675   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:50.033793   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:50.036206   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:50.232834   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:50.449637   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:50.476041   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:50.534163   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:50.535329   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:50.947259   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:50.976259   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:51.034564   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:51.036177   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:51.453056   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:51.475349   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:51.538321   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:51.540724   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:51.946750   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:51.976268   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:52.043662   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:52.045814   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:52.452119   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:52.478768   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:52.534079   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:52.538523   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:52.733155   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:52.948096   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:52.976095   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:53.034514   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:53.037687   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:53.447616   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:53.479498   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:53.537047   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:53.541794   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:53.946531   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:53.977036   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:54.035137   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:54.036088   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:54.447243   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:54.475566   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:54.533338   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:54.536239   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:54.947260   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:54.976113   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:55.034525   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:55.035340   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:55.232186   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:55.446922   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:55.476146   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:55.534566   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:55.536011   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:55.949684   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:55.976405   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:56.033422   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:56.035732   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:56.447913   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:56.475846   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:56.534097   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:56.536300   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:56.946774   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:56.975969   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:57.034272   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:57.036557   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:57.232638   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:57.446659   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:57.476822   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:57.534775   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:57.536440   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:57.946872   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:57.975791   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:58.033912   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:58.035866   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:58.446713   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:58.476711   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:58.533697   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:58.536301   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:58.947203   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:58.975817   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:59.034080   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:59.036425   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:59.233377   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:49:59.447137   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:59.475887   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:49:59.534042   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:49:59.537315   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:49:59.947181   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:49:59.975903   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:00.036054   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:00.040391   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:00.447209   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:00.475844   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:00.534580   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:00.537211   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:00.946644   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:00.976127   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:01.034186   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:01.035757   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:01.233578   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:01.449723   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:01.476914   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:01.534603   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:01.536352   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:01.948070   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:01.978615   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:02.035013   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:02.036480   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:02.446274   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:02.477591   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:02.536083   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:02.539408   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:02.947092   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:02.976206   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:03.034255   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:03.036097   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:03.448090   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:03.477395   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:03.533975   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:03.542745   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:03.732518   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:03.947913   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:03.978673   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:04.034308   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:04.036665   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:04.446732   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:04.475083   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:04.534999   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:04.537173   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:04.945887   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:04.975573   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:05.033524   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:05.036874   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:05.446929   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:05.476890   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:05.533879   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:05.536155   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:05.733511   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:05.950535   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:05.982352   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:06.034251   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:06.038084   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:06.449334   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:06.476130   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:06.535602   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:06.536748   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:06.946865   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:06.976450   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:07.033355   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:07.035780   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:07.446544   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:07.476224   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:07.534244   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:07.536093   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:07.734089   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:07.946695   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:07.976834   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:08.033905   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:08.037125   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:08.447795   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:08.475432   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:08.534682   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:08.536221   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:08.946382   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:08.976184   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:09.034577   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:09.036786   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:09.450317   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:09.476168   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:09.536946   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:09.537129   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:09.947721   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:09.976513   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:10.033232   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:10.037337   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:10.232624   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:10.447268   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:10.475809   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:10.533709   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:10.538473   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:10.948055   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:10.976016   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:11.034094   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:11.039029   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:11.453949   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:11.476566   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:11.534635   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:11.538193   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:11.946590   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:11.976396   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:12.034274   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:12.038487   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:12.233290   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:12.447051   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:12.476072   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:12.535450   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:12.537975   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:12.946943   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:12.975661   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:13.033775   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:13.035974   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:13.447387   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:13.476408   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:13.534044   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:13.535985   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:13.947107   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:13.975733   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:14.033864   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:14.036353   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:14.446766   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:14.476006   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:14.535898   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:14.536109   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:14.732758   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:14.946307   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:14.975669   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:15.033826   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:15.036161   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:15.445720   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:15.476500   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:15.534008   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:15.538083   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:15.946641   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:15.975953   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:16.033881   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:16.037482   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:16.446265   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:16.475730   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:16.533819   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:16.540204   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:16.945522   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:16.975965   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:17.033910   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:17.040060   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:17.232091   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:17.446470   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:17.476169   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:17.534228   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:17.536157   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:18.367410   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:18.374899   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:18.377384   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:18.377808   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:18.446925   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:18.475864   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:18.534452   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:18.536978   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:18.947019   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:18.976701   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:19.033434   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:19.036724   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:19.233372   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:19.446937   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:19.476156   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:19.534548   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:19.535920   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:19.946517   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:19.976831   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:20.034577   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:20.037550   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:20.447482   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:20.476522   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:20.533678   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:20.537696   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:20.953810   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:20.977223   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:21.037516   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:21.037526   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:21.448292   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:21.476324   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:21.536554   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:21.540559   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:21.737009   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:21.946715   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:21.975033   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:22.033973   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:22.035537   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:22.802113   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:22.809270   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:22.809939   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:22.812001   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:22.945873   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:22.976999   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:23.034065   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:23.036033   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:23.445836   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:23.475385   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:23.533306   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:23.536474   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:23.969135   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:23.980654   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:24.047282   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:24.053762   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:24.232389   17336 pod_ready.go:102] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"False"
	I0703 22:50:24.460176   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:24.475938   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:24.533846   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:24.537723   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:24.731797   17336 pod_ready.go:92] pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace has status "Ready":"True"
	I0703 22:50:24.731820   17336 pod_ready.go:81] duration metric: took 1m35.006150001s for pod "metrics-server-c59844bb4-qv65x" in "kube-system" namespace to be "Ready" ...
	I0703 22:50:24.731830   17336 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-sbhcl" in "kube-system" namespace to be "Ready" ...
	I0703 22:50:24.737848   17336 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-sbhcl" in "kube-system" namespace has status "Ready":"True"
	I0703 22:50:24.737866   17336 pod_ready.go:81] duration metric: took 6.029788ms for pod "nvidia-device-plugin-daemonset-sbhcl" in "kube-system" namespace to be "Ready" ...
	I0703 22:50:24.737885   17336 pod_ready.go:38] duration metric: took 1m36.193250311s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 22:50:24.737903   17336 api_server.go:52] waiting for apiserver process to appear ...
	I0703 22:50:24.737944   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0703 22:50:24.737993   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0703 22:50:24.822203   17336 cri.go:89] found id: "e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd"
	I0703 22:50:24.822224   17336 cri.go:89] found id: ""
	I0703 22:50:24.822232   17336 logs.go:276] 1 containers: [e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd]
	I0703 22:50:24.822277   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:24.829085   17336 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0703 22:50:24.829155   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0703 22:50:24.905207   17336 cri.go:89] found id: "4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986"
	I0703 22:50:24.905230   17336 cri.go:89] found id: ""
	I0703 22:50:24.905238   17336 logs.go:276] 1 containers: [4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986]
	I0703 22:50:24.905281   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:24.925589   17336 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0703 22:50:24.925643   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0703 22:50:25.005660   17336 cri.go:89] found id: "9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8"
	I0703 22:50:25.005683   17336 cri.go:89] found id: ""
	I0703 22:50:25.005692   17336 logs.go:276] 1 containers: [9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8]
	I0703 22:50:25.005746   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:25.010724   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0703 22:50:25.010773   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0703 22:50:25.061985   17336 cri.go:89] found id: "ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a"
	I0703 22:50:25.062009   17336 cri.go:89] found id: ""
	I0703 22:50:25.062019   17336 logs.go:276] 1 containers: [ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a]
	I0703 22:50:25.062093   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:25.068711   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0703 22:50:25.068780   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0703 22:50:25.130089   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:25.130447   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:25.138228   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:25.140040   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:25.180997   17336 cri.go:89] found id: "b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98"
	I0703 22:50:25.181032   17336 cri.go:89] found id: ""
	I0703 22:50:25.181044   17336 logs.go:276] 1 containers: [b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98]
	I0703 22:50:25.181102   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:25.213268   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0703 22:50:25.213331   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0703 22:50:25.325462   17336 cri.go:89] found id: "aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc"
	I0703 22:50:25.325488   17336 cri.go:89] found id: ""
	I0703 22:50:25.325504   17336 logs.go:276] 1 containers: [aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc]
	I0703 22:50:25.325550   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:25.347757   17336 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0703 22:50:25.347822   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0703 22:50:25.409473   17336 cri.go:89] found id: ""
	I0703 22:50:25.409500   17336 logs.go:276] 0 containers: []
	W0703 22:50:25.409512   17336 logs.go:278] No container was found matching "kindnet"
	I0703 22:50:25.409520   17336 logs.go:123] Gathering logs for kube-scheduler [ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a] ...
	I0703 22:50:25.409533   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a"
	I0703 22:50:25.446429   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:25.477265   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:25.494614   17336 logs.go:123] Gathering logs for kube-controller-manager [aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc] ...
	I0703 22:50:25.494644   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc"
	I0703 22:50:25.533949   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:25.536838   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:25.582956   17336 logs.go:123] Gathering logs for CRI-O ...
	I0703 22:50:25.582990   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0703 22:50:25.946523   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:25.975845   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:25.999960   17336 logs.go:123] Gathering logs for kubelet ...
	I0703 22:50:25.999995   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0703 22:50:26.035646   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:26.037334   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0703 22:50:26.083096   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.863888    1274 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:26.083267   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.863927    1274 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:26.083405   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.864006    1274 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:26.083554   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.864017    1274 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	I0703 22:50:26.103776   17336 logs.go:123] Gathering logs for describe nodes ...
	I0703 22:50:26.103801   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0703 22:50:26.285398   17336 logs.go:123] Gathering logs for kube-apiserver [e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd] ...
	I0703 22:50:26.285434   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd"
	I0703 22:50:26.357109   17336 logs.go:123] Gathering logs for etcd [4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986] ...
	I0703 22:50:26.357147   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986"
	I0703 22:50:26.416892   17336 logs.go:123] Gathering logs for dmesg ...
	I0703 22:50:26.416929   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0703 22:50:26.437323   17336 logs.go:123] Gathering logs for coredns [9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8] ...
	I0703 22:50:26.437356   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8"
	I0703 22:50:26.447047   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:26.476810   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:26.515490   17336 logs.go:123] Gathering logs for kube-proxy [b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98] ...
	I0703 22:50:26.515537   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98"
	I0703 22:50:26.534059   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:26.535531   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0703 22:50:26.625316   17336 logs.go:123] Gathering logs for container status ...
	I0703 22:50:26.625359   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0703 22:50:26.768987   17336 out.go:304] Setting ErrFile to fd 2...
	I0703 22:50:26.769028   17336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0703 22:50:26.769092   17336 out.go:239] X Problems detected in kubelet:
	W0703 22:50:26.769106   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.863888    1274 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:26.769120   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.863927    1274 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:26.769133   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.864006    1274 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:26.769144   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.864017    1274 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	I0703 22:50:26.769151   17336 out.go:304] Setting ErrFile to fd 2...
	I0703 22:50:26.769160   17336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:50:26.950107   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:26.975672   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:27.033694   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:27.036992   17336 kapi.go:107] duration metric: took 1m38.506095938s to wait for kubernetes.io/minikube-addons=registry ...
	I0703 22:50:27.446315   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:27.475719   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:27.534016   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:27.952654   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:27.976341   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:28.034054   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:28.446433   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:28.477485   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:28.533098   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:28.948017   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:28.975988   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:29.041303   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:29.447722   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:29.476001   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:29.533998   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:29.946581   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:29.976180   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:30.034273   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:30.447950   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:30.475934   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:30.533730   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:30.947483   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:30.977893   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:31.033760   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:31.449576   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:31.476609   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:31.534639   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:31.945714   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:31.976002   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:32.041907   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:32.447602   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:32.477082   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:32.534845   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:32.946335   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:32.975922   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:33.033916   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:33.446387   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:33.477893   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:33.533963   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:33.951636   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:33.975986   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:34.034002   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:34.446699   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:34.476160   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:34.534075   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:34.945577   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:34.977765   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:35.034215   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:35.450802   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:35.480165   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:35.533510   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:35.947233   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:35.977744   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:36.033632   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:36.455621   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:36.476845   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:36.533495   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:36.770422   17336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 22:50:36.803895   17336 api_server.go:72] duration metric: took 1m57.15967772s to wait for apiserver process to appear ...
	I0703 22:50:36.803925   17336 api_server.go:88] waiting for apiserver healthz status ...
	I0703 22:50:36.803953   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0703 22:50:36.804007   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0703 22:50:36.884791   17336 cri.go:89] found id: "e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd"
	I0703 22:50:36.884816   17336 cri.go:89] found id: ""
	I0703 22:50:36.884826   17336 logs.go:276] 1 containers: [e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd]
	I0703 22:50:36.884882   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:36.891213   17336 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0703 22:50:36.891273   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0703 22:50:36.947218   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:36.962408   17336 cri.go:89] found id: "4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986"
	I0703 22:50:36.962429   17336 cri.go:89] found id: ""
	I0703 22:50:36.962438   17336 logs.go:276] 1 containers: [4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986]
	I0703 22:50:36.962492   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:36.967563   17336 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0703 22:50:36.967625   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0703 22:50:36.977124   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:37.034376   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:37.039586   17336 cri.go:89] found id: "9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8"
	I0703 22:50:37.039604   17336 cri.go:89] found id: ""
	I0703 22:50:37.039611   17336 logs.go:276] 1 containers: [9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8]
	I0703 22:50:37.039669   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:37.051800   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0703 22:50:37.051899   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0703 22:50:37.134033   17336 cri.go:89] found id: "ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a"
	I0703 22:50:37.134052   17336 cri.go:89] found id: ""
	I0703 22:50:37.134061   17336 logs.go:276] 1 containers: [ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a]
	I0703 22:50:37.134118   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:37.141221   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0703 22:50:37.141293   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0703 22:50:37.214493   17336 cri.go:89] found id: "b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98"
	I0703 22:50:37.214516   17336 cri.go:89] found id: ""
	I0703 22:50:37.214523   17336 logs.go:276] 1 containers: [b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98]
	I0703 22:50:37.214585   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:37.220005   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0703 22:50:37.220065   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0703 22:50:37.263993   17336 cri.go:89] found id: "aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc"
	I0703 22:50:37.264018   17336 cri.go:89] found id: ""
	I0703 22:50:37.264027   17336 logs.go:276] 1 containers: [aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc]
	I0703 22:50:37.264089   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:37.268739   17336 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0703 22:50:37.268802   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0703 22:50:37.314334   17336 cri.go:89] found id: ""
	I0703 22:50:37.314359   17336 logs.go:276] 0 containers: []
	W0703 22:50:37.314366   17336 logs.go:278] No container was found matching "kindnet"
	I0703 22:50:37.314373   17336 logs.go:123] Gathering logs for kube-apiserver [e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd] ...
	I0703 22:50:37.314384   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd"
	I0703 22:50:37.373659   17336 logs.go:123] Gathering logs for coredns [9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8] ...
	I0703 22:50:37.373690   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8"
	I0703 22:50:37.418095   17336 logs.go:123] Gathering logs for kube-scheduler [ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a] ...
	I0703 22:50:37.418122   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a"
	I0703 22:50:37.469219   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:37.477491   17336 logs.go:123] Gathering logs for kube-proxy [b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98] ...
	I0703 22:50:37.477526   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98"
	I0703 22:50:37.487827   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:37.533419   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:37.565411   17336 logs.go:123] Gathering logs for kubelet ...
	I0703 22:50:37.565446   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0703 22:50:37.632687   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.863888    1274 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:37.632857   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.863927    1274 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:37.632991   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.864006    1274 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:37.633138   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.864017    1274 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	I0703 22:50:37.656527   17336 logs.go:123] Gathering logs for dmesg ...
	I0703 22:50:37.656573   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0703 22:50:37.680228   17336 logs.go:123] Gathering logs for kube-controller-manager [aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc] ...
	I0703 22:50:37.680258   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc"
	I0703 22:50:37.806676   17336 logs.go:123] Gathering logs for CRI-O ...
	I0703 22:50:37.806712   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0703 22:50:37.947583   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:37.975923   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:38.034422   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:38.114716   17336 logs.go:123] Gathering logs for container status ...
	I0703 22:50:38.114748   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0703 22:50:38.216429   17336 logs.go:123] Gathering logs for describe nodes ...
	I0703 22:50:38.216461   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0703 22:50:38.450576   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:38.480757   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:38.500584   17336 logs.go:123] Gathering logs for etcd [4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986] ...
	I0703 22:50:38.500613   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986"
	I0703 22:50:38.534194   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:38.590054   17336 out.go:304] Setting ErrFile to fd 2...
	I0703 22:50:38.590084   17336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0703 22:50:38.590134   17336 out.go:239] X Problems detected in kubelet:
	W0703 22:50:38.590146   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.863888    1274 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:38.590154   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.863927    1274 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:38.590161   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.864006    1274 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:38.590168   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.864017    1274 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	I0703 22:50:38.590174   17336 out.go:304] Setting ErrFile to fd 2...
	I0703 22:50:38.590180   17336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:50:38.946470   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:38.976079   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:39.034592   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:39.447744   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:39.478094   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:39.533853   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:39.946546   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:39.976325   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:40.039178   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:40.446653   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:40.476179   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:40.537066   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:40.963990   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:40.983324   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:41.036210   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:41.446275   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:41.475842   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:41.535534   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:41.947593   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:41.977331   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:42.034508   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:42.447100   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:42.482651   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:42.533659   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:42.948161   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:42.976651   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:43.033474   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:43.446558   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:43.476291   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:43.540326   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:43.948530   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:43.976744   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:44.033691   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:44.447528   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:44.477980   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:44.533899   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:44.946792   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:44.980206   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:45.033900   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:45.446622   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:45.476435   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:45.533761   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:45.945969   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:45.975981   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:46.036366   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:46.447386   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:46.476529   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:46.534608   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:46.947727   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:46.978134   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:47.033932   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:47.447113   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:47.476488   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:47.533446   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:47.946623   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0703 22:50:47.978537   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:48.034209   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:48.446625   17336 kapi.go:107] duration metric: took 1m57.506014102s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0703 22:50:48.476167   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:48.533970   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:48.591426   17336 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0703 22:50:48.596929   17336 api_server.go:279] https://192.168.39.226:8443/healthz returned 200:
	ok
	I0703 22:50:48.597909   17336 api_server.go:141] control plane version: v1.30.2
	I0703 22:50:48.597929   17336 api_server.go:131] duration metric: took 11.793998606s to wait for apiserver health ...
	I0703 22:50:48.597937   17336 system_pods.go:43] waiting for kube-system pods to appear ...
	I0703 22:50:48.597956   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0703 22:50:48.597998   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0703 22:50:48.642355   17336 cri.go:89] found id: "e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd"
	I0703 22:50:48.642378   17336 cri.go:89] found id: ""
	I0703 22:50:48.642387   17336 logs.go:276] 1 containers: [e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd]
	I0703 22:50:48.642442   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:48.647082   17336 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0703 22:50:48.647141   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0703 22:50:48.690501   17336 cri.go:89] found id: "4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986"
	I0703 22:50:48.690530   17336 cri.go:89] found id: ""
	I0703 22:50:48.690541   17336 logs.go:276] 1 containers: [4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986]
	I0703 22:50:48.690609   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:48.694861   17336 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0703 22:50:48.694919   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0703 22:50:48.738851   17336 cri.go:89] found id: "9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8"
	I0703 22:50:48.738877   17336 cri.go:89] found id: ""
	I0703 22:50:48.738887   17336 logs.go:276] 1 containers: [9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8]
	I0703 22:50:48.738945   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:48.743224   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0703 22:50:48.743298   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0703 22:50:48.787368   17336 cri.go:89] found id: "ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a"
	I0703 22:50:48.787392   17336 cri.go:89] found id: ""
	I0703 22:50:48.787400   17336 logs.go:276] 1 containers: [ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a]
	I0703 22:50:48.787448   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:48.792167   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0703 22:50:48.792241   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0703 22:50:48.842186   17336 cri.go:89] found id: "b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98"
	I0703 22:50:48.842213   17336 cri.go:89] found id: ""
	I0703 22:50:48.842221   17336 logs.go:276] 1 containers: [b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98]
	I0703 22:50:48.842277   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:48.846478   17336 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0703 22:50:48.846549   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0703 22:50:48.889257   17336 cri.go:89] found id: "aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc"
	I0703 22:50:48.889284   17336 cri.go:89] found id: ""
	I0703 22:50:48.889295   17336 logs.go:276] 1 containers: [aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc]
	I0703 22:50:48.889359   17336 ssh_runner.go:195] Run: which crictl
	I0703 22:50:48.894028   17336 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0703 22:50:48.894108   17336 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0703 22:50:48.948768   17336 cri.go:89] found id: ""
	I0703 22:50:48.948793   17336 logs.go:276] 0 containers: []
	W0703 22:50:48.948801   17336 logs.go:278] No container was found matching "kindnet"
	I0703 22:50:48.948809   17336 logs.go:123] Gathering logs for kube-proxy [b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98] ...
	I0703 22:50:48.948821   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98"
	I0703 22:50:48.977663   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:48.989746   17336 logs.go:123] Gathering logs for CRI-O ...
	I0703 22:50:48.989773   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0703 22:50:49.035419   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:49.370921   17336 logs.go:123] Gathering logs for kubelet ...
	I0703 22:50:49.370958   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0703 22:50:49.431010   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.863888    1274 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:49.431178   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.863927    1274 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:49.431318   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.864006    1274 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:49.431465   17336 logs.go:138] Found kubelet problem: Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.864017    1274 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	I0703 22:50:49.459429   17336 logs.go:123] Gathering logs for describe nodes ...
	I0703 22:50:49.459451   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0703 22:50:49.479136   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:49.534954   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:49.582449   17336 logs.go:123] Gathering logs for kube-apiserver [e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd] ...
	I0703 22:50:49.582490   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd"
	I0703 22:50:49.632358   17336 logs.go:123] Gathering logs for etcd [4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986] ...
	I0703 22:50:49.632408   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986"
	I0703 22:50:49.699347   17336 logs.go:123] Gathering logs for container status ...
	I0703 22:50:49.699395   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0703 22:50:49.765145   17336 logs.go:123] Gathering logs for dmesg ...
	I0703 22:50:49.765187   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0703 22:50:49.780726   17336 logs.go:123] Gathering logs for coredns [9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8] ...
	I0703 22:50:49.780761   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8"
	I0703 22:50:49.827018   17336 logs.go:123] Gathering logs for kube-scheduler [ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a] ...
	I0703 22:50:49.827051   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a"
	I0703 22:50:49.877013   17336 logs.go:123] Gathering logs for kube-controller-manager [aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc] ...
	I0703 22:50:49.877056   17336 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc"
	I0703 22:50:49.955986   17336 out.go:304] Setting ErrFile to fd 2...
	I0703 22:50:49.956015   17336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0703 22:50:49.956074   17336 out.go:239] X Problems detected in kubelet:
	W0703 22:50:49.956090   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.863888    1274 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:49.956105   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.863927    1274 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:49.956116   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: W0703 22:48:45.864006    1274 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	W0703 22:50:49.956128   17336 out.go:239]   Jul 03 22:48:45 addons-224553 kubelet[1274]: E0703 22:48:45.864017    1274 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-224553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-224553' and this object
	I0703 22:50:49.956138   17336 out.go:304] Setting ErrFile to fd 2...
	I0703 22:50:49.956150   17336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:50:49.976460   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:50.033004   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:50.476466   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:50.534748   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:50.976152   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:51.033981   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:51.476198   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:51.534706   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:51.976132   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:52.034029   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:52.478578   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:52.533613   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:52.975749   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:53.033689   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:53.476228   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:53.534804   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:53.976264   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:54.034559   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:54.475823   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:54.534650   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:54.977064   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:55.034751   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:55.476624   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:55.533857   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:55.976303   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:56.034476   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:56.477654   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:56.533812   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:56.977119   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:57.034746   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:57.477477   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:57.533313   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:57.975675   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:58.033547   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:58.482286   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:58.541917   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:58.975943   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:59.034115   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:59.475571   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:59.533794   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:50:59.968951   17336 system_pods.go:59] 18 kube-system pods found
	I0703 22:50:59.968983   17336 system_pods.go:61] "coredns-7db6d8ff4d-4lgcj" [e61e787d-1169-403e-b844-fc0bbd9acd53] Running
	I0703 22:50:59.968988   17336 system_pods.go:61] "csi-hostpath-attacher-0" [a732ee6a-3989-47bc-8045-b0bff06ce3a8] Running
	I0703 22:50:59.968991   17336 system_pods.go:61] "csi-hostpath-resizer-0" [a87e8cb8-6d1c-4ea9-8ed5-f4b57047b25c] Running
	I0703 22:50:59.968994   17336 system_pods.go:61] "csi-hostpathplugin-7m9sj" [a4df336e-c37d-4791-aa35-c5c94fec899d] Running
	I0703 22:50:59.968998   17336 system_pods.go:61] "etcd-addons-224553" [30dfb5b9-60dc-48d6-a7cf-da22586e912f] Running
	I0703 22:50:59.969001   17336 system_pods.go:61] "kube-apiserver-addons-224553" [a41530ad-6337-409d-84af-c9448ccdb391] Running
	I0703 22:50:59.969004   17336 system_pods.go:61] "kube-controller-manager-addons-224553" [3338cf19-da7c-4a93-9a72-75fd5e3a4003] Running
	I0703 22:50:59.969007   17336 system_pods.go:61] "kube-ingress-dns-minikube" [a43e86c9-2281-41ce-a535-a1913563dd49] Running
	I0703 22:50:59.969010   17336 system_pods.go:61] "kube-proxy-ll2cf" [a5b82480-c0ed-4129-b570-a2f3d3a64d9e] Running
	I0703 22:50:59.969013   17336 system_pods.go:61] "kube-scheduler-addons-224553" [35b790ae-c539-416d-8644-8ac5a75be87d] Running
	I0703 22:50:59.969017   17336 system_pods.go:61] "metrics-server-c59844bb4-qv65x" [78c1c74d-f40a-4283-8091-ecace04f1283] Running
	I0703 22:50:59.969021   17336 system_pods.go:61] "nvidia-device-plugin-daemonset-sbhcl" [71040d78-0cef-4e87-863c-271f1ea0dc3f] Running
	I0703 22:50:59.969024   17336 system_pods.go:61] "registry-p9skr" [d68fdfd4-7879-4930-8113-149c5c04b06a] Running
	I0703 22:50:59.969027   17336 system_pods.go:61] "registry-proxy-zj8bk" [2cccffc8-167d-483e-81c9-bcb8a862200f] Running
	I0703 22:50:59.969030   17336 system_pods.go:61] "snapshot-controller-745499f584-jq4z5" [c9adb0c6-984a-498a-8703-b47979144b23] Running
	I0703 22:50:59.969034   17336 system_pods.go:61] "snapshot-controller-745499f584-l6f2b" [519dca42-0117-49cc-90ae-e3b4f43b2a38] Running
	I0703 22:50:59.969037   17336 system_pods.go:61] "storage-provisioner" [05e06fda-a0cf-4385-8cc1-55d7f00dbd4b] Running
	I0703 22:50:59.969041   17336 system_pods.go:61] "tiller-deploy-6677d64bcd-4g4h4" [2a14a1e3-ef96-40b2-b4ba-2790881ec44c] Running
	I0703 22:50:59.969047   17336 system_pods.go:74] duration metric: took 11.371104382s to wait for pod list to return data ...
	I0703 22:50:59.969057   17336 default_sa.go:34] waiting for default service account to be created ...
	I0703 22:50:59.971113   17336 default_sa.go:45] found service account: "default"
	I0703 22:50:59.971132   17336 default_sa.go:55] duration metric: took 2.07021ms for default service account to be created ...
	I0703 22:50:59.971139   17336 system_pods.go:116] waiting for k8s-apps to be running ...
	I0703 22:50:59.978059   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:50:59.985486   17336 system_pods.go:86] 18 kube-system pods found
	I0703 22:50:59.985523   17336 system_pods.go:89] "coredns-7db6d8ff4d-4lgcj" [e61e787d-1169-403e-b844-fc0bbd9acd53] Running
	I0703 22:50:59.985529   17336 system_pods.go:89] "csi-hostpath-attacher-0" [a732ee6a-3989-47bc-8045-b0bff06ce3a8] Running
	I0703 22:50:59.985533   17336 system_pods.go:89] "csi-hostpath-resizer-0" [a87e8cb8-6d1c-4ea9-8ed5-f4b57047b25c] Running
	I0703 22:50:59.985538   17336 system_pods.go:89] "csi-hostpathplugin-7m9sj" [a4df336e-c37d-4791-aa35-c5c94fec899d] Running
	I0703 22:50:59.985544   17336 system_pods.go:89] "etcd-addons-224553" [30dfb5b9-60dc-48d6-a7cf-da22586e912f] Running
	I0703 22:50:59.985551   17336 system_pods.go:89] "kube-apiserver-addons-224553" [a41530ad-6337-409d-84af-c9448ccdb391] Running
	I0703 22:50:59.985558   17336 system_pods.go:89] "kube-controller-manager-addons-224553" [3338cf19-da7c-4a93-9a72-75fd5e3a4003] Running
	I0703 22:50:59.985565   17336 system_pods.go:89] "kube-ingress-dns-minikube" [a43e86c9-2281-41ce-a535-a1913563dd49] Running
	I0703 22:50:59.985571   17336 system_pods.go:89] "kube-proxy-ll2cf" [a5b82480-c0ed-4129-b570-a2f3d3a64d9e] Running
	I0703 22:50:59.985577   17336 system_pods.go:89] "kube-scheduler-addons-224553" [35b790ae-c539-416d-8644-8ac5a75be87d] Running
	I0703 22:50:59.985585   17336 system_pods.go:89] "metrics-server-c59844bb4-qv65x" [78c1c74d-f40a-4283-8091-ecace04f1283] Running
	I0703 22:50:59.985590   17336 system_pods.go:89] "nvidia-device-plugin-daemonset-sbhcl" [71040d78-0cef-4e87-863c-271f1ea0dc3f] Running
	I0703 22:50:59.985595   17336 system_pods.go:89] "registry-p9skr" [d68fdfd4-7879-4930-8113-149c5c04b06a] Running
	I0703 22:50:59.985599   17336 system_pods.go:89] "registry-proxy-zj8bk" [2cccffc8-167d-483e-81c9-bcb8a862200f] Running
	I0703 22:50:59.985604   17336 system_pods.go:89] "snapshot-controller-745499f584-jq4z5" [c9adb0c6-984a-498a-8703-b47979144b23] Running
	I0703 22:50:59.985608   17336 system_pods.go:89] "snapshot-controller-745499f584-l6f2b" [519dca42-0117-49cc-90ae-e3b4f43b2a38] Running
	I0703 22:50:59.985614   17336 system_pods.go:89] "storage-provisioner" [05e06fda-a0cf-4385-8cc1-55d7f00dbd4b] Running
	I0703 22:50:59.985618   17336 system_pods.go:89] "tiller-deploy-6677d64bcd-4g4h4" [2a14a1e3-ef96-40b2-b4ba-2790881ec44c] Running
	I0703 22:50:59.985627   17336 system_pods.go:126] duration metric: took 14.483606ms to wait for k8s-apps to be running ...
	I0703 22:50:59.985636   17336 system_svc.go:44] waiting for kubelet service to be running ....
	I0703 22:50:59.985682   17336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 22:51:00.001367   17336 system_svc.go:56] duration metric: took 15.722387ms WaitForService to wait for kubelet
	I0703 22:51:00.001394   17336 kubeadm.go:576] duration metric: took 2m20.357181851s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 22:51:00.001419   17336 node_conditions.go:102] verifying NodePressure condition ...
	I0703 22:51:00.006620   17336 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 22:51:00.006648   17336 node_conditions.go:123] node cpu capacity is 2
	I0703 22:51:00.006660   17336 node_conditions.go:105] duration metric: took 5.236656ms to run NodePressure ...
	I0703 22:51:00.006673   17336 start.go:240] waiting for startup goroutines ...
	I0703 22:51:00.035655   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:00.479975   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:00.533866   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:00.976435   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:01.034426   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:01.475312   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:01.534000   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:01.976797   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:02.033723   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:02.476256   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:02.534882   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:02.977247   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:03.034422   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:03.476893   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:03.534639   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:03.976528   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:04.033786   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:04.475841   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:04.534031   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:04.978120   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:05.034313   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:05.475388   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:05.534367   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:05.976403   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:06.033423   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:06.476170   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:06.533926   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:06.975768   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:07.033807   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:07.476043   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:07.534225   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:07.977524   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:08.037803   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:08.497373   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:08.533751   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:08.976697   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:09.033700   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:09.896609   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:09.896763   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:09.976136   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:10.034019   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:10.476180   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:10.534170   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:10.975802   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:11.033823   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:11.475680   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:11.533740   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:12.421597   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:12.422395   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:12.482549   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:12.542004   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:12.976712   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:13.036568   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:13.475685   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:13.533943   17336 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0703 22:51:13.976395   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:14.033182   17336 kapi.go:107] duration metric: took 2m25.504054698s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0703 22:51:14.476491   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:14.976551   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:15.476854   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:15.976438   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:16.475981   17336 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0703 22:51:16.976612   17336 kapi.go:107] duration metric: took 2m24.504657336s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0703 22:51:16.978472   17336 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-224553 cluster.
	I0703 22:51:16.979716   17336 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0703 22:51:16.980847   17336 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0703 22:51:16.981963   17336 out.go:177] * Enabled addons: default-storageclass, nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, metrics-server, inspektor-gadget, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0703 22:51:16.983000   17336 addons.go:510] duration metric: took 2m37.338723052s for enable addons: enabled=[default-storageclass nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner metrics-server inspektor-gadget helm-tiller yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0703 22:51:16.983027   17336 start.go:245] waiting for cluster config update ...
	I0703 22:51:16.983043   17336 start.go:254] writing updated cluster config ...
	I0703 22:51:16.983270   17336 ssh_runner.go:195] Run: rm -f paused
	I0703 22:51:17.033658   17336 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0703 22:51:17.035567   17336 out.go:177] * Done! kubectl is now configured to use "addons-224553" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.544785971Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc7356cd-18fe-4cd7-ab14-27bab3a149a7 name=/runtime.v1.RuntimeService/Version
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.546425538Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e5a7aba-c8be-4fd0-a704-0991d9866a98 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.547643222Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720047424547614179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584678,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e5a7aba-c8be-4fd0-a704-0991d9866a98 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.548288039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16b3d152-6113-41ae-957f-379907c87c37 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.548470737Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16b3d152-6113-41ae-957f-379907c87c37 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.548768931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f52235fa0a9613b937fdbbd17e19c5339d3894877bc2ce3ee756b8c4f3400a2b,PodSandboxId:68614950f41c59c3c8ba3242e854be0e4bf43cb4b7bb5cd7696de1c0df39d208,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1720047246339888860,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-bp4c7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 247bf170-0735-4073-ae3b-a13c60e4856e,},Annotations:map[string]string{io.kubernetes.container.hash: 2f85a108,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a22a23f9b9158ac0e97415e2d2d1c1480b547c2d2e6446fd46e866155b9ba88b,PodSandboxId:41c255833c60d2d68345a05c9cbd26bdc02e65b0e625217558b6f6b05fbf830c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1720047113845846349,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-jgcbc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 1b695b04-d5ab-420b-8a5e-b5b4d5061b10,},Annota
tions:map[string]string{io.kubernetes.container.hash: 291157fc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad572c678c9cfcffc7496c7f4eb5376076ce60091f4a611eacdd724e69dea207,PodSandboxId:e1acd9efaa5936887236ca9544ecbf9d75822dbb764a2594e24685cbb634f59d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1720047103552714987,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 899aaab3-f1d8-46f2-ae17-b22a85faa208,},Annotations:map[string]string{io.kubernetes.container.hash: 66e0cbc2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0145cba5261b4869d37c39b2c25416226a5534393a271fa297ce3bcdfbccc26d,PodSandboxId:abecec91949461e40de39cc9e86d2544aef37c45bed78393279e2b0f53bd883a,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1720047076542978051,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-r8pwn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 991d5fad-2189-401e-a80f-68d5d68c19a2,},Annotations:map[string]string{io.kubernetes.container.hash: 7ad102a3,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c93eab29614ac567f68364e03dc0ac7d0da682b3cea65c8942687ed8d4b7b0,PodSandboxId:607376c1fc0a82aa086fde416a8f8bf51ce69b6855bc69f8fa8cef7d2f0892d5,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:17200
47019009017885,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-fwg4s,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d1102c91-2165-4a2c-adbf-945b4db26c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 17ccca4b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e7e7a13c49ee7980c64535894e8860a69e90361df2e27755455b4db34c89e1,PodSandboxId:4b3997824769e5d86ad97bc0ba7d23a2fc5a847667d6bba4917ef0ab5692cc9d,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1720046955995236518,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qv65x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c1c74d-f40a-4283-8091-ecace04f1283,},Annotations:map[string]string{io.kubernetes.container.hash: 64704d8e,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f8ea56161fc3f970558a77cb63a439a2d1a32385b056be0515c7eeed96f458,PodSandboxId:81dad2b1935bcd68efa151335245b03cd60ca40fce6ee4c03b2f2d5f06d40c3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720046927137249061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05e06fda-a0cf-4385-8cc1-55d7f00dbd4b,},Annotations:map[string]string{io.kubernetes.container.hash: 52766e41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8,PodSandboxId:344665b6c1ed8f298720baef2b2a7313d220512e7fcc393ede49ab3602639119,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720046924035087254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lgcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e61e787d-1169-403e-b844-fc0bbd9acd53,},Annotations:map[string]string{io.kubernetes.container.hash: 18abb899,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98,PodSan
dboxId:84559595da5fb80be845f50fafb25835ee132378011c2520c7e8a61f40e7fa5a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720046920041872821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll2cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b82480-c0ed-4129-b570-a2f3d3a64d9e,},Annotations:map[string]string{io.kubernetes.container.hash: f7dd78a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986,PodSandboxId:073d359ecbe7d6dccf2362b759
0bcb46eb1b5caa557655c6ca67a1202e01b0a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720046899611606959,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff0448fe42247eb979c2fd89936b6fb,},Annotations:map[string]string{io.kubernetes.container.hash: bf6cf3c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a,PodSandboxId:617b4d9325127ac353323f84d3444066b474ac032653d1141296db42c1ba2047,Metadata
:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720046899579190091,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090c0fd627e1381212e5d65203a04f22,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc,PodSandboxId:f49b94871cf9d37f57cf6e13e98e2f9e1ff7dcc83e90aba44aecc395026fc43b,Metadata:&ContainerMetadat
a{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720046899613531948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de10e56abb835b85e60ca6ab00f4f6f6,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd,PodSandboxId:dc4e249bcf07666e38bd4608c9ba90a1d80b56bfc980e5ccdae2fd57e2f58c36,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720046899568725893,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37d97992a7cb908d598d3286e8564ec,},Annotations:map[string]string{io.kubernetes.container.hash: db5057a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16b3d152-6113-41ae-957f-379907c87c37 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.568472791Z" level=debug msg="Event: WRITE         \"/var/run/crio/exits/82e7e7a13c49ee7980c64535894e8860a69e90361df2e27755455b4db34c89e1.5XLNQ2\"" file="server/server.go:805"
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.568525971Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/82e7e7a13c49ee7980c64535894e8860a69e90361df2e27755455b4db34c89e1.5XLNQ2\"" file="server/server.go:805"
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.568572880Z" level=debug msg="Container or sandbox exited: 82e7e7a13c49ee7980c64535894e8860a69e90361df2e27755455b4db34c89e1.5XLNQ2" file="server/server.go:810"
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.568810798Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/82e7e7a13c49ee7980c64535894e8860a69e90361df2e27755455b4db34c89e1\"" file="server/server.go:805"
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.568850939Z" level=debug msg="Container or sandbox exited: 82e7e7a13c49ee7980c64535894e8860a69e90361df2e27755455b4db34c89e1" file="server/server.go:810"
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.568873247Z" level=debug msg="container exited and found: 82e7e7a13c49ee7980c64535894e8860a69e90361df2e27755455b4db34c89e1" file="server/server.go:825"
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.568908932Z" level=debug msg="Event: RENAME        \"/var/run/crio/exits/82e7e7a13c49ee7980c64535894e8860a69e90361df2e27755455b4db34c89e1.5XLNQ2\"" file="server/server.go:805"
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.577540560Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=915f0dbb-918d-4339-9119-e373f95c34a5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.577846867Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:68614950f41c59c3c8ba3242e854be0e4bf43cb4b7bb5cd7696de1c0df39d208,Metadata:&PodSandboxMetadata{Name:hello-world-app-86c47465fc-bp4c7,Uid:247bf170-0735-4073-ae3b-a13c60e4856e,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720047242826528091,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-86c47465fc-bp4c7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 247bf170-0735-4073-ae3b-a13c60e4856e,pod-template-hash: 86c47465fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-03T22:54:02.495099733Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:41c255833c60d2d68345a05c9cbd26bdc02e65b0e625217558b6f6b05fbf830c,Metadata:&PodSandboxMetadata{Name:headlamp-7867546754-jgcbc,Uid:1b695b04-d5ab-420b-8a5e-b5b4d5061b10,Namespace
:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720047108362622029,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-7867546754-jgcbc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 1b695b04-d5ab-420b-8a5e-b5b4d5061b10,pod-template-hash: 7867546754,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-03T22:51:48.050937282Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e1acd9efaa5936887236ca9544ecbf9d75822dbb764a2594e24685cbb634f59d,Metadata:&PodSandboxMetadata{Name:nginx,Uid:899aaab3-f1d8-46f2-ae17-b22a85faa208,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720047098844458462,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 899aaab3-f1d8-46f2-ae17-b22a85faa208,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
07-03T22:51:38.536117424Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:abecec91949461e40de39cc9e86d2544aef37c45bed78393279e2b0f53bd883a,Metadata:&PodSandboxMetadata{Name:gcp-auth-5db96cd9b4-r8pwn,Uid:991d5fad-2189-401e-a80f-68d5d68c19a2,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720047072489288804,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-r8pwn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 991d5fad-2189-401e-a80f-68d5d68c19a2,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 5db96cd9b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-03T22:48:52.415490501Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:607376c1fc0a82aa086fde416a8f8bf51ce69b6855bc69f8fa8cef7d2f0892d5,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-799879c74f-fwg4s,Uid:d1102c91-2165-4a2c-adbf-945b4db26c0e,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,Creat
edAt:1720046927423575434,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-799879c74f-fwg4s,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d1102c91-2165-4a2c-adbf-945b4db26c0e,pod-template-hash: 799879c74f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-03T22:48:46.807053944Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4b3997824769e5d86ad97bc0ba7d23a2fc5a847667d6bba4917ef0ab5692cc9d,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-qv65x,Uid:78c1c74d-f40a-4283-8091-ecace04f1283,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720046926337611797,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-qv65x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c1c74d-f40a-4283-8091-ecace04f1283,k8s-app: metr
ics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-03T22:48:45.720140272Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:81dad2b1935bcd68efa151335245b03cd60ca40fce6ee4c03b2f2d5f06d40c3e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:05e06fda-a0cf-4385-8cc1-55d7f00dbd4b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720046925824086067,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05e06fda-a0cf-4385-8cc1-55d7f00dbd4b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\
"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-03T22:48:45.095951462Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:344665b6c1ed8f298720baef2b2a7313d220512e7fcc393ede49ab3602639119,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-4lgcj,Uid:e61e787d-1169-403e-b844-fc0bbd9acd53,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720046920809244794,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lgcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e61e787d-1169-403e-b844-fc0bbd
9acd53,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-03T22:48:40.460607811Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:84559595da5fb80be845f50fafb25835ee132378011c2520c7e8a61f40e7fa5a,Metadata:&PodSandboxMetadata{Name:kube-proxy-ll2cf,Uid:a5b82480-c0ed-4129-b570-a2f3d3a64d9e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720046919608172993,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ll2cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b82480-c0ed-4129-b570-a2f3d3a64d9e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-03T22:48:39.296284855Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dc4e249bcf07666e38bd4608c9ba90a1d80b56bfc980e5ccdae2fd57e2f58c36,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-224553
,Uid:e37d97992a7cb908d598d3286e8564ec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720046899392590030,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37d97992a7cb908d598d3286e8564ec,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.226:8443,kubernetes.io/config.hash: e37d97992a7cb908d598d3286e8564ec,kubernetes.io/config.seen: 2024-07-03T22:48:18.898275410Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:617b4d9325127ac353323f84d3444066b474ac032653d1141296db42c1ba2047,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-224553,Uid:090c0fd627e1381212e5d65203a04f22,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720046899376797339,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubern
etes.pod.name: kube-scheduler-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090c0fd627e1381212e5d65203a04f22,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 090c0fd627e1381212e5d65203a04f22,kubernetes.io/config.seen: 2024-07-03T22:48:18.898385116Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f49b94871cf9d37f57cf6e13e98e2f9e1ff7dcc83e90aba44aecc395026fc43b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-224553,Uid:de10e56abb835b85e60ca6ab00f4f6f6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720046899374393598,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de10e56abb835b85e60ca6ab00f4f6f6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: de10e56abb835b85e60ca6ab00f4f6f6,kubernetes.io/config.
seen: 2024-07-03T22:48:18.898382092Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:073d359ecbe7d6dccf2362b7590bcb46eb1b5caa557655c6ca67a1202e01b0a8,Metadata:&PodSandboxMetadata{Name:etcd-addons-224553,Uid:dff0448fe42247eb979c2fd89936b6fb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720046899372644757,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff0448fe42247eb979c2fd89936b6fb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.226:2379,kubernetes.io/config.hash: dff0448fe42247eb979c2fd89936b6fb,kubernetes.io/config.seen: 2024-07-03T22:48:18.898270099Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=915f0dbb-918d-4339-9119-e373f95c34a5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.578723419Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a8f20ee-2952-47b0-80b0-96c00d27d7b9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.578803001Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a8f20ee-2952-47b0-80b0-96c00d27d7b9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.579131260Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f52235fa0a9613b937fdbbd17e19c5339d3894877bc2ce3ee756b8c4f3400a2b,PodSandboxId:68614950f41c59c3c8ba3242e854be0e4bf43cb4b7bb5cd7696de1c0df39d208,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1720047246339888860,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-bp4c7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 247bf170-0735-4073-ae3b-a13c60e4856e,},Annotations:map[string]string{io.kubernetes.container.hash: 2f85a108,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a22a23f9b9158ac0e97415e2d2d1c1480b547c2d2e6446fd46e866155b9ba88b,PodSandboxId:41c255833c60d2d68345a05c9cbd26bdc02e65b0e625217558b6f6b05fbf830c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1720047113845846349,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-jgcbc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 1b695b04-d5ab-420b-8a5e-b5b4d5061b10,},Annota
tions:map[string]string{io.kubernetes.container.hash: 291157fc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad572c678c9cfcffc7496c7f4eb5376076ce60091f4a611eacdd724e69dea207,PodSandboxId:e1acd9efaa5936887236ca9544ecbf9d75822dbb764a2594e24685cbb634f59d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1720047103552714987,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 899aaab3-f1d8-46f2-ae17-b22a85faa208,},Annotations:map[string]string{io.kubernetes.container.hash: 66e0cbc2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0145cba5261b4869d37c39b2c25416226a5534393a271fa297ce3bcdfbccc26d,PodSandboxId:abecec91949461e40de39cc9e86d2544aef37c45bed78393279e2b0f53bd883a,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1720047076542978051,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-r8pwn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 991d5fad-2189-401e-a80f-68d5d68c19a2,},Annotations:map[string]string{io.kubernetes.container.hash: 7ad102a3,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c93eab29614ac567f68364e03dc0ac7d0da682b3cea65c8942687ed8d4b7b0,PodSandboxId:607376c1fc0a82aa086fde416a8f8bf51ce69b6855bc69f8fa8cef7d2f0892d5,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:17200
47019009017885,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-fwg4s,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d1102c91-2165-4a2c-adbf-945b4db26c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 17ccca4b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e7e7a13c49ee7980c64535894e8860a69e90361df2e27755455b4db34c89e1,PodSandboxId:4b3997824769e5d86ad97bc0ba7d23a2fc5a847667d6bba4917ef0ab5692cc9d,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1720046955995236518,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qv65x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c1c74d-f40a-4283-8091-ecace04f1283,},Annotations:map[string]string{io.kubernetes.container.hash: 64704d8e,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f8ea56161fc3f970558a77cb63a439a2d1a32385b056be0515c7eeed96f458,PodSandboxId:81dad2b1935bcd68efa151335245b03cd60ca40fce6ee4c03b2f2d5f06d40c3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720046927137249061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05e06fda-a0cf-4385-8cc1-55d7f00dbd4b,},Annotations:map[string]string{io.kubernetes.container.hash: 52766e41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8,PodSandboxId:344665b6c1ed8f298720baef2b2a7313d220512e7fcc393ede49ab3602639119,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720046924035087254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lgcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e61e787d-1169-403e-b844-fc0bbd9acd53,},Annotations:map[string]string{io.kubernetes.container.hash: 18abb899,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98,PodSan
dboxId:84559595da5fb80be845f50fafb25835ee132378011c2520c7e8a61f40e7fa5a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720046920041872821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll2cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b82480-c0ed-4129-b570-a2f3d3a64d9e,},Annotations:map[string]string{io.kubernetes.container.hash: f7dd78a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986,PodSandboxId:073d359ecbe7d6dccf2362b759
0bcb46eb1b5caa557655c6ca67a1202e01b0a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720046899611606959,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff0448fe42247eb979c2fd89936b6fb,},Annotations:map[string]string{io.kubernetes.container.hash: bf6cf3c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a,PodSandboxId:617b4d9325127ac353323f84d3444066b474ac032653d1141296db42c1ba2047,Metadata
:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720046899579190091,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090c0fd627e1381212e5d65203a04f22,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc,PodSandboxId:f49b94871cf9d37f57cf6e13e98e2f9e1ff7dcc83e90aba44aecc395026fc43b,Metadata:&ContainerMetadat
a{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720046899613531948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de10e56abb835b85e60ca6ab00f4f6f6,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd,PodSandboxId:dc4e249bcf07666e38bd4608c9ba90a1d80b56bfc980e5ccdae2fd57e2f58c36,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720046899568725893,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37d97992a7cb908d598d3286e8564ec,},Annotations:map[string]string{io.kubernetes.container.hash: db5057a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a8f20ee-2952-47b0-80b0-96c00d27d7b9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.602591887Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e259fc1-4352-4fb2-95cd-627f30f68614 name=/runtime.v1.RuntimeService/Version
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.602686900Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e259fc1-4352-4fb2-95cd-627f30f68614 name=/runtime.v1.RuntimeService/Version
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.604101705Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f79bc213-1c34-4055-8648-c707bc38b17e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.605447889Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720047424605419661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584678,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f79bc213-1c34-4055-8648-c707bc38b17e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.606023509Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d95508c-7420-4ef8-9dad-aac5f2736ae2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.606080881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d95508c-7420-4ef8-9dad-aac5f2736ae2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 22:57:04 addons-224553 crio[682]: time="2024-07-03 22:57:04.606461462Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f52235fa0a9613b937fdbbd17e19c5339d3894877bc2ce3ee756b8c4f3400a2b,PodSandboxId:68614950f41c59c3c8ba3242e854be0e4bf43cb4b7bb5cd7696de1c0df39d208,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1720047246339888860,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-bp4c7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 247bf170-0735-4073-ae3b-a13c60e4856e,},Annotations:map[string]string{io.kubernetes.container.hash: 2f85a108,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a22a23f9b9158ac0e97415e2d2d1c1480b547c2d2e6446fd46e866155b9ba88b,PodSandboxId:41c255833c60d2d68345a05c9cbd26bdc02e65b0e625217558b6f6b05fbf830c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1720047113845846349,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-jgcbc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 1b695b04-d5ab-420b-8a5e-b5b4d5061b10,},Annota
tions:map[string]string{io.kubernetes.container.hash: 291157fc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad572c678c9cfcffc7496c7f4eb5376076ce60091f4a611eacdd724e69dea207,PodSandboxId:e1acd9efaa5936887236ca9544ecbf9d75822dbb764a2594e24685cbb634f59d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1720047103552714987,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 899aaab3-f1d8-46f2-ae17-b22a85faa208,},Annotations:map[string]string{io.kubernetes.container.hash: 66e0cbc2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0145cba5261b4869d37c39b2c25416226a5534393a271fa297ce3bcdfbccc26d,PodSandboxId:abecec91949461e40de39cc9e86d2544aef37c45bed78393279e2b0f53bd883a,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1720047076542978051,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-r8pwn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 991d5fad-2189-401e-a80f-68d5d68c19a2,},Annotations:map[string]string{io.kubernetes.container.hash: 7ad102a3,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c93eab29614ac567f68364e03dc0ac7d0da682b3cea65c8942687ed8d4b7b0,PodSandboxId:607376c1fc0a82aa086fde416a8f8bf51ce69b6855bc69f8fa8cef7d2f0892d5,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:17200
47019009017885,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-fwg4s,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d1102c91-2165-4a2c-adbf-945b4db26c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 17ccca4b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e7e7a13c49ee7980c64535894e8860a69e90361df2e27755455b4db34c89e1,PodSandboxId:4b3997824769e5d86ad97bc0ba7d23a2fc5a847667d6bba4917ef0ab5692cc9d,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1720046955995236518,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qv65x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c1c74d-f40a-4283-8091-ecace04f1283,},Annotations:map[string]string{io.kubernetes.container.hash: 64704d8e,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f8ea56161fc3f970558a77cb63a439a2d1a32385b056be0515c7eeed96f458,PodSandboxId:81dad2b1935bcd68efa151335245b03cd60ca40fce6ee4c03b2f2d5f06d40c3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720046927137249061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05e06fda-a0cf-4385-8cc1-55d7f00dbd4b,},Annotations:map[string]string{io.kubernetes.container.hash: 52766e41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8,PodSandboxId:344665b6c1ed8f298720baef2b2a7313d220512e7fcc393ede49ab3602639119,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720046924035087254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lgcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e61e787d-1169-403e-b844-fc0bbd9acd53,},Annotations:map[string]string{io.kubernetes.container.hash: 18abb899,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98,PodSan
dboxId:84559595da5fb80be845f50fafb25835ee132378011c2520c7e8a61f40e7fa5a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720046920041872821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll2cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b82480-c0ed-4129-b570-a2f3d3a64d9e,},Annotations:map[string]string{io.kubernetes.container.hash: f7dd78a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986,PodSandboxId:073d359ecbe7d6dccf2362b759
0bcb46eb1b5caa557655c6ca67a1202e01b0a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720046899611606959,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff0448fe42247eb979c2fd89936b6fb,},Annotations:map[string]string{io.kubernetes.container.hash: bf6cf3c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a,PodSandboxId:617b4d9325127ac353323f84d3444066b474ac032653d1141296db42c1ba2047,Metadata
:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720046899579190091,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090c0fd627e1381212e5d65203a04f22,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc,PodSandboxId:f49b94871cf9d37f57cf6e13e98e2f9e1ff7dcc83e90aba44aecc395026fc43b,Metadata:&ContainerMetadat
a{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720046899613531948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de10e56abb835b85e60ca6ab00f4f6f6,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd,PodSandboxId:dc4e249bcf07666e38bd4608c9ba90a1d80b56bfc980e5ccdae2fd57e2f58c36,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720046899568725893,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-224553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37d97992a7cb908d598d3286e8564ec,},Annotations:map[string]string{io.kubernetes.container.hash: db5057a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d95508c-7420-4ef8-9dad-aac5f2736ae2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f52235fa0a961       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 2 minutes ago       Running             hello-world-app           0                   68614950f41c5       hello-world-app-86c47465fc-bp4c7
	a22a23f9b9158       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   5 minutes ago       Running             headlamp                  0                   41c255833c60d       headlamp-7867546754-jgcbc
	ad572c678c9cf       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                         5 minutes ago       Running             nginx                     0                   e1acd9efaa593       nginx
	0145cba5261b4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            5 minutes ago       Running             gcp-auth                  0                   abecec9194946       gcp-auth-5db96cd9b4-r8pwn
	04c93eab29614       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                         6 minutes ago       Running             yakd                      0                   607376c1fc0a8       yakd-dashboard-799879c74f-fwg4s
	82e7e7a13c49e       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Exited              metrics-server            0                   4b3997824769e       metrics-server-c59844bb4-qv65x
	89f8ea56161fc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   81dad2b1935bc       storage-provisioner
	9c8f870aa5bc4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   344665b6c1ed8       coredns-7db6d8ff4d-4lgcj
	b081398a9d47a       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                                        8 minutes ago       Running             kube-proxy                0                   84559595da5fb       kube-proxy-ll2cf
	aaf27a803ff5d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                                        8 minutes ago       Running             kube-controller-manager   0                   f49b94871cf9d       kube-controller-manager-addons-224553
	4f21a3a13f52a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   073d359ecbe7d       etcd-addons-224553
	ca14b1cc58451       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                                        8 minutes ago       Running             kube-scheduler            0                   617b4d9325127       kube-scheduler-addons-224553
	e547072b66d6f       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                                        8 minutes ago       Running             kube-apiserver            0                   dc4e249bcf076       kube-apiserver-addons-224553
	
	
	==> coredns [9c8f870aa5bc4904cd115c4766eda98712d4c45f1cf673912c223da418a4e5d8] <==
	[INFO] 10.244.0.7:49913 - 42337 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000234321s
	[INFO] 10.244.0.7:54572 - 7516 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000147321s
	[INFO] 10.244.0.7:54572 - 20305 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000123031s
	[INFO] 10.244.0.7:44451 - 39667 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000367579s
	[INFO] 10.244.0.7:44451 - 65526 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000423709s
	[INFO] 10.244.0.7:34244 - 36742 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000190089s
	[INFO] 10.244.0.7:34244 - 63616 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000426108s
	[INFO] 10.244.0.7:60229 - 32658 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00013172s
	[INFO] 10.244.0.7:60229 - 4255 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000144353s
	[INFO] 10.244.0.7:48420 - 13213 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000051056s
	[INFO] 10.244.0.7:48420 - 23195 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000044906s
	[INFO] 10.244.0.7:34551 - 33947 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000058016s
	[INFO] 10.244.0.7:34551 - 43165 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000076825s
	[INFO] 10.244.0.7:43797 - 4299 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000056715s
	[INFO] 10.244.0.7:43797 - 30925 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000086942s
	[INFO] 10.244.0.22:37776 - 32030 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000443647s
	[INFO] 10.244.0.22:48999 - 16478 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0023847s
	[INFO] 10.244.0.22:40808 - 37514 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00017725s
	[INFO] 10.244.0.22:44420 - 21317 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135901s
	[INFO] 10.244.0.22:56411 - 8575 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118728s
	[INFO] 10.244.0.22:51551 - 65329 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000058754s
	[INFO] 10.244.0.22:37559 - 22003 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001961516s
	[INFO] 10.244.0.22:41883 - 32017 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002113422s
	[INFO] 10.244.0.26:55107 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000362618s
	[INFO] 10.244.0.26:40518 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149917s
	
	
	==> describe nodes <==
	Name:               addons-224553
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-224553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=addons-224553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_03T22_48_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-224553
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 22:48:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-224553
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 22:56:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 22:54:34 +0000   Wed, 03 Jul 2024 22:48:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 22:54:34 +0000   Wed, 03 Jul 2024 22:48:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 22:54:34 +0000   Wed, 03 Jul 2024 22:48:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 22:54:34 +0000   Wed, 03 Jul 2024 22:48:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.226
	  Hostname:    addons-224553
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b4c35a1e2054f838c54e7ae0a0c423a
	  System UUID:                6b4c35a1-e205-4f83-8c54-e7ae0a0c423a
	  Boot ID:                    9c5f331b-d918-4ede-b228-99b4a7bc0ad8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-bp4c7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  gcp-auth                    gcp-auth-5db96cd9b4-r8pwn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  headlamp                    headlamp-7867546754-jgcbc                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 coredns-7db6d8ff4d-4lgcj                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m25s
	  kube-system                 etcd-addons-224553                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m39s
	  kube-system                 kube-apiserver-addons-224553             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-controller-manager-addons-224553    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-proxy-ll2cf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-scheduler-addons-224553             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  yakd-dashboard              yakd-dashboard-799879c74f-fwg4s          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     8m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m24s  kube-proxy       
	  Normal  Starting                 8m39s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m39s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m39s  kubelet          Node addons-224553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m39s  kubelet          Node addons-224553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m39s  kubelet          Node addons-224553 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m38s  kubelet          Node addons-224553 status is now: NodeReady
	  Normal  RegisteredNode           8m25s  node-controller  Node addons-224553 event: Registered Node addons-224553 in Controller
	
	
	==> dmesg <==
	[  +0.062479] systemd-fstab-generator[1500]: Ignoring "noauto" option for root device
	[  +5.157176] kauditd_printk_skb: 108 callbacks suppressed
	[  +5.390538] kauditd_printk_skb: 143 callbacks suppressed
	[  +9.350073] kauditd_printk_skb: 78 callbacks suppressed
	[Jul 3 22:50] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.884276] kauditd_printk_skb: 30 callbacks suppressed
	[ +18.515783] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.847190] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.085747] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.007967] kauditd_printk_skb: 18 callbacks suppressed
	[Jul 3 22:51] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.188809] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.212263] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.158047] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.059286] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.094660] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.006485] kauditd_printk_skb: 86 callbacks suppressed
	[  +5.235463] kauditd_printk_skb: 3 callbacks suppressed
	[  +9.634146] kauditd_printk_skb: 8 callbacks suppressed
	[Jul 3 22:52] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.790821] kauditd_printk_skb: 2 callbacks suppressed
	[Jul 3 22:53] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.496823] kauditd_printk_skb: 33 callbacks suppressed
	[Jul 3 22:54] kauditd_printk_skb: 6 callbacks suppressed
	[Jul 3 22:57] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [4f21a3a13f52a7e93571d8a9a075625a25a697fbe40c29683db366c04b701986] <==
	{"level":"warn","ts":"2024-07-03T22:51:12.404121Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.596267ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-03T22:51:12.405514Z","caller":"traceutil/trace.go:171","msg":"trace[1022240753] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1225; }","duration":"302.018515ms","start":"2024-07-03T22:51:12.103485Z","end":"2024-07-03T22:51:12.405504Z","steps":["trace[1022240753] 'range keys from in-memory index tree'  (duration: 300.549807ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T22:51:12.40554Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-03T22:51:12.103473Z","time spent":"302.058721ms","remote":"127.0.0.1:48534","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-07-03T22:51:12.405886Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"383.992825ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-03T22:51:12.405989Z","caller":"traceutil/trace.go:171","msg":"trace[1178731318] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1225; }","duration":"384.118289ms","start":"2024-07-03T22:51:12.021859Z","end":"2024-07-03T22:51:12.405977Z","steps":["trace[1178731318] 'range keys from in-memory index tree'  (duration: 383.83316ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T22:51:12.406021Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-03T22:51:12.02184Z","time spent":"384.173292ms","remote":"127.0.0.1:51216","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14386,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2024-07-03T22:51:43.450919Z","caller":"traceutil/trace.go:171","msg":"trace[1600560405] transaction","detail":"{read_only:false; response_revision:1484; number_of_response:1; }","duration":"206.785947ms","start":"2024-07-03T22:51:43.244116Z","end":"2024-07-03T22:51:43.450902Z","steps":["trace[1600560405] 'process raft request'  (duration: 206.66574ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T22:51:43.451285Z","caller":"traceutil/trace.go:171","msg":"trace[1120301592] linearizableReadLoop","detail":"{readStateIndex:1544; appliedIndex:1544; }","duration":"107.117578ms","start":"2024-07-03T22:51:43.344154Z","end":"2024-07-03T22:51:43.451272Z","steps":["trace[1120301592] 'read index received'  (duration: 107.114704ms)","trace[1120301592] 'applied index is now lower than readState.Index'  (duration: 2.248µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-03T22:51:43.451526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.345068ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-07-03T22:51:43.451571Z","caller":"traceutil/trace.go:171","msg":"trace[628866255] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1484; }","duration":"107.433621ms","start":"2024-07-03T22:51:43.344128Z","end":"2024-07-03T22:51:43.451561Z","steps":["trace[628866255] 'agreement among raft nodes before linearized reading'  (duration: 107.303572ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T22:51:53.72846Z","caller":"traceutil/trace.go:171","msg":"trace[1921893865] linearizableReadLoop","detail":"{readStateIndex:1611; appliedIndex:1610; }","duration":"149.291025ms","start":"2024-07-03T22:51:53.579144Z","end":"2024-07-03T22:51:53.728435Z","steps":["trace[1921893865] 'read index received'  (duration: 149.063003ms)","trace[1921893865] 'applied index is now lower than readState.Index'  (duration: 227.36µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-03T22:51:53.728646Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.469237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-03T22:51:53.728677Z","caller":"traceutil/trace.go:171","msg":"trace[36344324] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotclasses0; response_count:0; response_revision:1549; }","duration":"149.543312ms","start":"2024-07-03T22:51:53.579119Z","end":"2024-07-03T22:51:53.728663Z","steps":["trace[36344324] 'agreement among raft nodes before linearized reading'  (duration: 149.433415ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T22:51:53.728908Z","caller":"traceutil/trace.go:171","msg":"trace[1328688854] transaction","detail":"{read_only:false; response_revision:1549; number_of_response:1; }","duration":"228.251489ms","start":"2024-07-03T22:51:53.50065Z","end":"2024-07-03T22:51:53.728901Z","steps":["trace[1328688854] 'process raft request'  (duration: 227.600807ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T22:52:24.068616Z","caller":"traceutil/trace.go:171","msg":"trace[968524882] linearizableReadLoop","detail":"{readStateIndex:1709; appliedIndex:1708; }","duration":"100.99671ms","start":"2024-07-03T22:52:23.967577Z","end":"2024-07-03T22:52:24.068574Z","steps":["trace[968524882] 'read index received'  (duration: 100.903086ms)","trace[968524882] 'applied index is now lower than readState.Index'  (duration: 93.119µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-03T22:52:24.068842Z","caller":"traceutil/trace.go:171","msg":"trace[1750869291] transaction","detail":"{read_only:false; response_revision:1640; number_of_response:1; }","duration":"275.978375ms","start":"2024-07-03T22:52:23.792848Z","end":"2024-07-03T22:52:24.068827Z","steps":["trace[1750869291] 'process raft request'  (duration: 275.615725ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T22:52:24.068883Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.26031ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6032"}
	{"level":"info","ts":"2024-07-03T22:52:24.06891Z","caller":"traceutil/trace.go:171","msg":"trace[12657442] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1640; }","duration":"101.394159ms","start":"2024-07-03T22:52:23.967507Z","end":"2024-07-03T22:52:24.068902Z","steps":["trace[12657442] 'agreement among raft nodes before linearized reading'  (duration: 101.218394ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T22:52:56.928757Z","caller":"traceutil/trace.go:171","msg":"trace[503711177] linearizableReadLoop","detail":"{readStateIndex:1814; appliedIndex:1813; }","duration":"215.833758ms","start":"2024-07-03T22:52:56.712909Z","end":"2024-07-03T22:52:56.928743Z","steps":["trace[503711177] 'read index received'  (duration: 215.686327ms)","trace[503711177] 'applied index is now lower than readState.Index'  (duration: 146.927µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-03T22:52:56.928932Z","caller":"traceutil/trace.go:171","msg":"trace[1054765434] transaction","detail":"{read_only:false; response_revision:1736; number_of_response:1; }","duration":"344.218674ms","start":"2024-07-03T22:52:56.584705Z","end":"2024-07-03T22:52:56.928924Z","steps":["trace[1054765434] 'process raft request'  (duration: 343.927855ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T22:52:56.929069Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-03T22:52:56.584687Z","time spent":"344.28785ms","remote":"127.0.0.1:51194","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1734 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-03T22:52:56.929137Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.264717ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:497"}
	{"level":"info","ts":"2024-07-03T22:52:56.929185Z","caller":"traceutil/trace.go:171","msg":"trace[797791451] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1736; }","duration":"216.343962ms","start":"2024-07-03T22:52:56.712832Z","end":"2024-07-03T22:52:56.929176Z","steps":["trace[797791451] 'agreement among raft nodes before linearized reading'  (duration: 216.276552ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T22:52:56.929072Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.101657ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6126"}
	{"level":"info","ts":"2024-07-03T22:52:56.929414Z","caller":"traceutil/trace.go:171","msg":"trace[207312996] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1736; }","duration":"169.467353ms","start":"2024-07-03T22:52:56.759937Z","end":"2024-07-03T22:52:56.929404Z","steps":["trace[207312996] 'agreement among raft nodes before linearized reading'  (duration: 169.051631ms)"],"step_count":1}
	
	
	==> gcp-auth [0145cba5261b4869d37c39b2c25416226a5534393a271fa297ce3bcdfbccc26d] <==
	2024/07/03 22:51:16 GCP Auth Webhook started!
	2024/07/03 22:51:17 Ready to marshal response ...
	2024/07/03 22:51:17 Ready to write response ...
	2024/07/03 22:51:17 Ready to marshal response ...
	2024/07/03 22:51:17 Ready to write response ...
	2024/07/03 22:51:28 Ready to marshal response ...
	2024/07/03 22:51:28 Ready to write response ...
	2024/07/03 22:51:28 Ready to marshal response ...
	2024/07/03 22:51:28 Ready to write response ...
	2024/07/03 22:51:28 Ready to marshal response ...
	2024/07/03 22:51:28 Ready to write response ...
	2024/07/03 22:51:38 Ready to marshal response ...
	2024/07/03 22:51:38 Ready to write response ...
	2024/07/03 22:51:47 Ready to marshal response ...
	2024/07/03 22:51:47 Ready to write response ...
	2024/07/03 22:51:48 Ready to marshal response ...
	2024/07/03 22:51:48 Ready to write response ...
	2024/07/03 22:51:48 Ready to marshal response ...
	2024/07/03 22:51:48 Ready to write response ...
	2024/07/03 22:52:18 Ready to marshal response ...
	2024/07/03 22:52:18 Ready to write response ...
	2024/07/03 22:52:51 Ready to marshal response ...
	2024/07/03 22:52:51 Ready to write response ...
	2024/07/03 22:54:02 Ready to marshal response ...
	2024/07/03 22:54:02 Ready to write response ...
	
	
	==> kernel <==
	 22:57:05 up 9 min,  0 users,  load average: 0.41, 0.70, 0.49
	Linux addons-224553 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e547072b66d6fc4522e2e5fa8becbc6e433abf55fa24bda3b382bc5177e5b0fd] <==
	I0703 22:51:09.872367       1 trace.go:236] Trace[909029693]: "Update" accept:application/json, */*,audit-id:dbcdf4e7-380b-4ea5-803f-a6c49bed38a6,client:192.168.39.226,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (03-Jul-2024 22:51:09.364) (total time: 508ms):
	Trace[909029693]: ["GuaranteedUpdate etcd3" audit-id:dbcdf4e7-380b-4ea5-803f-a6c49bed38a6,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 507ms (22:51:09.364)
	Trace[909029693]:  ---"Txn call completed" 506ms (22:51:09.872)]
	Trace[909029693]: [508.186357ms] [508.186357ms] END
	I0703 22:51:38.379257       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0703 22:51:38.579647       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.146.183"}
	I0703 22:51:41.880458       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0703 22:51:42.974487       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0703 22:51:44.897202       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0703 22:51:47.969178       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.34.150"}
	I0703 22:52:34.038883       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0703 22:53:09.139676       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0703 22:53:09.139806       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0703 22:53:09.170282       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0703 22:53:09.170463       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0703 22:53:09.194099       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0703 22:53:09.194370       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0703 22:53:09.198844       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0703 22:53:09.198901       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0703 22:53:09.234585       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0703 22:53:09.234635       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0703 22:53:10.199506       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0703 22:53:10.235667       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0703 22:53:10.244531       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0703 22:54:02.736525       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.0.197"}
	
	
	==> kube-controller-manager [aaf27a803ff5de83c73b0ac3660cf2083c0c93075c740aab4ba9a18398a964dc] <==
	W0703 22:55:08.762517       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:55:08.762567       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0703 22:55:11.223738       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:55:11.223791       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0703 22:55:20.711064       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:55:20.711177       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0703 22:55:42.176051       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:55:42.176112       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0703 22:55:53.688367       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:55:53.688476       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0703 22:56:01.838982       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:56:01.839063       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0703 22:56:10.162291       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:56:10.162434       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0703 22:56:21.690868       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:56:21.690990       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0703 22:56:25.800130       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:56:25.800228       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0703 22:56:58.456610       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:56:58.456644       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0703 22:56:59.546068       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:56:59.546176       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0703 22:57:02.230126       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0703 22:57:02.230184       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0703 22:57:03.409484       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="115.283µs"
	
	
	==> kube-proxy [b081398a9d47a2804741f3099f8236500c6075f63a748aaeadd18e8298e38b98] <==
	I0703 22:48:40.636097       1 server_linux.go:69] "Using iptables proxy"
	I0703 22:48:40.667869       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.226"]
	I0703 22:48:40.758169       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0703 22:48:40.758206       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0703 22:48:40.758220       1 server_linux.go:165] "Using iptables Proxier"
	I0703 22:48:40.764443       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0703 22:48:40.764664       1 server.go:872] "Version info" version="v1.30.2"
	I0703 22:48:40.764683       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 22:48:40.767686       1 config.go:192] "Starting service config controller"
	I0703 22:48:40.767736       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0703 22:48:40.767767       1 config.go:101] "Starting endpoint slice config controller"
	I0703 22:48:40.767770       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0703 22:48:40.770878       1 config.go:319] "Starting node config controller"
	I0703 22:48:40.770887       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0703 22:48:40.868065       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0703 22:48:40.868110       1 shared_informer.go:320] Caches are synced for service config
	I0703 22:48:40.871123       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ca14b1cc58451fb74866c76fdb89f43cf9360c1c570365d44399bec4db07946a] <==
	W0703 22:48:22.642539       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0703 22:48:22.643592       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0703 22:48:22.642807       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0703 22:48:22.643659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0703 22:48:22.642923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0703 22:48:22.643716       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0703 22:48:22.643210       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0703 22:48:22.643342       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0703 22:48:22.643352       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0703 22:48:22.643467       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0703 22:48:23.622098       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0703 22:48:23.622198       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0703 22:48:23.797875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0703 22:48:23.797932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0703 22:48:23.804137       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0703 22:48:23.804207       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0703 22:48:23.827032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0703 22:48:23.827082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0703 22:48:23.908069       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0703 22:48:23.908115       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0703 22:48:23.938356       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0703 22:48:23.938474       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0703 22:48:23.943264       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0703 22:48:23.943981       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0703 22:48:26.733461       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 03 22:54:09 addons-224553 kubelet[1274]: I0703 22:54:09.265499    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6034c1ae-111a-424f-b9df-4e5c4d7e133c" path="/var/lib/kubelet/pods/6034c1ae-111a-424f-b9df-4e5c4d7e133c/volumes"
	Jul 03 22:54:25 addons-224553 kubelet[1274]: E0703 22:54:25.333436    1274 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 22:54:25 addons-224553 kubelet[1274]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 22:54:25 addons-224553 kubelet[1274]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 22:54:25 addons-224553 kubelet[1274]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 22:54:25 addons-224553 kubelet[1274]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 22:54:26 addons-224553 kubelet[1274]: I0703 22:54:26.371283    1274 scope.go:117] "RemoveContainer" containerID="767852826cab87e7a3c4a29c10a8cd1ddf6af9e537585131711a4748b8dd911b"
	Jul 03 22:54:26 addons-224553 kubelet[1274]: I0703 22:54:26.392587    1274 scope.go:117] "RemoveContainer" containerID="9faea61d4c984c36016a00c40a1509c309a4a453fb58ca1325ec5573b5545738"
	Jul 03 22:55:25 addons-224553 kubelet[1274]: E0703 22:55:25.331416    1274 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 22:55:25 addons-224553 kubelet[1274]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 22:55:25 addons-224553 kubelet[1274]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 22:55:25 addons-224553 kubelet[1274]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 22:55:25 addons-224553 kubelet[1274]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 22:56:25 addons-224553 kubelet[1274]: E0703 22:56:25.332877    1274 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 22:56:25 addons-224553 kubelet[1274]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 22:56:25 addons-224553 kubelet[1274]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 22:56:25 addons-224553 kubelet[1274]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 22:56:25 addons-224553 kubelet[1274]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 22:57:03 addons-224553 kubelet[1274]: I0703 22:57:03.431676    1274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-86c47465fc-bp4c7" podStartSLOduration=178.236464582 podStartE2EDuration="3m1.431639976s" podCreationTimestamp="2024-07-03 22:54:02 +0000 UTC" firstStartedPulling="2024-07-03 22:54:03.117451328 +0000 UTC m=+338.072742032" lastFinishedPulling="2024-07-03 22:54:06.312626722 +0000 UTC m=+341.267917426" observedRunningTime="2024-07-03 22:54:06.839412197 +0000 UTC m=+341.794702917" watchObservedRunningTime="2024-07-03 22:57:03.431639976 +0000 UTC m=+518.386930693"
	Jul 03 22:57:04 addons-224553 kubelet[1274]: I0703 22:57:04.967116    1274 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5l9d\" (UniqueName: \"kubernetes.io/projected/78c1c74d-f40a-4283-8091-ecace04f1283-kube-api-access-w5l9d\") pod \"78c1c74d-f40a-4283-8091-ecace04f1283\" (UID: \"78c1c74d-f40a-4283-8091-ecace04f1283\") "
	Jul 03 22:57:04 addons-224553 kubelet[1274]: I0703 22:57:04.967197    1274 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/78c1c74d-f40a-4283-8091-ecace04f1283-tmp-dir\") pod \"78c1c74d-f40a-4283-8091-ecace04f1283\" (UID: \"78c1c74d-f40a-4283-8091-ecace04f1283\") "
	Jul 03 22:57:04 addons-224553 kubelet[1274]: I0703 22:57:04.967662    1274 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78c1c74d-f40a-4283-8091-ecace04f1283-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "78c1c74d-f40a-4283-8091-ecace04f1283" (UID: "78c1c74d-f40a-4283-8091-ecace04f1283"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 03 22:57:04 addons-224553 kubelet[1274]: I0703 22:57:04.979663    1274 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c1c74d-f40a-4283-8091-ecace04f1283-kube-api-access-w5l9d" (OuterVolumeSpecName: "kube-api-access-w5l9d") pod "78c1c74d-f40a-4283-8091-ecace04f1283" (UID: "78c1c74d-f40a-4283-8091-ecace04f1283"). InnerVolumeSpecName "kube-api-access-w5l9d". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 03 22:57:05 addons-224553 kubelet[1274]: I0703 22:57:05.068431    1274 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w5l9d\" (UniqueName: \"kubernetes.io/projected/78c1c74d-f40a-4283-8091-ecace04f1283-kube-api-access-w5l9d\") on node \"addons-224553\" DevicePath \"\""
	Jul 03 22:57:05 addons-224553 kubelet[1274]: I0703 22:57:05.068484    1274 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/78c1c74d-f40a-4283-8091-ecace04f1283-tmp-dir\") on node \"addons-224553\" DevicePath \"\""
	
	
	==> storage-provisioner [89f8ea56161fc3f970558a77cb63a439a2d1a32385b056be0515c7eeed96f458] <==
	I0703 22:48:48.950151       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0703 22:48:49.035195       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0703 22:48:49.037525       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0703 22:48:49.060955       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0703 22:48:49.069240       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40cae642-3d03-41f1-8256-6b1ca176ed1d", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-224553_4a02f5d1-94c9-498e-8e4e-466734425f26 became leader
	I0703 22:48:49.070883       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-224553_4a02f5d1-94c9-498e-8e4e-466734425f26!
	I0703 22:48:49.171578       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-224553_4a02f5d1-94c9-498e-8e4e-466734425f26!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-224553 -n addons-224553
helpers_test.go:261: (dbg) Run:  kubectl --context addons-224553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (342.31s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.3s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-224553
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-224553: exit status 82 (2m0.472398719s)

                                                
                                                
-- stdout --
	* Stopping node "addons-224553"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-224553" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-224553
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-224553: exit status 11 (21.542272887s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-224553" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-224553
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-224553: exit status 11 (6.144203388s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-224553" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-224553
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-224553: exit status 11 (6.142971592s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-224553" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.30s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 node stop m02 -v=7 --alsologtostderr
E0703 23:09:38.318943   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
E0703 23:10:19.279231   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
E0703 23:11:17.046840   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-856893 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.478104537s)

                                                
                                                
-- stdout --
	* Stopping node "ha-856893-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 23:09:35.617813   31349 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:09:35.618160   31349 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:09:35.618181   31349 out.go:304] Setting ErrFile to fd 2...
	I0703 23:09:35.618188   31349 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:09:35.618417   31349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 23:09:35.618663   31349 mustload.go:65] Loading cluster: ha-856893
	I0703 23:09:35.619016   31349 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:09:35.619030   31349 stop.go:39] StopHost: ha-856893-m02
	I0703 23:09:35.619373   31349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:09:35.619426   31349 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:09:35.636251   31349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34607
	I0703 23:09:35.636830   31349 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:09:35.637516   31349 main.go:141] libmachine: Using API Version  1
	I0703 23:09:35.637542   31349 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:09:35.638004   31349 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:09:35.640412   31349 out.go:177] * Stopping node "ha-856893-m02"  ...
	I0703 23:09:35.641854   31349 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0703 23:09:35.641904   31349 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:09:35.642187   31349 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0703 23:09:35.642222   31349 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:09:35.645206   31349 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:09:35.645502   31349 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:09:35.645539   31349 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:09:35.645741   31349 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:09:35.645941   31349 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:09:35.646072   31349 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:09:35.646281   31349 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa Username:docker}
	I0703 23:09:35.735209   31349 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0703 23:09:35.790188   31349 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0703 23:09:35.846713   31349 main.go:141] libmachine: Stopping "ha-856893-m02"...
	I0703 23:09:35.846740   31349 main.go:141] libmachine: (ha-856893-m02) Calling .GetState
	I0703 23:09:35.848093   31349 main.go:141] libmachine: (ha-856893-m02) Calling .Stop
	I0703 23:09:35.851668   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 0/120
	I0703 23:09:36.853143   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 1/120
	I0703 23:09:37.855309   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 2/120
	I0703 23:09:38.856697   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 3/120
	I0703 23:09:39.858571   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 4/120
	I0703 23:09:40.860426   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 5/120
	I0703 23:09:41.861787   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 6/120
	I0703 23:09:42.863326   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 7/120
	I0703 23:09:43.864583   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 8/120
	I0703 23:09:44.865951   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 9/120
	I0703 23:09:45.867952   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 10/120
	I0703 23:09:46.869339   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 11/120
	I0703 23:09:47.870818   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 12/120
	I0703 23:09:48.872281   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 13/120
	I0703 23:09:49.874412   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 14/120
	I0703 23:09:50.876385   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 15/120
	I0703 23:09:51.877847   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 16/120
	I0703 23:09:52.879729   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 17/120
	I0703 23:09:53.881323   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 18/120
	I0703 23:09:54.882674   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 19/120
	I0703 23:09:55.884860   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 20/120
	I0703 23:09:56.886287   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 21/120
	I0703 23:09:57.887631   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 22/120
	I0703 23:09:58.888930   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 23/120
	I0703 23:09:59.890571   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 24/120
	I0703 23:10:00.892881   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 25/120
	I0703 23:10:01.894358   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 26/120
	I0703 23:10:02.895848   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 27/120
	I0703 23:10:03.897914   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 28/120
	I0703 23:10:04.899415   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 29/120
	I0703 23:10:05.901712   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 30/120
	I0703 23:10:06.903407   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 31/120
	I0703 23:10:07.904888   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 32/120
	I0703 23:10:08.906267   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 33/120
	I0703 23:10:09.908369   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 34/120
	I0703 23:10:10.910219   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 35/120
	I0703 23:10:11.911481   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 36/120
	I0703 23:10:12.913698   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 37/120
	I0703 23:10:13.915494   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 38/120
	I0703 23:10:14.917443   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 39/120
	I0703 23:10:15.919472   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 40/120
	I0703 23:10:16.921453   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 41/120
	I0703 23:10:17.923010   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 42/120
	I0703 23:10:18.924673   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 43/120
	I0703 23:10:19.926328   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 44/120
	I0703 23:10:20.928172   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 45/120
	I0703 23:10:21.930446   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 46/120
	I0703 23:10:22.931930   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 47/120
	I0703 23:10:23.934129   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 48/120
	I0703 23:10:24.935530   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 49/120
	I0703 23:10:25.937556   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 50/120
	I0703 23:10:26.938958   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 51/120
	I0703 23:10:27.940434   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 52/120
	I0703 23:10:28.941788   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 53/120
	I0703 23:10:29.943224   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 54/120
	I0703 23:10:30.944636   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 55/120
	I0703 23:10:31.946684   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 56/120
	I0703 23:10:32.948000   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 57/120
	I0703 23:10:33.949306   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 58/120
	I0703 23:10:34.950646   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 59/120
	I0703 23:10:35.952638   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 60/120
	I0703 23:10:36.954804   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 61/120
	I0703 23:10:37.956234   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 62/120
	I0703 23:10:38.958342   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 63/120
	I0703 23:10:39.959961   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 64/120
	I0703 23:10:40.961514   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 65/120
	I0703 23:10:41.963147   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 66/120
	I0703 23:10:42.964556   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 67/120
	I0703 23:10:43.966484   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 68/120
	I0703 23:10:44.967992   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 69/120
	I0703 23:10:45.970196   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 70/120
	I0703 23:10:46.972214   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 71/120
	I0703 23:10:47.973559   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 72/120
	I0703 23:10:48.974939   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 73/120
	I0703 23:10:49.976839   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 74/120
	I0703 23:10:50.978829   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 75/120
	I0703 23:10:51.980736   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 76/120
	I0703 23:10:52.983227   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 77/120
	I0703 23:10:53.984780   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 78/120
	I0703 23:10:54.986332   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 79/120
	I0703 23:10:55.988743   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 80/120
	I0703 23:10:56.990105   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 81/120
	I0703 23:10:57.991427   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 82/120
	I0703 23:10:58.992827   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 83/120
	I0703 23:10:59.994669   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 84/120
	I0703 23:11:00.996263   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 85/120
	I0703 23:11:01.998283   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 86/120
	I0703 23:11:02.999815   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 87/120
	I0703 23:11:04.001036   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 88/120
	I0703 23:11:05.002480   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 89/120
	I0703 23:11:06.004651   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 90/120
	I0703 23:11:07.006425   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 91/120
	I0703 23:11:08.007906   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 92/120
	I0703 23:11:09.009368   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 93/120
	I0703 23:11:10.010928   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 94/120
	I0703 23:11:11.012796   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 95/120
	I0703 23:11:12.014331   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 96/120
	I0703 23:11:13.015671   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 97/120
	I0703 23:11:14.017050   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 98/120
	I0703 23:11:15.018551   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 99/120
	I0703 23:11:16.020854   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 100/120
	I0703 23:11:17.022422   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 101/120
	I0703 23:11:18.024619   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 102/120
	I0703 23:11:19.026032   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 103/120
	I0703 23:11:20.027846   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 104/120
	I0703 23:11:21.029713   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 105/120
	I0703 23:11:22.031139   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 106/120
	I0703 23:11:23.032553   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 107/120
	I0703 23:11:24.034658   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 108/120
	I0703 23:11:25.035957   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 109/120
	I0703 23:11:26.037525   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 110/120
	I0703 23:11:27.038845   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 111/120
	I0703 23:11:28.040428   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 112/120
	I0703 23:11:29.042538   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 113/120
	I0703 23:11:30.043924   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 114/120
	I0703 23:11:31.045826   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 115/120
	I0703 23:11:32.047037   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 116/120
	I0703 23:11:33.049039   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 117/120
	I0703 23:11:34.050595   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 118/120
	I0703 23:11:35.052011   31349 main.go:141] libmachine: (ha-856893-m02) Waiting for machine to stop 119/120
	I0703 23:11:36.052448   31349 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0703 23:11:36.052580   31349 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-856893 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr
E0703 23:11:41.200120   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
ha_test.go:369: (dbg) Done: out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr: (18.763476859s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-856893 -n ha-856893
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-856893 logs -n 25: (1.565699642s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-856893 cp ha-856893-m03:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile600531838/001/cp-test_ha-856893-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m03:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893:/home/docker/cp-test_ha-856893-m03_ha-856893.txt                      |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893 sudo cat                                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m03_ha-856893.txt                                |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m03:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m02:/home/docker/cp-test_ha-856893-m03_ha-856893-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893-m02 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m03_ha-856893-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m03:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04:/home/docker/cp-test_ha-856893-m03_ha-856893-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893-m04 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m03_ha-856893-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-856893 cp testdata/cp-test.txt                                               | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile600531838/001/cp-test_ha-856893-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893:/home/docker/cp-test_ha-856893-m04_ha-856893.txt                      |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893 sudo cat                                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m04_ha-856893.txt                                |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m02:/home/docker/cp-test_ha-856893-m04_ha-856893-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893-m02 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m04_ha-856893-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03:/home/docker/cp-test_ha-856893-m04_ha-856893-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893-m03 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m04_ha-856893-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-856893 node stop m02 -v=7                                                    | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 23:04:49
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 23:04:49.303938   27242 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:04:49.304205   27242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:04:49.304217   27242 out.go:304] Setting ErrFile to fd 2...
	I0703 23:04:49.304221   27242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:04:49.304418   27242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 23:04:49.304993   27242 out.go:298] Setting JSON to false
	I0703 23:04:49.305930   27242 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2829,"bootTime":1720045060,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:04:49.305987   27242 start.go:139] virtualization: kvm guest
	I0703 23:04:49.308231   27242 out.go:177] * [ha-856893] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 23:04:49.309607   27242 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 23:04:49.309635   27242 notify.go:220] Checking for updates...
	I0703 23:04:49.312119   27242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:04:49.313313   27242 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:04:49.314518   27242 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:04:49.315705   27242 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 23:04:49.316858   27242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 23:04:49.318260   27242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:04:49.353555   27242 out.go:177] * Using the kvm2 driver based on user configuration
	I0703 23:04:49.354873   27242 start.go:297] selected driver: kvm2
	I0703 23:04:49.354888   27242 start.go:901] validating driver "kvm2" against <nil>
	I0703 23:04:49.354902   27242 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 23:04:49.355866   27242 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:04:49.355965   27242 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 23:04:49.371321   27242 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 23:04:49.371369   27242 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0703 23:04:49.371558   27242 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 23:04:49.371586   27242 cni.go:84] Creating CNI manager for ""
	I0703 23:04:49.371590   27242 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0703 23:04:49.371596   27242 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0703 23:04:49.371647   27242 start.go:340] cluster config:
	{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:04:49.371752   27242 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:04:49.373469   27242 out.go:177] * Starting "ha-856893" primary control-plane node in "ha-856893" cluster
	I0703 23:04:49.374783   27242 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:04:49.374822   27242 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0703 23:04:49.374831   27242 cache.go:56] Caching tarball of preloaded images
	I0703 23:04:49.374914   27242 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:04:49.374925   27242 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 23:04:49.375209   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:04:49.375227   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json: {Name:mkf45f45e81b9e1937bda66f4e2b577ad75b58d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:04:49.375355   27242 start.go:360] acquireMachinesLock for ha-856893: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 23:04:49.375381   27242 start.go:364] duration metric: took 13.613µs to acquireMachinesLock for "ha-856893"
	I0703 23:04:49.375397   27242 start.go:93] Provisioning new machine with config: &{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:04:49.375447   27242 start.go:125] createHost starting for "" (driver="kvm2")
	I0703 23:04:49.377146   27242 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0703 23:04:49.377284   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:04:49.377347   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:04:49.391658   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44801
	I0703 23:04:49.392204   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:04:49.392806   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:04:49.392829   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:04:49.393132   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:04:49.393315   27242 main.go:141] libmachine: (ha-856893) Calling .GetMachineName
	I0703 23:04:49.393456   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:04:49.393665   27242 start.go:159] libmachine.API.Create for "ha-856893" (driver="kvm2")
	I0703 23:04:49.393703   27242 client.go:168] LocalClient.Create starting
	I0703 23:04:49.393738   27242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem
	I0703 23:04:49.393776   27242 main.go:141] libmachine: Decoding PEM data...
	I0703 23:04:49.393790   27242 main.go:141] libmachine: Parsing certificate...
	I0703 23:04:49.393832   27242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem
	I0703 23:04:49.393849   27242 main.go:141] libmachine: Decoding PEM data...
	I0703 23:04:49.393861   27242 main.go:141] libmachine: Parsing certificate...
	I0703 23:04:49.393879   27242 main.go:141] libmachine: Running pre-create checks...
	I0703 23:04:49.393887   27242 main.go:141] libmachine: (ha-856893) Calling .PreCreateCheck
	I0703 23:04:49.394261   27242 main.go:141] libmachine: (ha-856893) Calling .GetConfigRaw
	I0703 23:04:49.394643   27242 main.go:141] libmachine: Creating machine...
	I0703 23:04:49.394655   27242 main.go:141] libmachine: (ha-856893) Calling .Create
	I0703 23:04:49.394757   27242 main.go:141] libmachine: (ha-856893) Creating KVM machine...
	I0703 23:04:49.395897   27242 main.go:141] libmachine: (ha-856893) DBG | found existing default KVM network
	I0703 23:04:49.396588   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:49.396439   27265 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0703 23:04:49.396611   27242 main.go:141] libmachine: (ha-856893) DBG | created network xml: 
	I0703 23:04:49.396624   27242 main.go:141] libmachine: (ha-856893) DBG | <network>
	I0703 23:04:49.396638   27242 main.go:141] libmachine: (ha-856893) DBG |   <name>mk-ha-856893</name>
	I0703 23:04:49.396648   27242 main.go:141] libmachine: (ha-856893) DBG |   <dns enable='no'/>
	I0703 23:04:49.396658   27242 main.go:141] libmachine: (ha-856893) DBG |   
	I0703 23:04:49.396672   27242 main.go:141] libmachine: (ha-856893) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0703 23:04:49.396682   27242 main.go:141] libmachine: (ha-856893) DBG |     <dhcp>
	I0703 23:04:49.396695   27242 main.go:141] libmachine: (ha-856893) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0703 23:04:49.396705   27242 main.go:141] libmachine: (ha-856893) DBG |     </dhcp>
	I0703 23:04:49.396713   27242 main.go:141] libmachine: (ha-856893) DBG |   </ip>
	I0703 23:04:49.396722   27242 main.go:141] libmachine: (ha-856893) DBG |   
	I0703 23:04:49.396747   27242 main.go:141] libmachine: (ha-856893) DBG | </network>
	I0703 23:04:49.396767   27242 main.go:141] libmachine: (ha-856893) DBG | 
	I0703 23:04:49.401937   27242 main.go:141] libmachine: (ha-856893) DBG | trying to create private KVM network mk-ha-856893 192.168.39.0/24...
	I0703 23:04:49.466045   27242 main.go:141] libmachine: (ha-856893) DBG | private KVM network mk-ha-856893 192.168.39.0/24 created
	I0703 23:04:49.466078   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:49.465979   27265 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:04:49.466090   27242 main.go:141] libmachine: (ha-856893) Setting up store path in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893 ...
	I0703 23:04:49.466112   27242 main.go:141] libmachine: (ha-856893) Building disk image from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0703 23:04:49.466139   27242 main.go:141] libmachine: (ha-856893) Downloading /home/jenkins/minikube-integration/18998-9396/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso...
	I0703 23:04:49.697240   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:49.697136   27265 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa...
	I0703 23:04:49.882712   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:49.882599   27265 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/ha-856893.rawdisk...
	I0703 23:04:49.882738   27242 main.go:141] libmachine: (ha-856893) DBG | Writing magic tar header
	I0703 23:04:49.882748   27242 main.go:141] libmachine: (ha-856893) DBG | Writing SSH key tar header
	I0703 23:04:49.882772   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:49.882735   27265 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893 ...
	I0703 23:04:49.882887   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893
	I0703 23:04:49.882920   27242 main.go:141] libmachine: (ha-856893) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893 (perms=drwx------)
	I0703 23:04:49.882933   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines
	I0703 23:04:49.882948   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:04:49.882958   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396
	I0703 23:04:49.882966   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0703 23:04:49.882975   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home/jenkins
	I0703 23:04:49.882984   27242 main.go:141] libmachine: (ha-856893) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines (perms=drwxr-xr-x)
	I0703 23:04:49.882994   27242 main.go:141] libmachine: (ha-856893) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube (perms=drwxr-xr-x)
	I0703 23:04:49.882999   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home
	I0703 23:04:49.883009   27242 main.go:141] libmachine: (ha-856893) DBG | Skipping /home - not owner
	I0703 23:04:49.883025   27242 main.go:141] libmachine: (ha-856893) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396 (perms=drwxrwxr-x)
	I0703 23:04:49.883039   27242 main.go:141] libmachine: (ha-856893) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0703 23:04:49.883051   27242 main.go:141] libmachine: (ha-856893) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0703 23:04:49.883062   27242 main.go:141] libmachine: (ha-856893) Creating domain...
	I0703 23:04:49.884190   27242 main.go:141] libmachine: (ha-856893) define libvirt domain using xml: 
	I0703 23:04:49.884219   27242 main.go:141] libmachine: (ha-856893) <domain type='kvm'>
	I0703 23:04:49.884229   27242 main.go:141] libmachine: (ha-856893)   <name>ha-856893</name>
	I0703 23:04:49.884242   27242 main.go:141] libmachine: (ha-856893)   <memory unit='MiB'>2200</memory>
	I0703 23:04:49.884251   27242 main.go:141] libmachine: (ha-856893)   <vcpu>2</vcpu>
	I0703 23:04:49.884257   27242 main.go:141] libmachine: (ha-856893)   <features>
	I0703 23:04:49.884266   27242 main.go:141] libmachine: (ha-856893)     <acpi/>
	I0703 23:04:49.884273   27242 main.go:141] libmachine: (ha-856893)     <apic/>
	I0703 23:04:49.884284   27242 main.go:141] libmachine: (ha-856893)     <pae/>
	I0703 23:04:49.884302   27242 main.go:141] libmachine: (ha-856893)     
	I0703 23:04:49.884313   27242 main.go:141] libmachine: (ha-856893)   </features>
	I0703 23:04:49.884325   27242 main.go:141] libmachine: (ha-856893)   <cpu mode='host-passthrough'>
	I0703 23:04:49.884337   27242 main.go:141] libmachine: (ha-856893)   
	I0703 23:04:49.884343   27242 main.go:141] libmachine: (ha-856893)   </cpu>
	I0703 23:04:49.884354   27242 main.go:141] libmachine: (ha-856893)   <os>
	I0703 23:04:49.884364   27242 main.go:141] libmachine: (ha-856893)     <type>hvm</type>
	I0703 23:04:49.884374   27242 main.go:141] libmachine: (ha-856893)     <boot dev='cdrom'/>
	I0703 23:04:49.884383   27242 main.go:141] libmachine: (ha-856893)     <boot dev='hd'/>
	I0703 23:04:49.884394   27242 main.go:141] libmachine: (ha-856893)     <bootmenu enable='no'/>
	I0703 23:04:49.884406   27242 main.go:141] libmachine: (ha-856893)   </os>
	I0703 23:04:49.884433   27242 main.go:141] libmachine: (ha-856893)   <devices>
	I0703 23:04:49.884459   27242 main.go:141] libmachine: (ha-856893)     <disk type='file' device='cdrom'>
	I0703 23:04:49.884478   27242 main.go:141] libmachine: (ha-856893)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/boot2docker.iso'/>
	I0703 23:04:49.884490   27242 main.go:141] libmachine: (ha-856893)       <target dev='hdc' bus='scsi'/>
	I0703 23:04:49.884520   27242 main.go:141] libmachine: (ha-856893)       <readonly/>
	I0703 23:04:49.884539   27242 main.go:141] libmachine: (ha-856893)     </disk>
	I0703 23:04:49.884550   27242 main.go:141] libmachine: (ha-856893)     <disk type='file' device='disk'>
	I0703 23:04:49.884564   27242 main.go:141] libmachine: (ha-856893)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0703 23:04:49.884581   27242 main.go:141] libmachine: (ha-856893)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/ha-856893.rawdisk'/>
	I0703 23:04:49.884592   27242 main.go:141] libmachine: (ha-856893)       <target dev='hda' bus='virtio'/>
	I0703 23:04:49.884605   27242 main.go:141] libmachine: (ha-856893)     </disk>
	I0703 23:04:49.884623   27242 main.go:141] libmachine: (ha-856893)     <interface type='network'>
	I0703 23:04:49.884635   27242 main.go:141] libmachine: (ha-856893)       <source network='mk-ha-856893'/>
	I0703 23:04:49.884644   27242 main.go:141] libmachine: (ha-856893)       <model type='virtio'/>
	I0703 23:04:49.884657   27242 main.go:141] libmachine: (ha-856893)     </interface>
	I0703 23:04:49.884668   27242 main.go:141] libmachine: (ha-856893)     <interface type='network'>
	I0703 23:04:49.884679   27242 main.go:141] libmachine: (ha-856893)       <source network='default'/>
	I0703 23:04:49.884694   27242 main.go:141] libmachine: (ha-856893)       <model type='virtio'/>
	I0703 23:04:49.884705   27242 main.go:141] libmachine: (ha-856893)     </interface>
	I0703 23:04:49.884715   27242 main.go:141] libmachine: (ha-856893)     <serial type='pty'>
	I0703 23:04:49.884736   27242 main.go:141] libmachine: (ha-856893)       <target port='0'/>
	I0703 23:04:49.884745   27242 main.go:141] libmachine: (ha-856893)     </serial>
	I0703 23:04:49.884761   27242 main.go:141] libmachine: (ha-856893)     <console type='pty'>
	I0703 23:04:49.884777   27242 main.go:141] libmachine: (ha-856893)       <target type='serial' port='0'/>
	I0703 23:04:49.884789   27242 main.go:141] libmachine: (ha-856893)     </console>
	I0703 23:04:49.884799   27242 main.go:141] libmachine: (ha-856893)     <rng model='virtio'>
	I0703 23:04:49.884810   27242 main.go:141] libmachine: (ha-856893)       <backend model='random'>/dev/random</backend>
	I0703 23:04:49.884819   27242 main.go:141] libmachine: (ha-856893)     </rng>
	I0703 23:04:49.884831   27242 main.go:141] libmachine: (ha-856893)     
	I0703 23:04:49.884838   27242 main.go:141] libmachine: (ha-856893)     
	I0703 23:04:49.884855   27242 main.go:141] libmachine: (ha-856893)   </devices>
	I0703 23:04:49.884874   27242 main.go:141] libmachine: (ha-856893) </domain>
	I0703 23:04:49.884887   27242 main.go:141] libmachine: (ha-856893) 
	I0703 23:04:49.889408   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:7f:ab:67 in network default
	I0703 23:04:49.890000   27242 main.go:141] libmachine: (ha-856893) Ensuring networks are active...
	I0703 23:04:49.890020   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:49.890827   27242 main.go:141] libmachine: (ha-856893) Ensuring network default is active
	I0703 23:04:49.891173   27242 main.go:141] libmachine: (ha-856893) Ensuring network mk-ha-856893 is active
	I0703 23:04:49.891707   27242 main.go:141] libmachine: (ha-856893) Getting domain xml...
	I0703 23:04:49.892417   27242 main.go:141] libmachine: (ha-856893) Creating domain...
	I0703 23:04:51.076607   27242 main.go:141] libmachine: (ha-856893) Waiting to get IP...
	I0703 23:04:51.077509   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:51.077950   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:51.078001   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:51.077954   27265 retry.go:31] will retry after 279.728515ms: waiting for machine to come up
	I0703 23:04:51.359420   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:51.359916   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:51.359951   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:51.359884   27265 retry.go:31] will retry after 247.648785ms: waiting for machine to come up
	I0703 23:04:51.609238   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:51.609581   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:51.609605   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:51.609536   27265 retry.go:31] will retry after 462.632413ms: waiting for machine to come up
	I0703 23:04:52.074013   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:52.074458   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:52.074495   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:52.074436   27265 retry.go:31] will retry after 535.361005ms: waiting for machine to come up
	I0703 23:04:52.611006   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:52.611471   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:52.611499   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:52.611417   27265 retry.go:31] will retry after 566.856393ms: waiting for machine to come up
	I0703 23:04:53.180116   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:53.180549   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:53.180572   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:53.180514   27265 retry.go:31] will retry after 893.437933ms: waiting for machine to come up
	I0703 23:04:54.075051   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:54.075493   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:54.075541   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:54.075436   27265 retry.go:31] will retry after 1.153111216s: waiting for machine to come up
	I0703 23:04:55.229683   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:55.230080   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:55.230099   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:55.230058   27265 retry.go:31] will retry after 1.209590198s: waiting for machine to come up
	I0703 23:04:56.441430   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:56.441787   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:56.441815   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:56.441765   27265 retry.go:31] will retry after 1.140725525s: waiting for machine to come up
	I0703 23:04:57.583965   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:57.584360   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:57.584387   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:57.584309   27265 retry.go:31] will retry after 2.005681822s: waiting for machine to come up
	I0703 23:04:59.591365   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:59.591779   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:59.591807   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:59.591747   27265 retry.go:31] will retry after 2.709221348s: waiting for machine to come up
	I0703 23:05:02.304438   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:02.304759   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:05:02.304799   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:05:02.304723   27265 retry.go:31] will retry after 3.359635089s: waiting for machine to come up
	I0703 23:05:05.666017   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:05.666403   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:05:05.666432   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:05:05.666364   27265 retry.go:31] will retry after 3.83770662s: waiting for machine to come up
	I0703 23:05:09.505078   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.505551   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has current primary IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.505566   27242 main.go:141] libmachine: (ha-856893) Found IP for machine: 192.168.39.172
	I0703 23:05:09.505579   27242 main.go:141] libmachine: (ha-856893) Reserving static IP address...
	I0703 23:05:09.505883   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find host DHCP lease matching {name: "ha-856893", mac: "52:54:00:f8:43:23", ip: "192.168.39.172"} in network mk-ha-856893
	I0703 23:05:09.585944   27242 main.go:141] libmachine: (ha-856893) DBG | Getting to WaitForSSH function...
	I0703 23:05:09.585974   27242 main.go:141] libmachine: (ha-856893) Reserved static IP address: 192.168.39.172
	I0703 23:05:09.585992   27242 main.go:141] libmachine: (ha-856893) Waiting for SSH to be available...
	I0703 23:05:09.588555   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.589004   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:09.589032   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.589229   27242 main.go:141] libmachine: (ha-856893) DBG | Using SSH client type: external
	I0703 23:05:09.589251   27242 main.go:141] libmachine: (ha-856893) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa (-rw-------)
	I0703 23:05:09.589277   27242 main.go:141] libmachine: (ha-856893) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 23:05:09.589292   27242 main.go:141] libmachine: (ha-856893) DBG | About to run SSH command:
	I0703 23:05:09.589321   27242 main.go:141] libmachine: (ha-856893) DBG | exit 0
	I0703 23:05:09.716024   27242 main.go:141] libmachine: (ha-856893) DBG | SSH cmd err, output: <nil>: 
	I0703 23:05:09.716309   27242 main.go:141] libmachine: (ha-856893) KVM machine creation complete!
	I0703 23:05:09.716633   27242 main.go:141] libmachine: (ha-856893) Calling .GetConfigRaw
	I0703 23:05:09.717150   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:09.717368   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:09.717544   27242 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0703 23:05:09.717558   27242 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:05:09.718761   27242 main.go:141] libmachine: Detecting operating system of created instance...
	I0703 23:05:09.718778   27242 main.go:141] libmachine: Waiting for SSH to be available...
	I0703 23:05:09.718786   27242 main.go:141] libmachine: Getting to WaitForSSH function...
	I0703 23:05:09.718793   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:09.720891   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.721227   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:09.721246   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.721398   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:09.721581   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:09.721736   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:09.721884   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:09.722050   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:05:09.722255   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:05:09.722270   27242 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0703 23:05:09.827380   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:05:09.827404   27242 main.go:141] libmachine: Detecting the provisioner...
	I0703 23:05:09.827412   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:09.830421   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.830736   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:09.830762   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.830957   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:09.831181   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:09.831359   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:09.831522   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:09.831674   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:05:09.831845   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:05:09.831858   27242 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0703 23:05:09.940700   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0703 23:05:09.940805   27242 main.go:141] libmachine: found compatible host: buildroot
	I0703 23:05:09.940820   27242 main.go:141] libmachine: Provisioning with buildroot...
	I0703 23:05:09.940836   27242 main.go:141] libmachine: (ha-856893) Calling .GetMachineName
	I0703 23:05:09.941067   27242 buildroot.go:166] provisioning hostname "ha-856893"
	I0703 23:05:09.941088   27242 main.go:141] libmachine: (ha-856893) Calling .GetMachineName
	I0703 23:05:09.941282   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:09.943686   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.944069   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:09.944095   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.944257   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:09.944455   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:09.944603   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:09.944740   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:09.944877   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:05:09.945060   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:05:09.945071   27242 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-856893 && echo "ha-856893" | sudo tee /etc/hostname
	I0703 23:05:10.067286   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-856893
	
	I0703 23:05:10.067311   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:10.069961   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.070287   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.070308   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.070498   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:10.070682   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:10.070896   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:10.071050   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:10.071212   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:05:10.071414   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:05:10.071431   27242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-856893' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-856893/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-856893' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 23:05:10.189893   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:05:10.189928   27242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0703 23:05:10.189959   27242 buildroot.go:174] setting up certificates
	I0703 23:05:10.189968   27242 provision.go:84] configureAuth start
	I0703 23:05:10.189976   27242 main.go:141] libmachine: (ha-856893) Calling .GetMachineName
	I0703 23:05:10.190275   27242 main.go:141] libmachine: (ha-856893) Calling .GetIP
	I0703 23:05:10.193226   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.193602   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.193625   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.193795   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:10.195779   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.196097   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.196119   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.196195   27242 provision.go:143] copyHostCerts
	I0703 23:05:10.196234   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:05:10.196277   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0703 23:05:10.196304   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:05:10.196383   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0703 23:05:10.196499   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:05:10.196528   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0703 23:05:10.196537   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:05:10.196576   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0703 23:05:10.196682   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:05:10.196702   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0703 23:05:10.196708   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:05:10.196732   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0703 23:05:10.196780   27242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.ha-856893 san=[127.0.0.1 192.168.39.172 ha-856893 localhost minikube]
	I0703 23:05:10.449385   27242 provision.go:177] copyRemoteCerts
	I0703 23:05:10.449453   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:05:10.449480   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:10.452086   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.452311   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.452338   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.452543   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:10.452743   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:10.452885   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:10.452991   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:05:10.538502   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0703 23:05:10.538569   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0703 23:05:10.565459   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0703 23:05:10.565517   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0703 23:05:10.591713   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0703 23:05:10.591782   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0703 23:05:10.620534   27242 provision.go:87] duration metric: took 430.554362ms to configureAuth
	I0703 23:05:10.620571   27242 buildroot.go:189] setting minikube options for container-runtime
	I0703 23:05:10.620750   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:05:10.620845   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:10.623353   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.623771   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.623799   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.623935   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:10.624152   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:10.624325   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:10.624439   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:10.624606   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:05:10.624765   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:05:10.624779   27242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0703 23:05:10.904599   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0703 23:05:10.904631   27242 main.go:141] libmachine: Checking connection to Docker...
	I0703 23:05:10.904641   27242 main.go:141] libmachine: (ha-856893) Calling .GetURL
	I0703 23:05:10.905870   27242 main.go:141] libmachine: (ha-856893) DBG | Using libvirt version 6000000
	I0703 23:05:10.907791   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.908127   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.908151   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.908372   27242 main.go:141] libmachine: Docker is up and running!
	I0703 23:05:10.908390   27242 main.go:141] libmachine: Reticulating splines...
	I0703 23:05:10.908398   27242 client.go:171] duration metric: took 21.514686715s to LocalClient.Create
	I0703 23:05:10.908429   27242 start.go:167] duration metric: took 21.514763646s to libmachine.API.Create "ha-856893"
	I0703 23:05:10.908441   27242 start.go:293] postStartSetup for "ha-856893" (driver="kvm2")
	I0703 23:05:10.908451   27242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 23:05:10.908484   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:10.908725   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 23:05:10.908748   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:10.910851   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.911184   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.911225   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.911349   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:10.911538   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:10.911687   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:10.911796   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:05:10.994829   27242 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 23:05:10.999699   27242 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 23:05:10.999723   27242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0703 23:05:10.999787   27242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0703 23:05:10.999867   27242 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0703 23:05:10.999903   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /etc/ssl/certs/165742.pem
	I0703 23:05:11.000007   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0703 23:05:11.010870   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:05:11.041611   27242 start.go:296] duration metric: took 133.157203ms for postStartSetup
	I0703 23:05:11.041689   27242 main.go:141] libmachine: (ha-856893) Calling .GetConfigRaw
	I0703 23:05:11.042230   27242 main.go:141] libmachine: (ha-856893) Calling .GetIP
	I0703 23:05:11.045028   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.045417   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:11.045449   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.045801   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:05:11.046044   27242 start.go:128] duration metric: took 21.670585889s to createHost
	I0703 23:05:11.046071   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:11.048601   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.048906   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:11.048929   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.049092   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:11.049289   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:11.049445   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:11.049641   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:11.049848   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:05:11.050029   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:05:11.050041   27242 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0703 23:05:11.156804   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720047911.130080211
	
	I0703 23:05:11.156825   27242 fix.go:216] guest clock: 1720047911.130080211
	I0703 23:05:11.156833   27242 fix.go:229] Guest: 2024-07-03 23:05:11.130080211 +0000 UTC Remote: 2024-07-03 23:05:11.046058241 +0000 UTC m=+21.776314180 (delta=84.02197ms)
	I0703 23:05:11.156877   27242 fix.go:200] guest clock delta is within tolerance: 84.02197ms
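The fix.go lines above compare the guest's clock against the host's and accept the skew when it stays under a tolerance. A minimal Go sketch of that kind of check (the 2s tolerance and the helper name are assumptions for illustration, not minikube's actual code):

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports the absolute skew between guest and host clocks and
// whether it falls within the given tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(84 * time.Millisecond) // e.g. the ~84ms delta seen in the log
	if d, ok := clockDeltaOK(guest, host, 2*time.Second); ok {
		fmt.Printf("guest clock delta is within tolerance: %v\n", d)
	} else {
		fmt.Printf("guest clock delta too large: %v\n", d)
	}
}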
	I0703 23:05:11.156884   27242 start.go:83] releasing machines lock for "ha-856893", held for 21.781493772s
	I0703 23:05:11.156910   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:11.157171   27242 main.go:141] libmachine: (ha-856893) Calling .GetIP
	I0703 23:05:11.159661   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.159989   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:11.160008   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.160187   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:11.160682   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:11.160849   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:11.160925   27242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 23:05:11.160975   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:11.161091   27242 ssh_runner.go:195] Run: cat /version.json
	I0703 23:05:11.161115   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:11.163570   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.163644   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.163933   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:11.163969   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:11.163996   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.164083   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.164233   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:11.164361   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:11.164513   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:11.165190   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:11.165203   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:11.165445   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:11.165456   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:05:11.165594   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:05:11.264903   27242 ssh_runner.go:195] Run: systemctl --version
	I0703 23:05:11.271362   27242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0703 23:05:11.431766   27242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0703 23:05:11.437888   27242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 23:05:11.437960   27242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 23:05:11.456204   27242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0703 23:05:11.456228   27242 start.go:494] detecting cgroup driver to use...
	I0703 23:05:11.456282   27242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 23:05:11.478288   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 23:05:11.496504   27242 docker.go:217] disabling cri-docker service (if available) ...
	I0703 23:05:11.496546   27242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0703 23:05:11.513312   27242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0703 23:05:11.529272   27242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0703 23:05:11.651791   27242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0703 23:05:11.833740   27242 docker.go:233] disabling docker service ...
	I0703 23:05:11.833798   27242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0703 23:05:11.850082   27242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0703 23:05:11.864945   27242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0703 23:05:11.993322   27242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0703 23:05:12.121368   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0703 23:05:12.136604   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 23:05:12.156727   27242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0703 23:05:12.156790   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.168812   27242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0703 23:05:12.168881   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.181117   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.193084   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.204859   27242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 23:05:12.217389   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.229489   27242 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.248248   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
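The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to cgroupfs, and adjust the default sysctls. A rough Go equivalent of the first two line-level substitutions (the patterns mirror the logged sed expressions; doing it in-process like this is an illustration, not minikube's implementation):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same line-level substitutions the logged
// sed commands perform: pin pause_image and force the cgroupfs manager.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in))
}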
	I0703 23:05:12.260054   27242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 23:05:12.270988   27242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0703 23:05:12.271050   27242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0703 23:05:12.285900   27242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0703 23:05:12.296588   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:05:12.421931   27242 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0703 23:05:12.567694   27242 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0703 23:05:12.567771   27242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0703 23:05:12.573160   27242 start.go:562] Will wait 60s for crictl version
	I0703 23:05:12.573227   27242 ssh_runner.go:195] Run: which crictl
	I0703 23:05:12.577204   27242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 23:05:12.618785   27242 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0703 23:05:12.618858   27242 ssh_runner.go:195] Run: crio --version
	I0703 23:05:12.648983   27242 ssh_runner.go:195] Run: crio --version
	I0703 23:05:12.680410   27242 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0703 23:05:12.681677   27242 main.go:141] libmachine: (ha-856893) Calling .GetIP
	I0703 23:05:12.684268   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:12.684586   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:12.684615   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:12.684826   27242 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0703 23:05:12.689291   27242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:05:12.702754   27242 kubeadm.go:877] updating cluster {Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0703 23:05:12.702853   27242 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:05:12.702897   27242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:05:12.737089   27242 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0703 23:05:12.737156   27242 ssh_runner.go:195] Run: which lz4
	I0703 23:05:12.741174   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0703 23:05:12.741275   27242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0703 23:05:12.745594   27242 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0703 23:05:12.745632   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0703 23:05:14.273244   27242 crio.go:462] duration metric: took 1.531990406s to copy over tarball
	I0703 23:05:14.273329   27242 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0703 23:05:16.532872   27242 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.259515995s)
	I0703 23:05:16.532901   27242 crio.go:469] duration metric: took 2.259629155s to extract the tarball
	I0703 23:05:16.532912   27242 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0703 23:05:16.571634   27242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:05:16.617842   27242 crio.go:514] all images are preloaded for cri-o runtime.
	I0703 23:05:16.617868   27242 cache_images.go:84] Images are preloaded, skipping loading
	I0703 23:05:16.617876   27242 kubeadm.go:928] updating node { 192.168.39.172 8443 v1.30.2 crio true true} ...
	I0703 23:05:16.617964   27242 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-856893 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0703 23:05:16.618023   27242 ssh_runner.go:195] Run: crio config
	I0703 23:05:16.664162   27242 cni.go:84] Creating CNI manager for ""
	I0703 23:05:16.664181   27242 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0703 23:05:16.664189   27242 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0703 23:05:16.664210   27242 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-856893 NodeName:ha-856893 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0703 23:05:16.664387   27242 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-856893"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0703 23:05:16.664413   27242 kube-vip.go:115] generating kube-vip config ...
	I0703 23:05:16.664474   27242 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0703 23:05:16.682379   27242 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0703 23:05:16.682508   27242 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0703 23:05:16.682575   27242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0703 23:05:16.693673   27242 binaries.go:44] Found k8s binaries, skipping transfer
	I0703 23:05:16.693753   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0703 23:05:16.704380   27242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0703 23:05:16.722634   27242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 23:05:16.740879   27242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0703 23:05:16.759081   27242 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0703 23:05:16.777539   27242 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0703 23:05:16.781905   27242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
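The bash one-liner above drops any stale control-plane.minikube.internal line from /etc/hosts and appends the current VIP entry. A small Go sketch of the same filter-and-append idea (the IP and hostname come from the log; the helper itself is illustrative):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes lines ending in "<TAB>hostname" and appends a
// fresh "ip<TAB>hostname" entry, mirroring the logged bash one-liner.
func upsertHostsEntry(hosts, ip, hostname string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop stale entry
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+hostname)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.99\tcontrol-plane.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.39.254", "control-plane.minikube.internal"))
}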
	I0703 23:05:16.795594   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:05:16.932173   27242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:05:16.960438   27242 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893 for IP: 192.168.39.172
	I0703 23:05:16.960457   27242 certs.go:194] generating shared ca certs ...
	I0703 23:05:16.960471   27242 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:16.960625   27242 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0703 23:05:16.960687   27242 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0703 23:05:16.960701   27242 certs.go:256] generating profile certs ...
	I0703 23:05:16.960769   27242 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key
	I0703 23:05:16.960789   27242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.crt with IP's: []
	I0703 23:05:17.180299   27242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.crt ...
	I0703 23:05:17.180327   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.crt: {Name:mked142f33e96cc69e07cbef413ceae8eaadb6ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:17.180495   27242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key ...
	I0703 23:05:17.180505   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key: {Name:mkda59ba7700af447f9573712b80d771070e40e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:17.180580   27242 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.7d0f4c89
	I0703 23:05:17.180594   27242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.7d0f4c89 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.172 192.168.39.254]
	I0703 23:05:17.268855   27242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.7d0f4c89 ...
	I0703 23:05:17.268884   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.7d0f4c89: {Name:mk564c544d24be22e8d81f70b99af5878e84b732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:17.269036   27242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.7d0f4c89 ...
	I0703 23:05:17.269054   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.7d0f4c89: {Name:mk2b21d824f1f5ef781a1bb28b7c84b56246aa84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:17.269126   27242 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.7d0f4c89 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt
	I0703 23:05:17.269222   27242 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.7d0f4c89 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key
	I0703 23:05:17.269280   27242 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key
	I0703 23:05:17.269296   27242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt with IP's: []
	I0703 23:05:17.337820   27242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt ...
	I0703 23:05:17.337850   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt: {Name:mk56d081fd7b738fa50b488ebdec0c915931f1d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:17.338007   27242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key ...
	I0703 23:05:17.338017   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key: {Name:mk1bfcc2bc169c4499f89205b355a5beb44be061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:17.338083   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0703 23:05:17.338101   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0703 23:05:17.338111   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0703 23:05:17.338124   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0703 23:05:17.338136   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0703 23:05:17.338155   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0703 23:05:17.338167   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0703 23:05:17.338184   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0703 23:05:17.338228   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0703 23:05:17.338258   27242 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0703 23:05:17.338267   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0703 23:05:17.338290   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0703 23:05:17.338309   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0703 23:05:17.338334   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0703 23:05:17.338368   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:05:17.338396   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /usr/share/ca-certificates/165742.pem
	I0703 23:05:17.338409   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:05:17.338422   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem -> /usr/share/ca-certificates/16574.pem
	I0703 23:05:17.338943   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 23:05:17.367294   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 23:05:17.394625   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 23:05:17.421449   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0703 23:05:17.448364   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0703 23:05:17.478967   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0703 23:05:17.507381   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 23:05:17.535692   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0703 23:05:17.564746   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0703 23:05:17.592808   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 23:05:17.620310   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0703 23:05:17.648069   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0703 23:05:17.666458   27242 ssh_runner.go:195] Run: openssl version
	I0703 23:05:17.673016   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0703 23:05:17.685065   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0703 23:05:17.690329   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0703 23:05:17.690403   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0703 23:05:17.696993   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0703 23:05:17.709145   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 23:05:17.721321   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:05:17.726475   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:05:17.726555   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:05:17.732930   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0703 23:05:17.744956   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0703 23:05:17.759349   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0703 23:05:17.769931   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0703 23:05:17.769997   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0703 23:05:17.777908   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
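The openssl -hash / ln -fs pairs above install each CA certificate under its subject-hash name (e.g. b5213941.0) so OpenSSL can locate it during verification. A sketch of the same convention in Go, shelling out to openssl for the hash (assumes openssl is on PATH and write access to the certs directory; error handling trimmed for brevity):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into certsDir under "<subject-hash>.0",
// the lookup name OpenSSL uses, matching the logged ln -fs commands.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("created", link)
}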
	I0703 23:05:17.793803   27242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 23:05:17.798683   27242 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0703 23:05:17.798746   27242 kubeadm.go:391] StartCluster: {Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:05:17.798856   27242 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0703 23:05:17.798950   27242 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0703 23:05:17.857895   27242 cri.go:89] found id: ""
	I0703 23:05:17.857958   27242 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0703 23:05:17.869751   27242 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0703 23:05:17.881191   27242 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0703 23:05:17.892752   27242 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0703 23:05:17.892774   27242 kubeadm.go:156] found existing configuration files:
	
	I0703 23:05:17.892815   27242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0703 23:05:17.904127   27242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0703 23:05:17.904196   27242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0703 23:05:17.916159   27242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0703 23:05:17.927292   27242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0703 23:05:17.927363   27242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0703 23:05:17.938640   27242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0703 23:05:17.949163   27242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0703 23:05:17.949218   27242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0703 23:05:17.960636   27242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0703 23:05:17.971220   27242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0703 23:05:17.971276   27242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0703 23:05:17.982313   27242 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0703 23:05:18.243554   27242 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0703 23:05:28.408397   27242 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0703 23:05:28.408485   27242 kubeadm.go:309] [preflight] Running pre-flight checks
	I0703 23:05:28.408605   27242 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0703 23:05:28.408745   27242 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0703 23:05:28.408866   27242 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0703 23:05:28.408942   27242 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0703 23:05:28.410573   27242 out.go:204]   - Generating certificates and keys ...
	I0703 23:05:28.410647   27242 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0703 23:05:28.410731   27242 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0703 23:05:28.410801   27242 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0703 23:05:28.410850   27242 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0703 23:05:28.410900   27242 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0703 23:05:28.410954   27242 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0703 23:05:28.411002   27242 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0703 23:05:28.411118   27242 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-856893 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I0703 23:05:28.411163   27242 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0703 23:05:28.411315   27242 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-856893 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I0703 23:05:28.411421   27242 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0703 23:05:28.411509   27242 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0703 23:05:28.411572   27242 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0703 23:05:28.411648   27242 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0703 23:05:28.411722   27242 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0703 23:05:28.411796   27242 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0703 23:05:28.411892   27242 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0703 23:05:28.411981   27242 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0703 23:05:28.412064   27242 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0703 23:05:28.412191   27242 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0703 23:05:28.412266   27242 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0703 23:05:28.413911   27242 out.go:204]   - Booting up control plane ...
	I0703 23:05:28.414019   27242 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0703 23:05:28.414100   27242 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0703 23:05:28.414173   27242 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0703 23:05:28.414325   27242 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0703 23:05:28.414456   27242 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0703 23:05:28.414501   27242 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0703 23:05:28.414606   27242 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0703 23:05:28.414662   27242 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0703 23:05:28.414710   27242 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.527133ms
	I0703 23:05:28.414781   27242 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0703 23:05:28.414827   27242 kubeadm.go:309] [api-check] The API server is healthy after 6.123038103s
	I0703 23:05:28.414915   27242 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0703 23:05:28.415058   27242 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0703 23:05:28.415150   27242 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0703 23:05:28.415339   27242 kubeadm.go:309] [mark-control-plane] Marking the node ha-856893 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0703 23:05:28.415422   27242 kubeadm.go:309] [bootstrap-token] Using token: 12qvkr.qb869phsnq1wz0rf
	I0703 23:05:28.416767   27242 out.go:204]   - Configuring RBAC rules ...
	I0703 23:05:28.416884   27242 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0703 23:05:28.416965   27242 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0703 23:05:28.417123   27242 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0703 23:05:28.417274   27242 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0703 23:05:28.417401   27242 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0703 23:05:28.417511   27242 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0703 23:05:28.417640   27242 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0703 23:05:28.417710   27242 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0703 23:05:28.417779   27242 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0703 23:05:28.417788   27242 kubeadm.go:309] 
	I0703 23:05:28.417861   27242 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0703 23:05:28.417870   27242 kubeadm.go:309] 
	I0703 23:05:28.417956   27242 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0703 23:05:28.417970   27242 kubeadm.go:309] 
	I0703 23:05:28.418024   27242 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0703 23:05:28.418077   27242 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0703 23:05:28.418120   27242 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0703 23:05:28.418126   27242 kubeadm.go:309] 
	I0703 23:05:28.418170   27242 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0703 23:05:28.418175   27242 kubeadm.go:309] 
	I0703 23:05:28.418218   27242 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0703 23:05:28.418224   27242 kubeadm.go:309] 
	I0703 23:05:28.418276   27242 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0703 23:05:28.418364   27242 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0703 23:05:28.418464   27242 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0703 23:05:28.418474   27242 kubeadm.go:309] 
	I0703 23:05:28.418584   27242 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0703 23:05:28.418691   27242 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0703 23:05:28.418700   27242 kubeadm.go:309] 
	I0703 23:05:28.418808   27242 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 12qvkr.qb869phsnq1wz0rf \
	I0703 23:05:28.418931   27242 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 \
	I0703 23:05:28.418963   27242 kubeadm.go:309] 	--control-plane 
	I0703 23:05:28.418970   27242 kubeadm.go:309] 
	I0703 23:05:28.419071   27242 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0703 23:05:28.419080   27242 kubeadm.go:309] 
	I0703 23:05:28.419141   27242 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 12qvkr.qb869phsnq1wz0rf \
	I0703 23:05:28.419289   27242 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 
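The sha256:... value in the join commands above is kubeadm's discovery hash: SHA-256 over the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A short Go sketch that computes it from ca.crt (illustrative; kubeadm performs the equivalent computation internally):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash returns the value used by --discovery-token-ca-cert-hash:
// sha256 over the CA certificate's DER-encoded SubjectPublicKeyInfo.
func caCertHash(caPEM []byte) (string, error) {
	block, _ := pem.Decode(caPEM)
	if block == nil {
		return "", fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	hash, _ := caCertHash(pemBytes)
	fmt.Println(hash)
}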
	I0703 23:05:28.419304   27242 cni.go:84] Creating CNI manager for ""
	I0703 23:05:28.419312   27242 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0703 23:05:28.420892   27242 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0703 23:05:28.422220   27242 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0703 23:05:28.428330   27242 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0703 23:05:28.428351   27242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0703 23:05:28.449233   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0703 23:05:28.863177   27242 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0703 23:05:28.863315   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:28.863314   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-856893 minikube.k8s.io/updated_at=2024_07_03T23_05_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e minikube.k8s.io/name=ha-856893 minikube.k8s.io/primary=true
	I0703 23:05:28.927963   27242 ops.go:34] apiserver oom_adj: -16
	I0703 23:05:29.030917   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:29.531769   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:30.031402   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:30.531013   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:31.031167   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:31.531765   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:32.031213   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:32.531657   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:33.031757   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:33.531759   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:34.031901   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:34.531406   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:35.032024   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:35.531604   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:36.031112   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:36.531193   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:37.031109   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:37.531156   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:38.031136   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:38.531321   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:39.031594   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:39.531996   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:40.031087   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:40.157208   27242 kubeadm.go:1107] duration metric: took 11.293952239s to wait for elevateKubeSystemPrivileges
	W0703 23:05:40.157241   27242 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0703 23:05:40.157249   27242 kubeadm.go:393] duration metric: took 22.358506374s to StartCluster
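
The repeated "kubectl get sa default" calls between 23:05:29 and 23:05:40 are the elevateKubeSystemPrivileges wait: the node-local kubectl is polled until the default service account exists before the cluster-admin binding is created. A minimal Go sketch of that polling pattern, for illustration only (this is not minikube's code; the binary and kubeconfig paths are copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls "kubectl get sa default" until it succeeds or the
    // deadline passes, the same shape as the retry loop visible in the log above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil // the default service account exists now
            }
            time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between attempts
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        err := waitForDefaultSA("/var/lib/minikube/binaries/v1.30.2/kubectl", "/var/lib/minikube/kubeconfig", 2*time.Minute)
        fmt.Println(err)
    }
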
	I0703 23:05:40.157267   27242 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:40.157330   27242 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:05:40.157993   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:40.158199   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0703 23:05:40.158198   27242 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:05:40.158313   27242 start.go:240] waiting for startup goroutines ...
	I0703 23:05:40.158221   27242 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0703 23:05:40.158334   27242 addons.go:69] Setting storage-provisioner=true in profile "ha-856893"
	I0703 23:05:40.158356   27242 addons.go:234] Setting addon storage-provisioner=true in "ha-856893"
	I0703 23:05:40.158384   27242 host.go:66] Checking if "ha-856893" exists ...
	I0703 23:05:40.158405   27242 addons.go:69] Setting default-storageclass=true in profile "ha-856893"
	I0703 23:05:40.158434   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:05:40.158449   27242 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-856893"
	I0703 23:05:40.158795   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:05:40.158820   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:05:40.158913   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:05:40.158949   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:05:40.173903   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
	I0703 23:05:40.174071   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43005
	I0703 23:05:40.174340   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:05:40.174543   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:05:40.174803   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:05:40.174833   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:05:40.175065   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:05:40.175086   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:05:40.175156   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:05:40.175396   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:05:40.175549   27242 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:05:40.175675   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:05:40.175698   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:05:40.177715   27242 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:05:40.177916   27242 kapi.go:59] client config for ha-856893: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.crt", KeyFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key", CAFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfd900), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0703 23:05:40.178324   27242 cert_rotation.go:137] Starting client certificate rotation controller
	I0703 23:05:40.178475   27242 addons.go:234] Setting addon default-storageclass=true in "ha-856893"
	I0703 23:05:40.178516   27242 host.go:66] Checking if "ha-856893" exists ...
	I0703 23:05:40.178892   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:05:40.178922   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:05:40.191846   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38455
	I0703 23:05:40.192316   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:05:40.192861   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:05:40.192886   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:05:40.193260   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:05:40.193465   27242 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:05:40.194323   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I0703 23:05:40.194798   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:05:40.195263   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:05:40.195279   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:05:40.195308   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:40.195583   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:05:40.196026   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:05:40.196053   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:05:40.197291   27242 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0703 23:05:40.198820   27242 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0703 23:05:40.198841   27242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0703 23:05:40.198867   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:40.202098   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:40.202535   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:40.202559   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:40.202726   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:40.202940   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:40.203083   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:40.203211   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:05:40.211653   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40577
	I0703 23:05:40.212071   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:05:40.212561   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:05:40.212584   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:05:40.212866   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:05:40.213033   27242 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:05:40.214663   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:40.214886   27242 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0703 23:05:40.214899   27242 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0703 23:05:40.214912   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:40.217534   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:40.217883   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:40.217908   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:40.218063   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:40.218258   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:40.218411   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:40.218546   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:05:40.267153   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0703 23:05:40.358079   27242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0703 23:05:40.358732   27242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0703 23:05:40.781574   27242 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
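
The long sed pipeline at 23:05:40.267 rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.39.1), which is what the "host record injected" line confirms. A rough Go equivalent of just the hosts-block insertion, shown only to make the transformation explicit (the sample Corefile below is an assumption, not taken from this run):

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a hosts{} stanza ahead of the forward directive,
    // mirroring the sed expression the log runs against the coredns ConfigMap.
    func injectHostRecord(corefile, gatewayIP string) string {
        hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", gatewayIP)
        var out []string
        for _, line := range strings.Split(corefile, "\n") {
            if strings.Contains(line, "forward . /etc/resolv.conf") {
                out = append(out, hosts)
            }
            out = append(out, line)
        }
        return strings.Join(out, "\n")
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}"
        fmt.Println(injectHostRecord(corefile, "192.168.39.1"))
    }
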
	I0703 23:05:41.167935   27242 main.go:141] libmachine: Making call to close driver server
	I0703 23:05:41.167961   27242 main.go:141] libmachine: (ha-856893) Calling .Close
	I0703 23:05:41.168003   27242 main.go:141] libmachine: Making call to close driver server
	I0703 23:05:41.168024   27242 main.go:141] libmachine: (ha-856893) Calling .Close
	I0703 23:05:41.168442   27242 main.go:141] libmachine: (ha-856893) DBG | Closing plugin on server side
	I0703 23:05:41.168453   27242 main.go:141] libmachine: Successfully made call to close driver server
	I0703 23:05:41.168444   27242 main.go:141] libmachine: (ha-856893) DBG | Closing plugin on server side
	I0703 23:05:41.168463   27242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 23:05:41.168467   27242 main.go:141] libmachine: Successfully made call to close driver server
	I0703 23:05:41.168491   27242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 23:05:41.168500   27242 main.go:141] libmachine: Making call to close driver server
	I0703 23:05:41.168507   27242 main.go:141] libmachine: (ha-856893) Calling .Close
	I0703 23:05:41.168472   27242 main.go:141] libmachine: Making call to close driver server
	I0703 23:05:41.168551   27242 main.go:141] libmachine: (ha-856893) Calling .Close
	I0703 23:05:41.168750   27242 main.go:141] libmachine: (ha-856893) DBG | Closing plugin on server side
	I0703 23:05:41.168769   27242 main.go:141] libmachine: Successfully made call to close driver server
	I0703 23:05:41.168779   27242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 23:05:41.168794   27242 main.go:141] libmachine: Successfully made call to close driver server
	I0703 23:05:41.168802   27242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 23:05:41.168915   27242 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0703 23:05:41.168924   27242 round_trippers.go:469] Request Headers:
	I0703 23:05:41.168933   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:05:41.168937   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:05:41.179174   27242 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0703 23:05:41.179856   27242 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0703 23:05:41.179872   27242 round_trippers.go:469] Request Headers:
	I0703 23:05:41.179901   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:05:41.179907   27242 round_trippers.go:473]     Content-Type: application/json
	I0703 23:05:41.179911   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:05:41.184900   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:05:41.185231   27242 main.go:141] libmachine: Making call to close driver server
	I0703 23:05:41.185253   27242 main.go:141] libmachine: (ha-856893) Calling .Close
	I0703 23:05:41.185557   27242 main.go:141] libmachine: Successfully made call to close driver server
	I0703 23:05:41.185577   27242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 23:05:41.185585   27242 main.go:141] libmachine: (ha-856893) DBG | Closing plugin on server side
	I0703 23:05:41.187828   27242 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0703 23:05:41.188847   27242 addons.go:510] duration metric: took 1.03063116s for enable addons: enabled=[storage-provisioner default-storageclass]
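
The addon flow above is: copy the manifest to /etc/kubernetes/addons on the node (the "scp memory -->" lines), then apply it with the node-local kubectl under KUBECONFIG. A short sketch of that shape, where the Runner interface is a hypothetical stand-in for minikube's ssh_runner, not its real API:

    package addons

    import (
        "fmt"
        "path"
    )

    // Runner abstracts "run a command on the node" and "copy bytes to a remote
    // path"; it is a placeholder for the SSH runner seen in the log.
    type Runner interface {
        Run(cmd string) error
        Copy(dst string, data []byte) error
    }

    // applyAddon copies a manifest into /etc/kubernetes/addons and applies it,
    // as the log shows for storage-provisioner.yaml and storageclass.yaml.
    func applyAddon(r Runner, name string, manifest []byte) error {
        dst := path.Join("/etc/kubernetes/addons", name)
        if err := r.Copy(dst, manifest); err != nil {
            return fmt.Errorf("copy %s: %w", name, err)
        }
        cmd := fmt.Sprintf("sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f %s", dst)
        return r.Run(cmd)
    }
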
	I0703 23:05:41.188886   27242 start.go:245] waiting for cluster config update ...
	I0703 23:05:41.188901   27242 start.go:254] writing updated cluster config ...
	I0703 23:05:41.190310   27242 out.go:177] 
	I0703 23:05:41.191599   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:05:41.191664   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:05:41.193011   27242 out.go:177] * Starting "ha-856893-m02" control-plane node in "ha-856893" cluster
	I0703 23:05:41.194050   27242 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:05:41.194075   27242 cache.go:56] Caching tarball of preloaded images
	I0703 23:05:41.194179   27242 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:05:41.194194   27242 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 23:05:41.194269   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:05:41.194484   27242 start.go:360] acquireMachinesLock for ha-856893-m02: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 23:05:41.194535   27242 start.go:364] duration metric: took 29.239µs to acquireMachinesLock for "ha-856893-m02"
	I0703 23:05:41.194552   27242 start.go:93] Provisioning new machine with config: &{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:05:41.194614   27242 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0703 23:05:41.195906   27242 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0703 23:05:41.195988   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:05:41.196019   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:05:41.210406   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0703 23:05:41.210841   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:05:41.211288   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:05:41.211309   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:05:41.211576   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:05:41.211756   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetMachineName
	I0703 23:05:41.211861   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:05:41.212057   27242 start.go:159] libmachine.API.Create for "ha-856893" (driver="kvm2")
	I0703 23:05:41.212087   27242 client.go:168] LocalClient.Create starting
	I0703 23:05:41.212116   27242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem
	I0703 23:05:41.212148   27242 main.go:141] libmachine: Decoding PEM data...
	I0703 23:05:41.212165   27242 main.go:141] libmachine: Parsing certificate...
	I0703 23:05:41.212230   27242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem
	I0703 23:05:41.212264   27242 main.go:141] libmachine: Decoding PEM data...
	I0703 23:05:41.212288   27242 main.go:141] libmachine: Parsing certificate...
	I0703 23:05:41.212315   27242 main.go:141] libmachine: Running pre-create checks...
	I0703 23:05:41.212327   27242 main.go:141] libmachine: (ha-856893-m02) Calling .PreCreateCheck
	I0703 23:05:41.212497   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetConfigRaw
	I0703 23:05:41.212940   27242 main.go:141] libmachine: Creating machine...
	I0703 23:05:41.212958   27242 main.go:141] libmachine: (ha-856893-m02) Calling .Create
	I0703 23:05:41.213096   27242 main.go:141] libmachine: (ha-856893-m02) Creating KVM machine...
	I0703 23:05:41.214567   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found existing default KVM network
	I0703 23:05:41.214736   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found existing private KVM network mk-ha-856893
	I0703 23:05:41.214862   27242 main.go:141] libmachine: (ha-856893-m02) Setting up store path in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02 ...
	I0703 23:05:41.214887   27242 main.go:141] libmachine: (ha-856893-m02) Building disk image from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0703 23:05:41.214947   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:41.214842   27608 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:05:41.215063   27242 main.go:141] libmachine: (ha-856893-m02) Downloading /home/jenkins/minikube-integration/18998-9396/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso...
	I0703 23:05:41.436860   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:41.436749   27608 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa...
	I0703 23:05:41.523744   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:41.523612   27608 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/ha-856893-m02.rawdisk...
	I0703 23:05:41.523793   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Writing magic tar header
	I0703 23:05:41.523828   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Writing SSH key tar header
	I0703 23:05:41.523850   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:41.523749   27608 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02 ...
	I0703 23:05:41.523869   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02
	I0703 23:05:41.523955   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines
	I0703 23:05:41.523978   27242 main.go:141] libmachine: (ha-856893-m02) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02 (perms=drwx------)
	I0703 23:05:41.523990   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:05:41.524009   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396
	I0703 23:05:41.524021   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0703 23:05:41.524031   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home/jenkins
	I0703 23:05:41.524041   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home
	I0703 23:05:41.524065   27242 main.go:141] libmachine: (ha-856893-m02) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines (perms=drwxr-xr-x)
	I0703 23:05:41.524084   27242 main.go:141] libmachine: (ha-856893-m02) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube (perms=drwxr-xr-x)
	I0703 23:05:41.524093   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Skipping /home - not owner
	I0703 23:05:41.524132   27242 main.go:141] libmachine: (ha-856893-m02) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396 (perms=drwxrwxr-x)
	I0703 23:05:41.524151   27242 main.go:141] libmachine: (ha-856893-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0703 23:05:41.524184   27242 main.go:141] libmachine: (ha-856893-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
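
The "Checking permissions" / "Setting executable bit" lines above walk from the new machine directory up toward /home, making each ancestor the user owns traversable (and skipping /home, which jenkins does not own), so libvirt can reach the disk image. A standalone sketch of that walk, illustrative rather than the kvm2 driver's actual code:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // ensureExecutable walks from dir up to stop, adding execute bits on each
    // directory; failures (e.g. directories we do not own) are skipped.
    func ensureExecutable(dir, stop string) error {
        for {
            info, err := os.Stat(dir)
            if err != nil {
                return err
            }
            if err := os.Chmod(dir, info.Mode().Perm()|0o111); err != nil {
                fmt.Printf("Skipping %s: %v\n", dir, err)
            }
            if dir == stop || dir == string(filepath.Separator) {
                return nil
            }
            dir = filepath.Dir(dir)
        }
    }

    func main() {
        home, _ := os.UserHomeDir()
        _ = ensureExecutable(filepath.Join(home, ".minikube", "machines"), home)
    }
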
	I0703 23:05:41.524203   27242 main.go:141] libmachine: (ha-856893-m02) Creating domain...
	I0703 23:05:41.525176   27242 main.go:141] libmachine: (ha-856893-m02) define libvirt domain using xml: 
	I0703 23:05:41.525194   27242 main.go:141] libmachine: (ha-856893-m02) <domain type='kvm'>
	I0703 23:05:41.525204   27242 main.go:141] libmachine: (ha-856893-m02)   <name>ha-856893-m02</name>
	I0703 23:05:41.525211   27242 main.go:141] libmachine: (ha-856893-m02)   <memory unit='MiB'>2200</memory>
	I0703 23:05:41.525218   27242 main.go:141] libmachine: (ha-856893-m02)   <vcpu>2</vcpu>
	I0703 23:05:41.525225   27242 main.go:141] libmachine: (ha-856893-m02)   <features>
	I0703 23:05:41.525234   27242 main.go:141] libmachine: (ha-856893-m02)     <acpi/>
	I0703 23:05:41.525250   27242 main.go:141] libmachine: (ha-856893-m02)     <apic/>
	I0703 23:05:41.525262   27242 main.go:141] libmachine: (ha-856893-m02)     <pae/>
	I0703 23:05:41.525274   27242 main.go:141] libmachine: (ha-856893-m02)     
	I0703 23:05:41.525286   27242 main.go:141] libmachine: (ha-856893-m02)   </features>
	I0703 23:05:41.525297   27242 main.go:141] libmachine: (ha-856893-m02)   <cpu mode='host-passthrough'>
	I0703 23:05:41.525308   27242 main.go:141] libmachine: (ha-856893-m02)   
	I0703 23:05:41.525316   27242 main.go:141] libmachine: (ha-856893-m02)   </cpu>
	I0703 23:05:41.525325   27242 main.go:141] libmachine: (ha-856893-m02)   <os>
	I0703 23:05:41.525336   27242 main.go:141] libmachine: (ha-856893-m02)     <type>hvm</type>
	I0703 23:05:41.525356   27242 main.go:141] libmachine: (ha-856893-m02)     <boot dev='cdrom'/>
	I0703 23:05:41.525376   27242 main.go:141] libmachine: (ha-856893-m02)     <boot dev='hd'/>
	I0703 23:05:41.525387   27242 main.go:141] libmachine: (ha-856893-m02)     <bootmenu enable='no'/>
	I0703 23:05:41.525398   27242 main.go:141] libmachine: (ha-856893-m02)   </os>
	I0703 23:05:41.525409   27242 main.go:141] libmachine: (ha-856893-m02)   <devices>
	I0703 23:05:41.525425   27242 main.go:141] libmachine: (ha-856893-m02)     <disk type='file' device='cdrom'>
	I0703 23:05:41.525442   27242 main.go:141] libmachine: (ha-856893-m02)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/boot2docker.iso'/>
	I0703 23:05:41.525453   27242 main.go:141] libmachine: (ha-856893-m02)       <target dev='hdc' bus='scsi'/>
	I0703 23:05:41.525461   27242 main.go:141] libmachine: (ha-856893-m02)       <readonly/>
	I0703 23:05:41.525468   27242 main.go:141] libmachine: (ha-856893-m02)     </disk>
	I0703 23:05:41.525474   27242 main.go:141] libmachine: (ha-856893-m02)     <disk type='file' device='disk'>
	I0703 23:05:41.525481   27242 main.go:141] libmachine: (ha-856893-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0703 23:05:41.525510   27242 main.go:141] libmachine: (ha-856893-m02)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/ha-856893-m02.rawdisk'/>
	I0703 23:05:41.525531   27242 main.go:141] libmachine: (ha-856893-m02)       <target dev='hda' bus='virtio'/>
	I0703 23:05:41.525547   27242 main.go:141] libmachine: (ha-856893-m02)     </disk>
	I0703 23:05:41.525564   27242 main.go:141] libmachine: (ha-856893-m02)     <interface type='network'>
	I0703 23:05:41.525578   27242 main.go:141] libmachine: (ha-856893-m02)       <source network='mk-ha-856893'/>
	I0703 23:05:41.525589   27242 main.go:141] libmachine: (ha-856893-m02)       <model type='virtio'/>
	I0703 23:05:41.525602   27242 main.go:141] libmachine: (ha-856893-m02)     </interface>
	I0703 23:05:41.525613   27242 main.go:141] libmachine: (ha-856893-m02)     <interface type='network'>
	I0703 23:05:41.525639   27242 main.go:141] libmachine: (ha-856893-m02)       <source network='default'/>
	I0703 23:05:41.525649   27242 main.go:141] libmachine: (ha-856893-m02)       <model type='virtio'/>
	I0703 23:05:41.525661   27242 main.go:141] libmachine: (ha-856893-m02)     </interface>
	I0703 23:05:41.525671   27242 main.go:141] libmachine: (ha-856893-m02)     <serial type='pty'>
	I0703 23:05:41.525684   27242 main.go:141] libmachine: (ha-856893-m02)       <target port='0'/>
	I0703 23:05:41.525699   27242 main.go:141] libmachine: (ha-856893-m02)     </serial>
	I0703 23:05:41.525711   27242 main.go:141] libmachine: (ha-856893-m02)     <console type='pty'>
	I0703 23:05:41.525723   27242 main.go:141] libmachine: (ha-856893-m02)       <target type='serial' port='0'/>
	I0703 23:05:41.525733   27242 main.go:141] libmachine: (ha-856893-m02)     </console>
	I0703 23:05:41.525743   27242 main.go:141] libmachine: (ha-856893-m02)     <rng model='virtio'>
	I0703 23:05:41.525757   27242 main.go:141] libmachine: (ha-856893-m02)       <backend model='random'>/dev/random</backend>
	I0703 23:05:41.525778   27242 main.go:141] libmachine: (ha-856893-m02)     </rng>
	I0703 23:05:41.525789   27242 main.go:141] libmachine: (ha-856893-m02)     
	I0703 23:05:41.525797   27242 main.go:141] libmachine: (ha-856893-m02)     
	I0703 23:05:41.525806   27242 main.go:141] libmachine: (ha-856893-m02)   </devices>
	I0703 23:05:41.525815   27242 main.go:141] libmachine: (ha-856893-m02) </domain>
	I0703 23:05:41.525826   27242 main.go:141] libmachine: (ha-856893-m02) 
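
The XML printed above is then handed to libvirt to define and boot ha-856893-m02 ("Creating domain..."). A hedged sketch of that hand-off using the libvirt Go bindings; the import path and the NewConnect/DomainDefineXML/Create calls are from the current libvirt.org/go/libvirt package and may not match what the kvm2 driver itself uses:

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        xml, err := os.ReadFile("ha-856893-m02.xml") // the domain XML shown in the log
        if err != nil {
            log.Fatal(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the machine config
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // start the defined domain
            log.Fatal(err)
        }
    }
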
	I0703 23:05:41.532564   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:87:47:a5 in network default
	I0703 23:05:41.533109   27242 main.go:141] libmachine: (ha-856893-m02) Ensuring networks are active...
	I0703 23:05:41.533130   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:41.533788   27242 main.go:141] libmachine: (ha-856893-m02) Ensuring network default is active
	I0703 23:05:41.534054   27242 main.go:141] libmachine: (ha-856893-m02) Ensuring network mk-ha-856893 is active
	I0703 23:05:41.534401   27242 main.go:141] libmachine: (ha-856893-m02) Getting domain xml...
	I0703 23:05:41.535101   27242 main.go:141] libmachine: (ha-856893-m02) Creating domain...
	I0703 23:05:42.768845   27242 main.go:141] libmachine: (ha-856893-m02) Waiting to get IP...
	I0703 23:05:42.769571   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:42.769959   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:42.770003   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:42.769952   27608 retry.go:31] will retry after 219.708119ms: waiting for machine to come up
	I0703 23:05:42.991437   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:42.991986   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:42.992017   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:42.991932   27608 retry.go:31] will retry after 272.434306ms: waiting for machine to come up
	I0703 23:05:43.266445   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:43.266888   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:43.266916   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:43.266846   27608 retry.go:31] will retry after 435.377928ms: waiting for machine to come up
	I0703 23:05:43.703359   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:43.703810   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:43.703838   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:43.703758   27608 retry.go:31] will retry after 451.040954ms: waiting for machine to come up
	I0703 23:05:44.156129   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:44.156655   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:44.156683   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:44.156609   27608 retry.go:31] will retry after 760.280274ms: waiting for machine to come up
	I0703 23:05:44.918103   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:44.918554   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:44.918579   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:44.918505   27608 retry.go:31] will retry after 698.518733ms: waiting for machine to come up
	I0703 23:05:45.618162   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:45.618587   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:45.618614   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:45.618539   27608 retry.go:31] will retry after 993.528309ms: waiting for machine to come up
	I0703 23:05:46.614158   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:46.614719   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:46.614745   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:46.614678   27608 retry.go:31] will retry after 1.327932051s: waiting for machine to come up
	I0703 23:05:47.944596   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:47.945018   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:47.945045   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:47.944978   27608 retry.go:31] will retry after 1.683564403s: waiting for machine to come up
	I0703 23:05:49.630786   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:49.631090   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:49.631116   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:49.631040   27608 retry.go:31] will retry after 1.84507818s: waiting for machine to come up
	I0703 23:05:51.477398   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:51.477872   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:51.477893   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:51.477839   27608 retry.go:31] will retry after 1.786726505s: waiting for machine to come up
	I0703 23:05:53.266749   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:53.267104   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:53.267133   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:53.267086   27608 retry.go:31] will retry after 3.479688612s: waiting for machine to come up
	I0703 23:05:56.748688   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:56.749070   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:56.749097   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:56.749047   27608 retry.go:31] will retry after 3.495058467s: waiting for machine to come up
	I0703 23:06:00.248588   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:00.249038   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:06:00.249062   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:06:00.248993   27608 retry.go:31] will retry after 4.710071103s: waiting for machine to come up
	I0703 23:06:04.963165   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:04.963558   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has current primary IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:04.963579   27242 main.go:141] libmachine: (ha-856893-m02) Found IP for machine: 192.168.39.157
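
The "will retry after ...: waiting for machine to come up" lines above are a jittered, growing backoff around a DHCP-lease lookup for the new MAC address, ending once 192.168.39.157 appears. A generic sketch of that retry shape; the lookup callback here is a stand-in, not the driver's lease parser:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // waitForIP polls lookup until it returns an address, sleeping a randomised,
    // growing interval between attempts, like the retry.go lines in the log.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        base := 200 * time.Millisecond
        for attempt := 0; time.Now().Before(deadline); attempt++ {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            d := base*time.Duration(attempt+1) + time.Duration(rand.Int63n(int64(base)))
            if d > 5*time.Second {
                d = 5 * time.Second // cap the delay, as the later retries suggest
            }
            fmt.Printf("will retry after %s: waiting for machine to come up\n", d)
            time.Sleep(d)
        }
        return "", fmt.Errorf("timed out waiting for an IP")
    }

    func main() {
        n := 0
        ip, err := waitForIP(func() (string, error) {
            if n++; n < 5 {
                return "", errNoLease
            }
            return "192.168.39.157", nil
        }, time.Minute)
        fmt.Println(ip, err)
    }
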
	I0703 23:06:04.963599   27242 main.go:141] libmachine: (ha-856893-m02) Reserving static IP address...
	I0703 23:06:04.963959   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find host DHCP lease matching {name: "ha-856893-m02", mac: "52:54:00:88:5c:3d", ip: "192.168.39.157"} in network mk-ha-856893
	I0703 23:06:05.043210   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Getting to WaitForSSH function...
	I0703 23:06:05.043242   27242 main.go:141] libmachine: (ha-856893-m02) Reserved static IP address: 192.168.39.157
	I0703 23:06:05.043256   27242 main.go:141] libmachine: (ha-856893-m02) Waiting for SSH to be available...
	I0703 23:06:05.045810   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:05.046139   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893
	I0703 23:06:05.046163   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find defined IP address of network mk-ha-856893 interface with MAC address 52:54:00:88:5c:3d
	I0703 23:06:05.046324   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Using SSH client type: external
	I0703 23:06:05.046345   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa (-rw-------)
	I0703 23:06:05.046421   27242 main.go:141] libmachine: (ha-856893-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 23:06:05.046443   27242 main.go:141] libmachine: (ha-856893-m02) DBG | About to run SSH command:
	I0703 23:06:05.046462   27242 main.go:141] libmachine: (ha-856893-m02) DBG | exit 0
	I0703 23:06:05.050096   27242 main.go:141] libmachine: (ha-856893-m02) DBG | SSH cmd err, output: exit status 255: 
	I0703 23:06:05.050114   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0703 23:06:05.050124   27242 main.go:141] libmachine: (ha-856893-m02) DBG | command : exit 0
	I0703 23:06:05.050131   27242 main.go:141] libmachine: (ha-856893-m02) DBG | err     : exit status 255
	I0703 23:06:05.050140   27242 main.go:141] libmachine: (ha-856893-m02) DBG | output  : 
	I0703 23:06:08.051925   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Getting to WaitForSSH function...
	I0703 23:06:08.055727   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.056153   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.056179   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.056333   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Using SSH client type: external
	I0703 23:06:08.056344   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa (-rw-------)
	I0703 23:06:08.056368   27242 main.go:141] libmachine: (ha-856893-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.157 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 23:06:08.056380   27242 main.go:141] libmachine: (ha-856893-m02) DBG | About to run SSH command:
	I0703 23:06:08.056395   27242 main.go:141] libmachine: (ha-856893-m02) DBG | exit 0
	I0703 23:06:08.180086   27242 main.go:141] libmachine: (ha-856893-m02) DBG | SSH cmd err, output: <nil>: 
	I0703 23:06:08.180375   27242 main.go:141] libmachine: (ha-856893-m02) KVM machine creation complete!
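
WaitForSSH above shells out to the system ssh client with the options printed at 23:06:05 and runs "exit 0" until it succeeds; the first attempt fails with exit status 255 because sshd is not reachable yet, and the probe is retried a few seconds later. A sketch of that external-ssh probe with os/exec, reusing the flags and paths from the log (illustrative, not libmachine's implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady runs "ssh ... exit 0" against the node; a nil error means sshd is
    // up and the key is accepted, mirroring the WaitForSSH step in the log.
    func sshReady(ip, keyPath string) bool {
        cmd := exec.Command("/usr/bin/ssh",
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@"+ip,
            "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        key := "/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa"
        for !sshReady("192.168.39.157", key) {
            time.Sleep(3 * time.Second) // the log waits about 3s between attempts
        }
        fmt.Println("SSH is available")
    }
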
	I0703 23:06:08.180680   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetConfigRaw
	I0703 23:06:08.181273   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:08.181472   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:08.181738   27242 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0703 23:06:08.181772   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetState
	I0703 23:06:08.183073   27242 main.go:141] libmachine: Detecting operating system of created instance...
	I0703 23:06:08.183084   27242 main.go:141] libmachine: Waiting for SSH to be available...
	I0703 23:06:08.183090   27242 main.go:141] libmachine: Getting to WaitForSSH function...
	I0703 23:06:08.183097   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.185510   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.185869   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.185885   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.186103   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:08.186258   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.186404   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.186562   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:08.186737   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:06:08.186953   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0703 23:06:08.186971   27242 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0703 23:06:08.287312   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:06:08.287335   27242 main.go:141] libmachine: Detecting the provisioner...
	I0703 23:06:08.287345   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.289859   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.290230   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.290255   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.290391   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:08.290601   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.290826   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.290992   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:08.291192   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:06:08.291400   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0703 23:06:08.291413   27242 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0703 23:06:08.397296   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0703 23:06:08.397352   27242 main.go:141] libmachine: found compatible host: buildroot
	I0703 23:06:08.397358   27242 main.go:141] libmachine: Provisioning with buildroot...
	I0703 23:06:08.397365   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetMachineName
	I0703 23:06:08.397596   27242 buildroot.go:166] provisioning hostname "ha-856893-m02"
	I0703 23:06:08.397609   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetMachineName
	I0703 23:06:08.397805   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.400446   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.400800   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.400824   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.401028   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:08.401213   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.401394   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.401516   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:08.401657   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:06:08.401840   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0703 23:06:08.401855   27242 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-856893-m02 && echo "ha-856893-m02" | sudo tee /etc/hostname
	I0703 23:06:08.520319   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-856893-m02
	
	I0703 23:06:08.520345   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.522961   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.523341   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.523368   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.523587   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:08.523781   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.523977   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.524116   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:08.524312   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:06:08.524466   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0703 23:06:08.524481   27242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-856893-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-856893-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-856893-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 23:06:08.633867   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
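
Hostname provisioning runs two remote commands: one sets the hostname and writes /etc/hostname, the other (the shell fragment shown just above) replaces an existing 127.0.1.1 entry in /etc/hosts or appends one. A small sketch that only assembles those commands; runSSH in the comment is hypothetical and stands in for the SSH session seen in the log:

    package main

    import "fmt"

    // hostnameCommands builds the two shell snippets the provisioner sends over
    // SSH: set the hostname, then keep /etc/hosts consistent with it.
    func hostnameCommands(name string) (setHostname, patchHosts string) {
        setHostname = fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
        patchHosts = fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, name)
        return setHostname, patchHosts
    }

    func main() {
        set, patch := hostnameCommands("ha-856893-m02")
        fmt.Println(set)
        fmt.Println(patch)
        // runSSH(set); runSSH(patch) // hypothetical: execute over the SSH session
    }
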
	I0703 23:06:08.633900   27242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0703 23:06:08.633921   27242 buildroot.go:174] setting up certificates
	I0703 23:06:08.633932   27242 provision.go:84] configureAuth start
	I0703 23:06:08.633945   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetMachineName
	I0703 23:06:08.634242   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetIP
	I0703 23:06:08.637222   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.637606   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.637629   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.637798   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.640510   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.640861   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.640885   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.641040   27242 provision.go:143] copyHostCerts
	I0703 23:06:08.641075   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:06:08.641110   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0703 23:06:08.641119   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:06:08.641188   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0703 23:06:08.641264   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:06:08.641289   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0703 23:06:08.641295   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:06:08.641319   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0703 23:06:08.641363   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:06:08.641379   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0703 23:06:08.641385   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:06:08.641406   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0703 23:06:08.641461   27242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.ha-856893-m02 san=[127.0.0.1 192.168.39.157 ha-856893-m02 localhost minikube]
	I0703 23:06:08.796742   27242 provision.go:177] copyRemoteCerts
	I0703 23:06:08.796795   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:06:08.796849   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.799514   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.799786   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.799814   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.800039   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:08.800233   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.800418   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:08.800539   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa Username:docker}
	I0703 23:06:08.882648   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0703 23:06:08.882725   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0703 23:06:08.909249   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0703 23:06:08.909332   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0703 23:06:08.935044   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0703 23:06:08.935123   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0703 23:06:08.961479   27242 provision.go:87] duration metric: took 327.532705ms to configureAuth
	I0703 23:06:08.961528   27242 buildroot.go:189] setting minikube options for container-runtime
	I0703 23:06:08.961731   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:06:08.961796   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.964260   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.964562   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.964599   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.964761   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:08.964962   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.965132   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.965255   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:08.965414   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:06:08.965748   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0703 23:06:08.965776   27242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0703 23:06:09.252115   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0703 23:06:09.252149   27242 main.go:141] libmachine: Checking connection to Docker...
	I0703 23:06:09.252160   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetURL
	I0703 23:06:09.253575   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Using libvirt version 6000000
	I0703 23:06:09.255956   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.256313   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.256339   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.256506   27242 main.go:141] libmachine: Docker is up and running!
	I0703 23:06:09.256517   27242 main.go:141] libmachine: Reticulating splines...
	I0703 23:06:09.256522   27242 client.go:171] duration metric: took 28.044426812s to LocalClient.Create
	I0703 23:06:09.256545   27242 start.go:167] duration metric: took 28.044488456s to libmachine.API.Create "ha-856893"
	I0703 23:06:09.256558   27242 start.go:293] postStartSetup for "ha-856893-m02" (driver="kvm2")
	I0703 23:06:09.256571   27242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 23:06:09.256597   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:09.256867   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 23:06:09.256898   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:09.258897   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.259196   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.259239   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.259356   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:09.259535   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:09.259720   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:09.259905   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa Username:docker}
	I0703 23:06:09.343496   27242 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 23:06:09.347947   27242 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 23:06:09.347969   27242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0703 23:06:09.348034   27242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0703 23:06:09.348116   27242 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0703 23:06:09.348127   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /etc/ssl/certs/165742.pem
	I0703 23:06:09.348228   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0703 23:06:09.358974   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:06:09.386575   27242 start.go:296] duration metric: took 129.995195ms for postStartSetup
	I0703 23:06:09.386638   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetConfigRaw
	I0703 23:06:09.387232   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetIP
	I0703 23:06:09.389784   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.390091   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.390121   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.390365   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:06:09.390569   27242 start.go:128] duration metric: took 28.195940074s to createHost
	I0703 23:06:09.390602   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:09.392949   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.393304   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.393332   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.393472   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:09.393668   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:09.393812   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:09.393960   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:09.394148   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:06:09.394332   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0703 23:06:09.394343   27242 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0703 23:06:09.496753   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720047969.477411010
	
	I0703 23:06:09.496773   27242 fix.go:216] guest clock: 1720047969.477411010
	I0703 23:06:09.496780   27242 fix.go:229] Guest: 2024-07-03 23:06:09.47741101 +0000 UTC Remote: 2024-07-03 23:06:09.39059124 +0000 UTC m=+80.120847171 (delta=86.81977ms)
	I0703 23:06:09.496794   27242 fix.go:200] guest clock delta is within tolerance: 86.81977ms
	I0703 23:06:09.496803   27242 start.go:83] releasing machines lock for "ha-856893-m02", held for 28.302255725s
	I0703 23:06:09.496818   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:09.497106   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetIP
	I0703 23:06:09.499993   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.500377   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.500405   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.502889   27242 out.go:177] * Found network options:
	I0703 23:06:09.504348   27242 out.go:177]   - NO_PROXY=192.168.39.172
	W0703 23:06:09.505618   27242 proxy.go:119] fail to check proxy env: Error ip not in block
	I0703 23:06:09.505646   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:09.506197   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:09.506364   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:09.506442   27242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 23:06:09.506485   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	W0703 23:06:09.506549   27242 proxy.go:119] fail to check proxy env: Error ip not in block
	I0703 23:06:09.506631   27242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0703 23:06:09.506648   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:09.509646   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.509683   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.510044   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.510071   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.510094   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.510105   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.510284   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:09.510625   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:09.510701   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:09.510771   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:09.510887   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:09.510891   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:09.511011   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa Username:docker}
	I0703 23:06:09.511022   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa Username:docker}
	I0703 23:06:09.748974   27242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0703 23:06:09.754928   27242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 23:06:09.754991   27242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 23:06:09.773195   27242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0703 23:06:09.773218   27242 start.go:494] detecting cgroup driver to use...
	I0703 23:06:09.773284   27242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 23:06:09.791699   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 23:06:09.808279   27242 docker.go:217] disabling cri-docker service (if available) ...
	I0703 23:06:09.808345   27242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0703 23:06:09.824370   27242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0703 23:06:09.839742   27242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0703 23:06:09.976077   27242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0703 23:06:10.157590   27242 docker.go:233] disabling docker service ...
	I0703 23:06:10.157655   27242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0703 23:06:10.173171   27242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0703 23:06:10.187323   27242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0703 23:06:10.317842   27242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0703 23:06:10.448801   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0703 23:06:10.464012   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 23:06:10.484552   27242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0703 23:06:10.484626   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.495842   27242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0703 23:06:10.495962   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.507047   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.518157   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.529601   27242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 23:06:10.541072   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.552143   27242 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.570995   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.582051   27242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 23:06:10.592526   27242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0703 23:06:10.592586   27242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0703 23:06:10.607423   27242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0703 23:06:10.617890   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:06:10.738828   27242 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0703 23:06:10.888735   27242 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0703 23:06:10.888797   27242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0703 23:06:10.894395   27242 start.go:562] Will wait 60s for crictl version
	I0703 23:06:10.894461   27242 ssh_runner.go:195] Run: which crictl
	I0703 23:06:10.898671   27242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 23:06:10.940941   27242 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0703 23:06:10.941015   27242 ssh_runner.go:195] Run: crio --version
	I0703 23:06:10.971313   27242 ssh_runner.go:195] Run: crio --version
	I0703 23:06:11.002905   27242 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0703 23:06:11.004738   27242 out.go:177]   - env NO_PROXY=192.168.39.172
	I0703 23:06:11.006065   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetIP
	I0703 23:06:11.008543   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:11.008879   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:11.008909   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:11.009050   27242 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0703 23:06:11.013641   27242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:06:11.027727   27242 mustload.go:65] Loading cluster: ha-856893
	I0703 23:06:11.027975   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:06:11.028270   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:06:11.028323   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:06:11.044531   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44795
	I0703 23:06:11.045043   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:06:11.045558   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:06:11.045579   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:06:11.045862   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:06:11.046039   27242 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:06:11.047494   27242 host.go:66] Checking if "ha-856893" exists ...
	I0703 23:06:11.047885   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:06:11.047930   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:06:11.062704   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37955
	I0703 23:06:11.063093   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:06:11.063555   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:06:11.063572   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:06:11.063895   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:06:11.064071   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:06:11.064261   27242 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893 for IP: 192.168.39.157
	I0703 23:06:11.064278   27242 certs.go:194] generating shared ca certs ...
	I0703 23:06:11.064297   27242 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:06:11.064442   27242 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0703 23:06:11.064488   27242 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0703 23:06:11.064502   27242 certs.go:256] generating profile certs ...
	I0703 23:06:11.064611   27242 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key
	I0703 23:06:11.064645   27242 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.a492e42b
	I0703 23:06:11.064664   27242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.a492e42b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.172 192.168.39.157 192.168.39.254]
	I0703 23:06:11.125542   27242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.a492e42b ...
	I0703 23:06:11.125570   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.a492e42b: {Name:mk6b6ba77f2115f78526ecec09853230dd3e53c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:06:11.125732   27242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.a492e42b ...
	I0703 23:06:11.125745   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.a492e42b: {Name:mkf063a91f34b3b9346f6b304c5ea881bd2f5324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:06:11.125812   27242 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.a492e42b -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt
	I0703 23:06:11.125946   27242 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.a492e42b -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key
	I0703 23:06:11.126068   27242 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key
	I0703 23:06:11.126083   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0703 23:06:11.126094   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0703 23:06:11.126107   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0703 23:06:11.126119   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0703 23:06:11.126131   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0703 23:06:11.126143   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0703 23:06:11.126156   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0703 23:06:11.126174   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0703 23:06:11.126219   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0703 23:06:11.126254   27242 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0703 23:06:11.126262   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0703 23:06:11.126284   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0703 23:06:11.126304   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0703 23:06:11.126325   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0703 23:06:11.126365   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:06:11.126389   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /usr/share/ca-certificates/165742.pem
	I0703 23:06:11.126403   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:06:11.126414   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem -> /usr/share/ca-certificates/16574.pem
	I0703 23:06:11.126446   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:06:11.129130   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:06:11.129526   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:06:11.129547   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:06:11.129763   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:06:11.129991   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:06:11.130155   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:06:11.130308   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:06:11.208220   27242 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0703 23:06:11.214445   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0703 23:06:11.227338   27242 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0703 23:06:11.232205   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0703 23:06:11.244770   27242 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0703 23:06:11.249486   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0703 23:06:11.263595   27242 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0703 23:06:11.268404   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0703 23:06:11.280311   27242 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0703 23:06:11.284783   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0703 23:06:11.296982   27242 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0703 23:06:11.301718   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0703 23:06:11.316760   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 23:06:11.344751   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 23:06:11.372405   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 23:06:11.399264   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0703 23:06:11.425913   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0703 23:06:11.453127   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0703 23:06:11.480939   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 23:06:11.507887   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0703 23:06:11.536077   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0703 23:06:11.562896   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 23:06:11.589792   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0703 23:06:11.619857   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0703 23:06:11.638186   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0703 23:06:11.658574   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0703 23:06:11.681046   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0703 23:06:11.699440   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0703 23:06:11.717487   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0703 23:06:11.735967   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0703 23:06:11.756625   27242 ssh_runner.go:195] Run: openssl version
	I0703 23:06:11.763174   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0703 23:06:11.777088   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0703 23:06:11.782196   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0703 23:06:11.782262   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0703 23:06:11.789061   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0703 23:06:11.802412   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 23:06:11.815542   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:06:11.820664   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:06:11.820720   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:06:11.827137   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0703 23:06:11.839737   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0703 23:06:11.852655   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0703 23:06:11.857826   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0703 23:06:11.857882   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0703 23:06:11.863859   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0703 23:06:11.875860   27242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 23:06:11.880842   27242 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0703 23:06:11.880910   27242 kubeadm.go:928] updating node {m02 192.168.39.157 8443 v1.30.2 crio true true} ...
	I0703 23:06:11.880993   27242 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-856893-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0703 23:06:11.881017   27242 kube-vip.go:115] generating kube-vip config ...
	I0703 23:06:11.881059   27242 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0703 23:06:11.901217   27242 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0703 23:06:11.901292   27242 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0703 23:06:11.901361   27242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0703 23:06:11.912603   27242 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0703 23:06:11.912662   27242 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0703 23:06:11.923700   27242 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0703 23:06:11.923725   27242 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0703 23:06:11.923738   27242 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0703 23:06:11.923750   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0703 23:06:11.923823   27242 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0703 23:06:11.930352   27242 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0703 23:06:11.930395   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0703 23:06:18.577968   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0703 23:06:18.578050   27242 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0703 23:06:18.584084   27242 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0703 23:06:18.584127   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0703 23:06:24.489268   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:06:24.506069   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0703 23:06:24.506160   27242 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0703 23:06:24.510885   27242 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0703 23:06:24.510927   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
	I0703 23:06:24.948564   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0703 23:06:24.961462   27242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0703 23:06:24.980150   27242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 23:06:24.998455   27242 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0703 23:06:25.016528   27242 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0703 23:06:25.020797   27242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:06:25.034283   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:06:25.172768   27242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:06:25.191293   27242 host.go:66] Checking if "ha-856893" exists ...
	I0703 23:06:25.191893   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:06:25.191940   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:06:25.207801   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38569
	I0703 23:06:25.208291   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:06:25.208871   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:06:25.208895   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:06:25.209219   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:06:25.209391   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:06:25.209509   27242 start.go:316] joinCluster: &{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:06:25.209636   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0703 23:06:25.209656   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:06:25.213110   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:06:25.213539   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:06:25.213572   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:06:25.213846   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:06:25.214062   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:06:25.214220   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:06:25.214382   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:06:25.391200   27242 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:06:25.391247   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bfeyib.89k5hf5p18zb6r7t --discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-856893-m02 --control-plane --apiserver-advertise-address=192.168.39.157 --apiserver-bind-port=8443"
	I0703 23:06:47.544091   27242 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bfeyib.89k5hf5p18zb6r7t --discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-856893-m02 --control-plane --apiserver-advertise-address=192.168.39.157 --apiserver-bind-port=8443": (22.152804646s)
	I0703 23:06:47.544127   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0703 23:06:48.068945   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-856893-m02 minikube.k8s.io/updated_at=2024_07_03T23_06_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e minikube.k8s.io/name=ha-856893 minikube.k8s.io/primary=false
	I0703 23:06:48.232893   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-856893-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0703 23:06:48.350705   27242 start.go:318] duration metric: took 23.141192018s to joinCluster
	I0703 23:06:48.350794   27242 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:06:48.351091   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:06:48.352341   27242 out.go:177] * Verifying Kubernetes components...
	I0703 23:06:48.353641   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:06:48.588280   27242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:06:48.608838   27242 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:06:48.609120   27242 kapi.go:59] client config for ha-856893: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.crt", KeyFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key", CAFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfd900), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0703 23:06:48.609198   27242 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.172:8443
	I0703 23:06:48.609481   27242 node_ready.go:35] waiting up to 6m0s for node "ha-856893-m02" to be "Ready" ...
	I0703 23:06:48.609599   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:48.609611   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:48.609620   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:48.609626   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:48.622593   27242 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0703 23:06:49.109815   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:49.109841   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:49.109851   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:49.109860   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:49.119178   27242 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0703 23:06:49.609829   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:49.609864   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:49.609873   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:49.609877   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:49.613800   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:50.110707   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:50.110728   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:50.110736   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:50.110740   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:50.114001   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:50.609830   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:50.609883   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:50.609896   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:50.609903   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:50.613093   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:50.613625   27242 node_ready.go:53] node "ha-856893-m02" has status "Ready":"False"
	I0703 23:06:51.109898   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:51.109927   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:51.109937   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:51.109943   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:51.113216   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:51.609829   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:51.609854   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:51.609862   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:51.609867   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:51.613350   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:52.110567   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:52.110587   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:52.110594   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:52.110598   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:52.114275   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:52.610448   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:52.610473   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:52.610484   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:52.610490   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:52.613455   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:52.614165   27242 node_ready.go:53] node "ha-856893-m02" has status "Ready":"False"
	I0703 23:06:53.110342   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:53.110372   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:53.110384   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:53.110390   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:53.113932   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:53.610596   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:53.610615   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:53.610624   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:53.610628   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:53.613938   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:54.110534   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:54.110616   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:54.110634   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:54.110642   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:54.114018   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:54.610334   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:54.610351   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:54.610358   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:54.610362   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:54.613905   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:54.614483   27242 node_ready.go:53] node "ha-856893-m02" has status "Ready":"False"
	I0703 23:06:55.109792   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:55.109813   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.109821   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.109824   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.113250   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:55.609747   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:55.609767   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.609777   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.609783   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.612716   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:55.613412   27242 node_ready.go:49] node "ha-856893-m02" has status "Ready":"True"
	I0703 23:06:55.613435   27242 node_ready.go:38] duration metric: took 7.003919204s for node "ha-856893-m02" to be "Ready" ...
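
The block of repeated GET requests above is the node_ready wait loop: the same node object is fetched roughly every 500ms until its Ready condition turns True. A minimal client-go sketch of that idea (assumed kubeconfig path; node name taken from the log; this is not node_ready.go itself):

// Sketch: poll a node until it reports Ready.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-856893-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing visible in the log
	}
}
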
	I0703 23:06:55.613447   27242 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 23:06:55.613534   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:06:55.613547   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.613557   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.613562   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.618175   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:06:55.623904   27242 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n5tdf" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.623988   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-n5tdf
	I0703 23:06:55.623996   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.624003   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.624009   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.627442   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:55.628363   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:06:55.628382   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.628394   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.628402   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.631180   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:55.631700   27242 pod_ready.go:92] pod "coredns-7db6d8ff4d-n5tdf" in "kube-system" namespace has status "Ready":"True"
	I0703 23:06:55.631719   27242 pod_ready.go:81] duration metric: took 7.786492ms for pod "coredns-7db6d8ff4d-n5tdf" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.631728   27242 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pwqfl" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.631796   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-pwqfl
	I0703 23:06:55.631806   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.631815   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.631820   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.635897   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:06:55.636658   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:06:55.636678   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.636687   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.636692   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.639691   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:55.640704   27242 pod_ready.go:92] pod "coredns-7db6d8ff4d-pwqfl" in "kube-system" namespace has status "Ready":"True"
	I0703 23:06:55.640723   27242 pod_ready.go:81] duration metric: took 8.987769ms for pod "coredns-7db6d8ff4d-pwqfl" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.640734   27242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.640789   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893
	I0703 23:06:55.640797   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.640803   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.640807   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.643359   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:55.643907   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:06:55.643924   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.643932   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.643936   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.646899   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:55.647968   27242 pod_ready.go:92] pod "etcd-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:06:55.647991   27242 pod_ready.go:81] duration metric: took 7.249953ms for pod "etcd-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.648004   27242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.648071   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:55.648085   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.648095   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.648101   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.650814   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:55.651459   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:55.651474   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.651486   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.651490   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.653793   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:56.148491   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:56.148513   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:56.148521   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:56.148525   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:56.152385   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:56.153042   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:56.153060   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:56.153067   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:56.153071   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:56.157627   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:06:56.649122   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:56.649140   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:56.649146   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:56.649149   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:56.652526   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:56.653306   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:56.653320   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:56.653327   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:56.653331   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:56.655979   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:57.149064   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:57.149092   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:57.149101   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:57.149106   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:57.152417   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:57.153222   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:57.153241   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:57.153249   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:57.153254   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:57.156135   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:57.649140   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:57.649181   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:57.649192   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:57.649198   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:57.652477   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:57.653084   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:57.653100   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:57.653106   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:57.653111   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:57.655555   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:57.656210   27242 pod_ready.go:102] pod "etcd-ha-856893-m02" in "kube-system" namespace has status "Ready":"False"
	I0703 23:06:58.148254   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:58.148274   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:58.148282   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:58.148286   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:58.152590   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:06:58.153465   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:58.153480   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:58.153488   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:58.153495   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:58.156588   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:58.648596   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:58.648622   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:58.648633   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:58.648639   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:58.651552   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:58.652309   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:58.652326   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:58.652333   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:58.652338   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:58.654822   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:59.148789   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:59.148811   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.148820   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.148824   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.152583   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:59.153376   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:59.153394   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.153401   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.153406   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.156325   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:59.648919   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:59.648945   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.648956   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.648963   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.652540   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:59.653454   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:59.653476   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.653487   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.653508   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.658095   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:06:59.658913   27242 pod_ready.go:92] pod "etcd-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:06:59.658934   27242 pod_ready.go:81] duration metric: took 4.010920952s for pod "etcd-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:59.658949   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:59.659006   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893
	I0703 23:06:59.659016   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.659027   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.659036   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.661826   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:59.662571   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:06:59.662588   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.662595   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.662598   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.665446   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:59.665948   27242 pod_ready.go:92] pod "kube-apiserver-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:06:59.665968   27242 pod_ready.go:81] duration metric: took 7.012702ms for pod "kube-apiserver-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:59.665978   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:59.666039   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:06:59.666046   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.666053   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.666056   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.668927   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:59.669628   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:59.669644   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.669651   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.669656   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.672172   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:00.167115   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:07:00.167140   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:00.167150   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:00.167156   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:00.170205   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:00.170996   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:00.171017   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:00.171029   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:00.171039   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:00.173937   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:00.666560   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:07:00.666581   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:00.666591   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:00.666598   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:00.685399   27242 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0703 23:07:00.686013   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:00.686031   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:00.686039   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:00.686044   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:00.694695   27242 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0703 23:07:01.166491   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:07:01.166515   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:01.166524   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:01.166529   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:01.170037   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:01.170694   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:01.170710   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:01.170717   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:01.170722   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:01.173354   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:01.666570   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:07:01.666592   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:01.666600   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:01.666603   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:01.670182   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:01.670960   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:01.670972   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:01.670980   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:01.670984   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:01.673678   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:01.674253   27242 pod_ready.go:102] pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace has status "Ready":"False"
	I0703 23:07:02.166192   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:07:02.166222   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.166234   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.166241   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.169265   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:02.170194   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:02.170209   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.170217   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.170220   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.173318   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:02.173900   27242 pod_ready.go:92] pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:02.173921   27242 pod_ready.go:81] duration metric: took 2.507930848s for pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:02.173934   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:02.173990   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893
	I0703 23:07:02.173999   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.174007   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.174011   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.177819   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:02.178515   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:07:02.178531   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.178539   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.178542   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.181392   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:02.181852   27242 pod_ready.go:92] pod "kube-controller-manager-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:02.181870   27242 pod_ready.go:81] duration metric: took 7.929988ms for pod "kube-controller-manager-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:02.181879   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:02.210176   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:07:02.210204   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.210225   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.210231   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.216238   27242 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0703 23:07:02.410326   27242 request.go:629] Waited for 193.332004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:02.410396   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:02.410402   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.410409   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.410414   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.414343   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:02.682063   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:07:02.682086   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.682094   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.682099   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.685969   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:02.809842   27242 request.go:629] Waited for 123.198326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:02.809919   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:02.809924   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.809931   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.809935   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.813615   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:03.182561   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:07:03.182583   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:03.182591   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:03.182595   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:03.185818   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:03.210189   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:03.210213   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:03.210226   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:03.210231   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:03.212835   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:03.682870   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:07:03.682893   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:03.682904   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:03.682913   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:03.687007   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:07:03.687982   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:03.688000   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:03.688007   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:03.688010   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:03.690789   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:04.182980   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:07:04.183005   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:04.183012   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:04.183015   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:04.187120   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:07:04.187803   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:04.187820   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:04.187827   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:04.187832   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:04.190585   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:04.191265   27242 pod_ready.go:102] pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace has status "Ready":"False"
	I0703 23:07:04.682068   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:07:04.682093   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:04.682101   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:04.682105   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:04.685315   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:04.686021   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:04.686042   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:04.686051   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:04.686060   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:04.689699   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:04.690333   27242 pod_ready.go:92] pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:04.690354   27242 pod_ready.go:81] duration metric: took 2.508468638s for pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:04.690363   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-52zqj" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:04.690415   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52zqj
	I0703 23:07:04.690423   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:04.690429   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:04.690433   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:04.693270   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:04.810198   27242 request.go:629] Waited for 116.3003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:07:04.810277   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:07:04.810287   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:04.810297   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:04.810306   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:04.813548   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:04.814288   27242 pod_ready.go:92] pod "kube-proxy-52zqj" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:04.814310   27242 pod_ready.go:81] duration metric: took 123.940721ms for pod "kube-proxy-52zqj" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:04.814321   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gkwrn" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:05.009731   27242 request.go:629] Waited for 195.334691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkwrn
	I0703 23:07:05.009801   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkwrn
	I0703 23:07:05.009812   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:05.009823   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:05.009831   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:05.013135   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:05.209785   27242 request.go:629] Waited for 196.045433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:05.209863   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:05.209876   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:05.209888   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:05.209896   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:05.213369   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:05.213938   27242 pod_ready.go:92] pod "kube-proxy-gkwrn" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:05.213964   27242 pod_ready.go:81] duration metric: took 399.631019ms for pod "kube-proxy-gkwrn" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:05.213978   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:05.410292   27242 request.go:629] Waited for 196.24208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893
	I0703 23:07:05.410371   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893
	I0703 23:07:05.410382   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:05.410392   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:05.410398   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:05.413436   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:05.610477   27242 request.go:629] Waited for 196.362666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:07:05.610529   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:07:05.610542   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:05.610550   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:05.610554   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:05.613467   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:05.613972   27242 pod_ready.go:92] pod "kube-scheduler-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:05.613988   27242 pod_ready.go:81] duration metric: took 399.999359ms for pod "kube-scheduler-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:05.613996   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:05.810106   27242 request.go:629] Waited for 196.052695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893-m02
	I0703 23:07:05.810176   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893-m02
	I0703 23:07:05.810185   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:05.810209   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:05.810232   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:05.813771   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:06.009910   27242 request.go:629] Waited for 195.274604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:06.009982   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:06.009992   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:06.010002   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:06.010010   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:06.013701   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:06.014446   27242 pod_ready.go:92] pod "kube-scheduler-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:06.014463   27242 pod_ready.go:81] duration metric: took 400.459709ms for pod "kube-scheduler-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:06.014476   27242 pod_ready.go:38] duration metric: took 10.401015204s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 23:07:06.014493   27242 api_server.go:52] waiting for apiserver process to appear ...
	I0703 23:07:06.014549   27242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:07:06.030327   27242 api_server.go:72] duration metric: took 17.679497097s to wait for apiserver process to appear ...
	I0703 23:07:06.030347   27242 api_server.go:88] waiting for apiserver healthz status ...
	I0703 23:07:06.030365   27242 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0703 23:07:06.036783   27242 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I0703 23:07:06.036854   27242 round_trippers.go:463] GET https://192.168.39.172:8443/version
	I0703 23:07:06.036859   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:06.036867   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:06.036872   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:06.037690   27242 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0703 23:07:06.037801   27242 api_server.go:141] control plane version: v1.30.2
	I0703 23:07:06.037818   27242 api_server.go:131] duration metric: took 7.465872ms to wait for apiserver health ...
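
For reference, the healthz/version probe above can be reproduced in a few lines; this sketch skips TLS verification purely to stay short (an assumption, not what minikube does -- the test authenticates with the kubeconfig's client certificates), and both endpoints are typically readable anonymously on a default install via the system:public-info-viewer binding.

// Sketch: probe the apiserver's /healthz and /version endpoints.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // brevity only
	}}

	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.168.39.172:8443" + path)
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body)
	}
}
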
	I0703 23:07:06.037825   27242 system_pods.go:43] waiting for kube-system pods to appear ...
	I0703 23:07:06.209877   27242 request.go:629] Waited for 171.974222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:07:06.210016   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:07:06.210032   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:06.210040   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:06.210046   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:06.214918   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:07:06.219567   27242 system_pods.go:59] 17 kube-system pods found
	I0703 23:07:06.219598   27242 system_pods.go:61] "coredns-7db6d8ff4d-n5tdf" [8efbbc3c-e2d5-4f13-8672-cf7524f72e2d] Running
	I0703 23:07:06.219602   27242 system_pods.go:61] "coredns-7db6d8ff4d-pwqfl" [b4d22edf-e718-4755-b211-c8279481005e] Running
	I0703 23:07:06.219607   27242 system_pods.go:61] "etcd-ha-856893" [6b7ae8d7-b953-4a2a-8745-d672d5ef800d] Running
	I0703 23:07:06.219610   27242 system_pods.go:61] "etcd-ha-856893-m02" [c84132ba-b9c8-491c-a50d-ebe97039cb4a] Running
	I0703 23:07:06.219614   27242 system_pods.go:61] "kindnet-h7ntk" [18e6d992-2713-4399-a160-5f9196981f26] Running
	I0703 23:07:06.219617   27242 system_pods.go:61] "kindnet-rwqsq" [438780e1-32f3-4b9f-a85e-b6a9eeac268f] Running
	I0703 23:07:06.219620   27242 system_pods.go:61] "kube-apiserver-ha-856893" [1818612d-9082-49ba-863e-25d530ad2893] Running
	I0703 23:07:06.219623   27242 system_pods.go:61] "kube-apiserver-ha-856893-m02" [09914e18-0b79-4722-9b14-a4b70d5ad800] Running
	I0703 23:07:06.219628   27242 system_pods.go:61] "kube-controller-manager-ha-856893" [4906346a-6b42-4c32-9ea0-8cd61b06580b] Running
	I0703 23:07:06.219637   27242 system_pods.go:61] "kube-controller-manager-ha-856893-m02" [e90cd411-c185-4652-8e21-6fd48fb2b5d1] Running
	I0703 23:07:06.219643   27242 system_pods.go:61] "kube-proxy-52zqj" [7cbc16d2-e9f6-487f-a974-0fa21e4163b5] Running
	I0703 23:07:06.219648   27242 system_pods.go:61] "kube-proxy-gkwrn" [5fefb775-6224-47be-ad73-443c957c8f69] Running
	I0703 23:07:06.219658   27242 system_pods.go:61] "kube-scheduler-ha-856893" [a3f43eb4-e248-41a0-b86c-0564becadc2b] Running
	I0703 23:07:06.219664   27242 system_pods.go:61] "kube-scheduler-ha-856893-m02" [02f862e9-2b93-4f74-847f-32d753dd9456] Running
	I0703 23:07:06.219669   27242 system_pods.go:61] "kube-vip-ha-856893" [0c4a20fd-99f2-4d6a-a332-2a79e4431b88] Running
	I0703 23:07:06.219676   27242 system_pods.go:61] "kube-vip-ha-856893-m02" [560fb438-21bf-45bb-9cf0-38dd21b61c80] Running
	I0703 23:07:06.219682   27242 system_pods.go:61] "storage-provisioner" [91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8] Running
	I0703 23:07:06.219693   27242 system_pods.go:74] duration metric: took 181.861646ms to wait for pod list to return data ...
	I0703 23:07:06.219700   27242 default_sa.go:34] waiting for default service account to be created ...
	I0703 23:07:06.410182   27242 request.go:629] Waited for 190.397554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/default/serviceaccounts
	I0703 23:07:06.410264   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/default/serviceaccounts
	I0703 23:07:06.410274   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:06.410285   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:06.410289   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:06.413289   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:06.413480   27242 default_sa.go:45] found service account: "default"
	I0703 23:07:06.413495   27242 default_sa.go:55] duration metric: took 193.786983ms for default service account to be created ...
	I0703 23:07:06.413503   27242 system_pods.go:116] waiting for k8s-apps to be running ...
	I0703 23:07:06.609837   27242 request.go:629] Waited for 196.27709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:07:06.609895   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:07:06.609901   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:06.609908   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:06.609912   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:06.614868   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:07:06.619343   27242 system_pods.go:86] 17 kube-system pods found
	I0703 23:07:06.619371   27242 system_pods.go:89] "coredns-7db6d8ff4d-n5tdf" [8efbbc3c-e2d5-4f13-8672-cf7524f72e2d] Running
	I0703 23:07:06.619376   27242 system_pods.go:89] "coredns-7db6d8ff4d-pwqfl" [b4d22edf-e718-4755-b211-c8279481005e] Running
	I0703 23:07:06.619380   27242 system_pods.go:89] "etcd-ha-856893" [6b7ae8d7-b953-4a2a-8745-d672d5ef800d] Running
	I0703 23:07:06.619384   27242 system_pods.go:89] "etcd-ha-856893-m02" [c84132ba-b9c8-491c-a50d-ebe97039cb4a] Running
	I0703 23:07:06.619388   27242 system_pods.go:89] "kindnet-h7ntk" [18e6d992-2713-4399-a160-5f9196981f26] Running
	I0703 23:07:06.619392   27242 system_pods.go:89] "kindnet-rwqsq" [438780e1-32f3-4b9f-a85e-b6a9eeac268f] Running
	I0703 23:07:06.619395   27242 system_pods.go:89] "kube-apiserver-ha-856893" [1818612d-9082-49ba-863e-25d530ad2893] Running
	I0703 23:07:06.619400   27242 system_pods.go:89] "kube-apiserver-ha-856893-m02" [09914e18-0b79-4722-9b14-a4b70d5ad800] Running
	I0703 23:07:06.619404   27242 system_pods.go:89] "kube-controller-manager-ha-856893" [4906346a-6b42-4c32-9ea0-8cd61b06580b] Running
	I0703 23:07:06.619408   27242 system_pods.go:89] "kube-controller-manager-ha-856893-m02" [e90cd411-c185-4652-8e21-6fd48fb2b5d1] Running
	I0703 23:07:06.619412   27242 system_pods.go:89] "kube-proxy-52zqj" [7cbc16d2-e9f6-487f-a974-0fa21e4163b5] Running
	I0703 23:07:06.619416   27242 system_pods.go:89] "kube-proxy-gkwrn" [5fefb775-6224-47be-ad73-443c957c8f69] Running
	I0703 23:07:06.619420   27242 system_pods.go:89] "kube-scheduler-ha-856893" [a3f43eb4-e248-41a0-b86c-0564becadc2b] Running
	I0703 23:07:06.619424   27242 system_pods.go:89] "kube-scheduler-ha-856893-m02" [02f862e9-2b93-4f74-847f-32d753dd9456] Running
	I0703 23:07:06.619428   27242 system_pods.go:89] "kube-vip-ha-856893" [0c4a20fd-99f2-4d6a-a332-2a79e4431b88] Running
	I0703 23:07:06.619433   27242 system_pods.go:89] "kube-vip-ha-856893-m02" [560fb438-21bf-45bb-9cf0-38dd21b61c80] Running
	I0703 23:07:06.619437   27242 system_pods.go:89] "storage-provisioner" [91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8] Running
	I0703 23:07:06.619444   27242 system_pods.go:126] duration metric: took 205.937561ms to wait for k8s-apps to be running ...
	I0703 23:07:06.619453   27242 system_svc.go:44] waiting for kubelet service to be running ....
	I0703 23:07:06.619502   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:07:06.636194   27242 system_svc.go:56] duration metric: took 16.729677ms WaitForService to wait for kubelet
	I0703 23:07:06.636223   27242 kubeadm.go:576] duration metric: took 18.285397296s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 23:07:06.636240   27242 node_conditions.go:102] verifying NodePressure condition ...
	I0703 23:07:06.810678   27242 request.go:629] Waited for 174.367698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes
	I0703 23:07:06.810751   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes
	I0703 23:07:06.810759   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:06.810766   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:06.810773   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:06.814396   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:06.815321   27242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 23:07:06.815347   27242 node_conditions.go:123] node cpu capacity is 2
	I0703 23:07:06.815358   27242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 23:07:06.815361   27242 node_conditions.go:123] node cpu capacity is 2
	I0703 23:07:06.815365   27242 node_conditions.go:105] duration metric: took 179.120869ms to run NodePressure ...
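(The cpu and ephemeral-storage figures above are read from the two node objects; they can be cross-checked from the test host with kubectl, assuming the kubeconfig context carries the profile name, which is minikube's default behaviour. Sketch only.)
	# illustrative spot check of the same node capacity values
	kubectl --context ha-856893 get nodes
	kubectl --context ha-856893 describe nodes | grep -E '^Name:|cpu:|ephemeral-storage:'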
	I0703 23:07:06.815375   27242 start.go:240] waiting for startup goroutines ...
	I0703 23:07:06.815405   27242 start.go:254] writing updated cluster config ...
	I0703 23:07:06.817467   27242 out.go:177] 
	I0703 23:07:06.818836   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:07:06.818926   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:07:06.820500   27242 out.go:177] * Starting "ha-856893-m03" control-plane node in "ha-856893" cluster
	I0703 23:07:06.821716   27242 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:07:06.821732   27242 cache.go:56] Caching tarball of preloaded images
	I0703 23:07:06.821877   27242 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:07:06.821891   27242 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 23:07:06.821981   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:07:06.822155   27242 start.go:360] acquireMachinesLock for ha-856893-m03: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 23:07:06.822195   27242 start.go:364] duration metric: took 22.144µs to acquireMachinesLock for "ha-856893-m03"
	I0703 23:07:06.822209   27242 start.go:93] Provisioning new machine with config: &{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:07:06.822295   27242 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0703 23:07:06.823658   27242 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0703 23:07:06.823727   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:07:06.823756   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:07:06.838452   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44291
	I0703 23:07:06.838936   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:07:06.839363   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:07:06.839383   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:07:06.839736   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:07:06.839918   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetMachineName
	I0703 23:07:06.840069   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:06.840226   27242 start.go:159] libmachine.API.Create for "ha-856893" (driver="kvm2")
	I0703 23:07:06.840254   27242 client.go:168] LocalClient.Create starting
	I0703 23:07:06.840290   27242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem
	I0703 23:07:06.840327   27242 main.go:141] libmachine: Decoding PEM data...
	I0703 23:07:06.840346   27242 main.go:141] libmachine: Parsing certificate...
	I0703 23:07:06.840410   27242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem
	I0703 23:07:06.840432   27242 main.go:141] libmachine: Decoding PEM data...
	I0703 23:07:06.840449   27242 main.go:141] libmachine: Parsing certificate...
	I0703 23:07:06.840474   27242 main.go:141] libmachine: Running pre-create checks...
	I0703 23:07:06.840485   27242 main.go:141] libmachine: (ha-856893-m03) Calling .PreCreateCheck
	I0703 23:07:06.840643   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetConfigRaw
	I0703 23:07:06.841024   27242 main.go:141] libmachine: Creating machine...
	I0703 23:07:06.841038   27242 main.go:141] libmachine: (ha-856893-m03) Calling .Create
	I0703 23:07:06.841188   27242 main.go:141] libmachine: (ha-856893-m03) Creating KVM machine...
	I0703 23:07:06.842688   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found existing default KVM network
	I0703 23:07:06.842868   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found existing private KVM network mk-ha-856893
	I0703 23:07:06.843022   27242 main.go:141] libmachine: (ha-856893-m03) Setting up store path in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03 ...
	I0703 23:07:06.843049   27242 main.go:141] libmachine: (ha-856893-m03) Building disk image from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0703 23:07:06.843102   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:06.842997   28071 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:07:06.843189   27242 main.go:141] libmachine: (ha-856893-m03) Downloading /home/jenkins/minikube-integration/18998-9396/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso...
	I0703 23:07:07.067762   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:07.067633   28071 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa...
	I0703 23:07:07.216110   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:07.215993   28071 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/ha-856893-m03.rawdisk...
	I0703 23:07:07.216138   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Writing magic tar header
	I0703 23:07:07.216158   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Writing SSH key tar header
	I0703 23:07:07.216172   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:07.216113   28071 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03 ...
	I0703 23:07:07.216256   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03
	I0703 23:07:07.216285   27242 main.go:141] libmachine: (ha-856893-m03) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03 (perms=drwx------)
	I0703 23:07:07.216298   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines
	I0703 23:07:07.216313   27242 main.go:141] libmachine: (ha-856893-m03) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines (perms=drwxr-xr-x)
	I0703 23:07:07.216337   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:07:07.216352   27242 main.go:141] libmachine: (ha-856893-m03) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube (perms=drwxr-xr-x)
	I0703 23:07:07.216366   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396
	I0703 23:07:07.216383   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0703 23:07:07.216405   27242 main.go:141] libmachine: (ha-856893-m03) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396 (perms=drwxrwxr-x)
	I0703 23:07:07.216424   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home/jenkins
	I0703 23:07:07.216451   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home
	I0703 23:07:07.216463   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Skipping /home - not owner
	I0703 23:07:07.216477   27242 main.go:141] libmachine: (ha-856893-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0703 23:07:07.216497   27242 main.go:141] libmachine: (ha-856893-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0703 23:07:07.216508   27242 main.go:141] libmachine: (ha-856893-m03) Creating domain...
	I0703 23:07:07.217338   27242 main.go:141] libmachine: (ha-856893-m03) define libvirt domain using xml: 
	I0703 23:07:07.217359   27242 main.go:141] libmachine: (ha-856893-m03) <domain type='kvm'>
	I0703 23:07:07.217366   27242 main.go:141] libmachine: (ha-856893-m03)   <name>ha-856893-m03</name>
	I0703 23:07:07.217375   27242 main.go:141] libmachine: (ha-856893-m03)   <memory unit='MiB'>2200</memory>
	I0703 23:07:07.217404   27242 main.go:141] libmachine: (ha-856893-m03)   <vcpu>2</vcpu>
	I0703 23:07:07.217426   27242 main.go:141] libmachine: (ha-856893-m03)   <features>
	I0703 23:07:07.217439   27242 main.go:141] libmachine: (ha-856893-m03)     <acpi/>
	I0703 23:07:07.217450   27242 main.go:141] libmachine: (ha-856893-m03)     <apic/>
	I0703 23:07:07.217460   27242 main.go:141] libmachine: (ha-856893-m03)     <pae/>
	I0703 23:07:07.217471   27242 main.go:141] libmachine: (ha-856893-m03)     
	I0703 23:07:07.217482   27242 main.go:141] libmachine: (ha-856893-m03)   </features>
	I0703 23:07:07.217492   27242 main.go:141] libmachine: (ha-856893-m03)   <cpu mode='host-passthrough'>
	I0703 23:07:07.217510   27242 main.go:141] libmachine: (ha-856893-m03)   
	I0703 23:07:07.217527   27242 main.go:141] libmachine: (ha-856893-m03)   </cpu>
	I0703 23:07:07.217543   27242 main.go:141] libmachine: (ha-856893-m03)   <os>
	I0703 23:07:07.217559   27242 main.go:141] libmachine: (ha-856893-m03)     <type>hvm</type>
	I0703 23:07:07.217570   27242 main.go:141] libmachine: (ha-856893-m03)     <boot dev='cdrom'/>
	I0703 23:07:07.217575   27242 main.go:141] libmachine: (ha-856893-m03)     <boot dev='hd'/>
	I0703 23:07:07.217583   27242 main.go:141] libmachine: (ha-856893-m03)     <bootmenu enable='no'/>
	I0703 23:07:07.217591   27242 main.go:141] libmachine: (ha-856893-m03)   </os>
	I0703 23:07:07.217599   27242 main.go:141] libmachine: (ha-856893-m03)   <devices>
	I0703 23:07:07.217604   27242 main.go:141] libmachine: (ha-856893-m03)     <disk type='file' device='cdrom'>
	I0703 23:07:07.217614   27242 main.go:141] libmachine: (ha-856893-m03)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/boot2docker.iso'/>
	I0703 23:07:07.217621   27242 main.go:141] libmachine: (ha-856893-m03)       <target dev='hdc' bus='scsi'/>
	I0703 23:07:07.217635   27242 main.go:141] libmachine: (ha-856893-m03)       <readonly/>
	I0703 23:07:07.217651   27242 main.go:141] libmachine: (ha-856893-m03)     </disk>
	I0703 23:07:07.217665   27242 main.go:141] libmachine: (ha-856893-m03)     <disk type='file' device='disk'>
	I0703 23:07:07.217676   27242 main.go:141] libmachine: (ha-856893-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0703 23:07:07.217694   27242 main.go:141] libmachine: (ha-856893-m03)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/ha-856893-m03.rawdisk'/>
	I0703 23:07:07.217706   27242 main.go:141] libmachine: (ha-856893-m03)       <target dev='hda' bus='virtio'/>
	I0703 23:07:07.217718   27242 main.go:141] libmachine: (ha-856893-m03)     </disk>
	I0703 23:07:07.217733   27242 main.go:141] libmachine: (ha-856893-m03)     <interface type='network'>
	I0703 23:07:07.217747   27242 main.go:141] libmachine: (ha-856893-m03)       <source network='mk-ha-856893'/>
	I0703 23:07:07.217757   27242 main.go:141] libmachine: (ha-856893-m03)       <model type='virtio'/>
	I0703 23:07:07.217767   27242 main.go:141] libmachine: (ha-856893-m03)     </interface>
	I0703 23:07:07.217778   27242 main.go:141] libmachine: (ha-856893-m03)     <interface type='network'>
	I0703 23:07:07.217804   27242 main.go:141] libmachine: (ha-856893-m03)       <source network='default'/>
	I0703 23:07:07.217821   27242 main.go:141] libmachine: (ha-856893-m03)       <model type='virtio'/>
	I0703 23:07:07.217830   27242 main.go:141] libmachine: (ha-856893-m03)     </interface>
	I0703 23:07:07.217837   27242 main.go:141] libmachine: (ha-856893-m03)     <serial type='pty'>
	I0703 23:07:07.217844   27242 main.go:141] libmachine: (ha-856893-m03)       <target port='0'/>
	I0703 23:07:07.217853   27242 main.go:141] libmachine: (ha-856893-m03)     </serial>
	I0703 23:07:07.217862   27242 main.go:141] libmachine: (ha-856893-m03)     <console type='pty'>
	I0703 23:07:07.217873   27242 main.go:141] libmachine: (ha-856893-m03)       <target type='serial' port='0'/>
	I0703 23:07:07.217883   27242 main.go:141] libmachine: (ha-856893-m03)     </console>
	I0703 23:07:07.217893   27242 main.go:141] libmachine: (ha-856893-m03)     <rng model='virtio'>
	I0703 23:07:07.217903   27242 main.go:141] libmachine: (ha-856893-m03)       <backend model='random'>/dev/random</backend>
	I0703 23:07:07.217917   27242 main.go:141] libmachine: (ha-856893-m03)     </rng>
	I0703 23:07:07.217941   27242 main.go:141] libmachine: (ha-856893-m03)     
	I0703 23:07:07.217959   27242 main.go:141] libmachine: (ha-856893-m03)     
	I0703 23:07:07.217972   27242 main.go:141] libmachine: (ha-856893-m03)   </devices>
	I0703 23:07:07.217982   27242 main.go:141] libmachine: (ha-856893-m03) </domain>
	I0703 23:07:07.217997   27242 main.go:141] libmachine: (ha-856893-m03) 
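(The domain defined from the XML above, and the two libvirt networks it attaches to, can be inspected directly with virsh using the qemu:///system URI from the machine config; for reference only.)
	# dump the domain definition and confirm both networks are active
	virsh --connect qemu:///system dumpxml ha-856893-m03
	virsh --connect qemu:///system net-info default
	virsh --connect qemu:///system net-info mk-ha-856893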
	I0703 23:07:07.224727   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:c9:f0:2c in network default
	I0703 23:07:07.225301   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:07.225318   27242 main.go:141] libmachine: (ha-856893-m03) Ensuring networks are active...
	I0703 23:07:07.226041   27242 main.go:141] libmachine: (ha-856893-m03) Ensuring network default is active
	I0703 23:07:07.226394   27242 main.go:141] libmachine: (ha-856893-m03) Ensuring network mk-ha-856893 is active
	I0703 23:07:07.226752   27242 main.go:141] libmachine: (ha-856893-m03) Getting domain xml...
	I0703 23:07:07.227531   27242 main.go:141] libmachine: (ha-856893-m03) Creating domain...
	I0703 23:07:08.474940   27242 main.go:141] libmachine: (ha-856893-m03) Waiting to get IP...
	I0703 23:07:08.475929   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:08.476406   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:08.476429   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:08.476388   28071 retry.go:31] will retry after 297.28942ms: waiting for machine to come up
	I0703 23:07:08.775075   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:08.775657   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:08.775687   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:08.775611   28071 retry.go:31] will retry after 260.487003ms: waiting for machine to come up
	I0703 23:07:09.038093   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:09.038543   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:09.038570   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:09.038494   28071 retry.go:31] will retry after 356.550698ms: waiting for machine to come up
	I0703 23:07:09.396841   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:09.397258   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:09.397282   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:09.397203   28071 retry.go:31] will retry after 565.372677ms: waiting for machine to come up
	I0703 23:07:09.963728   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:09.964167   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:09.964188   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:09.964122   28071 retry.go:31] will retry after 573.536697ms: waiting for machine to come up
	I0703 23:07:10.539640   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:10.540032   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:10.540082   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:10.540012   28071 retry.go:31] will retry after 887.46227ms: waiting for machine to come up
	I0703 23:07:11.430282   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:11.430740   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:11.430768   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:11.430695   28071 retry.go:31] will retry after 941.491473ms: waiting for machine to come up
	I0703 23:07:12.373968   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:12.374294   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:12.374322   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:12.374269   28071 retry.go:31] will retry after 1.104133505s: waiting for machine to come up
	I0703 23:07:13.479543   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:13.480022   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:13.480050   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:13.479968   28071 retry.go:31] will retry after 1.21416202s: waiting for machine to come up
	I0703 23:07:14.696397   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:14.696937   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:14.696966   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:14.696888   28071 retry.go:31] will retry after 1.787823566s: waiting for machine to come up
	I0703 23:07:16.486978   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:16.487567   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:16.487594   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:16.487515   28071 retry.go:31] will retry after 2.71693532s: waiting for machine to come up
	I0703 23:07:19.206063   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:19.206532   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:19.206556   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:19.206496   28071 retry.go:31] will retry after 2.779815264s: waiting for machine to come up
	I0703 23:07:21.987373   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:21.987801   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:21.987822   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:21.987757   28071 retry.go:31] will retry after 4.466413602s: waiting for machine to come up
	I0703 23:07:26.457850   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:26.458259   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:26.458289   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:26.458211   28071 retry.go:31] will retry after 4.340225073s: waiting for machine to come up
	I0703 23:07:30.801191   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:30.801617   27242 main.go:141] libmachine: (ha-856893-m03) Found IP for machine: 192.168.39.186
	I0703 23:07:30.801638   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has current primary IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:30.801645   27242 main.go:141] libmachine: (ha-856893-m03) Reserving static IP address...
	I0703 23:07:30.801999   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find host DHCP lease matching {name: "ha-856893-m03", mac: "52:54:00:cb:e8:37", ip: "192.168.39.186"} in network mk-ha-856893
	I0703 23:07:30.882616   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Getting to WaitForSSH function...
	I0703 23:07:30.882638   27242 main.go:141] libmachine: (ha-856893-m03) Reserved static IP address: 192.168.39.186
	I0703 23:07:30.882649   27242 main.go:141] libmachine: (ha-856893-m03) Waiting for SSH to be available...
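(The retry loop above polls libvirt's DHCP leases for the VM's MAC address 52:54:00:cb:e8:37; the same lease table can be read by hand while the machine is coming up. Sketch only.)
	# show DHCP leases on the cluster network and the address libvirt reports for the domain
	virsh --connect qemu:///system net-dhcp-leases mk-ha-856893
	virsh --connect qemu:///system domifaddr ha-856893-m03 --source lease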
	I0703 23:07:30.885337   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:30.885691   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893
	I0703 23:07:30.885733   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find defined IP address of network mk-ha-856893 interface with MAC address 52:54:00:cb:e8:37
	I0703 23:07:30.885860   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Using SSH client type: external
	I0703 23:07:30.885892   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa (-rw-------)
	I0703 23:07:30.885924   27242 main.go:141] libmachine: (ha-856893-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 23:07:30.885938   27242 main.go:141] libmachine: (ha-856893-m03) DBG | About to run SSH command:
	I0703 23:07:30.885954   27242 main.go:141] libmachine: (ha-856893-m03) DBG | exit 0
	I0703 23:07:30.889872   27242 main.go:141] libmachine: (ha-856893-m03) DBG | SSH cmd err, output: exit status 255: 
	I0703 23:07:30.889897   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0703 23:07:30.889906   27242 main.go:141] libmachine: (ha-856893-m03) DBG | command : exit 0
	I0703 23:07:30.889912   27242 main.go:141] libmachine: (ha-856893-m03) DBG | err     : exit status 255
	I0703 23:07:30.889924   27242 main.go:141] libmachine: (ha-856893-m03) DBG | output  : 
	I0703 23:07:33.891677   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Getting to WaitForSSH function...
	I0703 23:07:33.894047   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:33.894452   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:33.894489   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:33.894620   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Using SSH client type: external
	I0703 23:07:33.894646   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa (-rw-------)
	I0703 23:07:33.894674   27242 main.go:141] libmachine: (ha-856893-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 23:07:33.894692   27242 main.go:141] libmachine: (ha-856893-m03) DBG | About to run SSH command:
	I0703 23:07:33.894713   27242 main.go:141] libmachine: (ha-856893-m03) DBG | exit 0
	I0703 23:07:34.020118   27242 main.go:141] libmachine: (ha-856893-m03) DBG | SSH cmd err, output: <nil>: 
	I0703 23:07:34.020375   27242 main.go:141] libmachine: (ha-856893-m03) KVM machine creation complete!
	I0703 23:07:34.020757   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetConfigRaw
	I0703 23:07:34.021289   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:34.021526   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:34.021689   27242 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0703 23:07:34.021707   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetState
	I0703 23:07:34.023123   27242 main.go:141] libmachine: Detecting operating system of created instance...
	I0703 23:07:34.023138   27242 main.go:141] libmachine: Waiting for SSH to be available...
	I0703 23:07:34.023143   27242 main.go:141] libmachine: Getting to WaitForSSH function...
	I0703 23:07:34.023149   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.025507   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.025894   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.025914   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.026099   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:34.026281   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.026437   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.026592   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:34.026726   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:07:34.026934   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0703 23:07:34.026944   27242 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0703 23:07:34.135745   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
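(This probe is the same external ssh invocation logged earlier; it can be repeated manually with the generated key if a node ever hangs at this stage. Key path and address are the ones shown in the log.)
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa \
	  docker@192.168.39.186 'exit 0' && echo "ssh reachable"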
	I0703 23:07:34.135768   27242 main.go:141] libmachine: Detecting the provisioner...
	I0703 23:07:34.135780   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.138736   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.139145   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.139180   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.139394   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:34.139768   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.139989   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.140173   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:34.140391   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:07:34.140627   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0703 23:07:34.140645   27242 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0703 23:07:34.252832   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0703 23:07:34.252930   27242 main.go:141] libmachine: found compatible host: buildroot
	I0703 23:07:34.252950   27242 main.go:141] libmachine: Provisioning with buildroot...
	I0703 23:07:34.252959   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetMachineName
	I0703 23:07:34.253225   27242 buildroot.go:166] provisioning hostname "ha-856893-m03"
	I0703 23:07:34.253251   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetMachineName
	I0703 23:07:34.253430   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.256044   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.256422   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.256449   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.256567   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:34.256736   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.256887   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.257011   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:34.257189   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:07:34.257390   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0703 23:07:34.257403   27242 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-856893-m03 && echo "ha-856893-m03" | sudo tee /etc/hostname
	I0703 23:07:34.378754   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-856893-m03
	
	I0703 23:07:34.378782   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.381654   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.381966   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.382002   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.382235   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:34.382443   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.382616   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.382798   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:34.382982   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:07:34.383164   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0703 23:07:34.383188   27242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-856893-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-856893-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-856893-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 23:07:34.499458   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
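(A quick way to confirm the hostname and /etc/hosts entry written above is to ssh into the node through minikube itself; per the minikube ssh documentation, -n selects a secondary node, here m03.)
	minikube -p ha-856893 ssh -n m03 -- "hostname && grep ha-856893-m03 /etc/hosts"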
	I0703 23:07:34.499488   27242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0703 23:07:34.499506   27242 buildroot.go:174] setting up certificates
	I0703 23:07:34.499514   27242 provision.go:84] configureAuth start
	I0703 23:07:34.499522   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetMachineName
	I0703 23:07:34.499784   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetIP
	I0703 23:07:34.503044   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.503446   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.503473   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.503688   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.506053   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.506402   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.506429   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.506591   27242 provision.go:143] copyHostCerts
	I0703 23:07:34.506619   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:07:34.506654   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0703 23:07:34.506666   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:07:34.506747   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0703 23:07:34.506861   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:07:34.506886   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0703 23:07:34.506891   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:07:34.506928   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0703 23:07:34.506984   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:07:34.507007   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0703 23:07:34.507016   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:07:34.507046   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0703 23:07:34.507111   27242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.ha-856893-m03 san=[127.0.0.1 192.168.39.186 ha-856893-m03 localhost minikube]
	I0703 23:07:34.691119   27242 provision.go:177] copyRemoteCerts
	I0703 23:07:34.691175   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:07:34.691195   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.693763   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.694102   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.694129   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.694311   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:34.694502   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.694665   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:34.694864   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa Username:docker}
	I0703 23:07:34.778514   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0703 23:07:34.778586   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0703 23:07:34.805663   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0703 23:07:34.805731   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0703 23:07:34.834448   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0703 23:07:34.834507   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
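(The server certificate generated a few lines up embeds the SANs listed in the log, i.e. 127.0.0.1, 192.168.39.186, ha-856893-m03, localhost and minikube; they can be confirmed with openssl against the path shown.)
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'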
	I0703 23:07:34.863423   27242 provision.go:87] duration metric: took 363.896644ms to configureAuth
	I0703 23:07:34.863450   27242 buildroot.go:189] setting minikube options for container-runtime
	I0703 23:07:34.863660   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:07:34.863743   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.866154   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.866486   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.866518   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.866663   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:34.866918   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.867093   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.867227   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:34.867371   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:07:34.867582   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0703 23:07:34.867596   27242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0703 23:07:35.163731   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
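(The %!s(MISSING) token in the command above is a Go formatting artifact in the log, not part of the command: what actually ran is a plain printf %s of the CRIO_MINIKUBE_OPTIONS block piped into /etc/sysconfig/crio.minikube, followed by a crio restart. The drop-in can be verified on the node afterwards; sketch only.)
	minikube -p ha-856893 ssh -n m03 -- "cat /etc/sysconfig/crio.minikube && sudo systemctl is-active crio"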
	I0703 23:07:35.163761   27242 main.go:141] libmachine: Checking connection to Docker...
	I0703 23:07:35.163770   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetURL
	I0703 23:07:35.165134   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Using libvirt version 6000000
	I0703 23:07:35.167475   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.167858   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.167903   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.168131   27242 main.go:141] libmachine: Docker is up and running!
	I0703 23:07:35.168152   27242 main.go:141] libmachine: Reticulating splines...
	I0703 23:07:35.168160   27242 client.go:171] duration metric: took 28.327898073s to LocalClient.Create
	I0703 23:07:35.168185   27242 start.go:167] duration metric: took 28.327960056s to libmachine.API.Create "ha-856893"
	I0703 23:07:35.168196   27242 start.go:293] postStartSetup for "ha-856893-m03" (driver="kvm2")
	I0703 23:07:35.168208   27242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 23:07:35.168229   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:35.168465   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 23:07:35.168488   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:35.170847   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.171220   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.171254   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.171456   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:35.171671   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:35.171851   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:35.172018   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa Username:docker}
	I0703 23:07:35.255274   27242 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 23:07:35.260351   27242 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 23:07:35.260377   27242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0703 23:07:35.260467   27242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0703 23:07:35.260568   27242 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0703 23:07:35.260583   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /etc/ssl/certs/165742.pem
	I0703 23:07:35.260687   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0703 23:07:35.272083   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:07:35.299979   27242 start.go:296] duration metric: took 131.767901ms for postStartSetup
	I0703 23:07:35.300032   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetConfigRaw
	I0703 23:07:35.300664   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetIP
	I0703 23:07:35.303344   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.303779   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.303810   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.304247   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:07:35.304465   27242 start.go:128] duration metric: took 28.482160498s to createHost
	I0703 23:07:35.304487   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:35.307047   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.307392   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.307420   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.307576   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:35.307798   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:35.308015   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:35.308182   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:35.308380   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:07:35.308593   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0703 23:07:35.308607   27242 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0703 23:07:35.420983   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720048055.401183800
	
	I0703 23:07:35.421004   27242 fix.go:216] guest clock: 1720048055.401183800
	I0703 23:07:35.421014   27242 fix.go:229] Guest: 2024-07-03 23:07:35.4011838 +0000 UTC Remote: 2024-07-03 23:07:35.304476938 +0000 UTC m=+166.034732868 (delta=96.706862ms)
	I0703 23:07:35.421033   27242 fix.go:200] guest clock delta is within tolerance: 96.706862ms
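The "date +%s.%N" round trip above is how the provisioner estimates guest clock drift: it parses the guest's timestamp and compares it against the reference timestamp taken on the controller, accepting the machine if the delta stays inside a tolerance. A minimal stand-alone sketch of that comparison follows; the one-second tolerance is an assumption for illustration, not minikube's exact threshold, and the two timestamps are the ones from the log lines above.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the output of `date +%s.%N` run on the guest and returns
// how far the guest clock is from the reference clock. Float parsing loses a
// few hundred nanoseconds of precision, which is irrelevant at this scale.
func clockDelta(remoteOut string, reference time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(remoteOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(reference), nil
}

func main() {
	// Timestamps taken from the log lines above.
	delta, err := clockDelta("1720048055.401183800", time.Unix(1720048055, 304476938))
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed tolerance for this sketch
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}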
	I0703 23:07:35.421039   27242 start.go:83] releasing machines lock for "ha-856893-m03", held for 28.598837371s
	I0703 23:07:35.421065   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:35.421372   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetIP
	I0703 23:07:35.424018   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.424405   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.424434   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.426624   27242 out.go:177] * Found network options:
	I0703 23:07:35.427853   27242 out.go:177]   - NO_PROXY=192.168.39.172,192.168.39.157
	W0703 23:07:35.428985   27242 proxy.go:119] fail to check proxy env: Error ip not in block
	W0703 23:07:35.429002   27242 proxy.go:119] fail to check proxy env: Error ip not in block
	I0703 23:07:35.429017   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:35.429617   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:35.429822   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:35.429928   27242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 23:07:35.429966   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	W0703 23:07:35.429991   27242 proxy.go:119] fail to check proxy env: Error ip not in block
	W0703 23:07:35.430012   27242 proxy.go:119] fail to check proxy env: Error ip not in block
	I0703 23:07:35.430073   27242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0703 23:07:35.430097   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:35.433231   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.433256   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.433599   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.433639   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.433688   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.433738   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.433819   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:35.433836   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:35.434034   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:35.434104   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:35.434184   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:35.434316   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:35.434344   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa Username:docker}
	I0703 23:07:35.434511   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa Username:docker}
	I0703 23:07:35.677657   27242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0703 23:07:35.684280   27242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 23:07:35.684340   27242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 23:07:35.700677   27242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0703 23:07:35.700696   27242 start.go:494] detecting cgroup driver to use...
	I0703 23:07:35.700755   27242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 23:07:35.716908   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 23:07:35.731925   27242 docker.go:217] disabling cri-docker service (if available) ...
	I0703 23:07:35.731993   27242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0703 23:07:35.747595   27242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0703 23:07:35.763296   27242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0703 23:07:35.878408   27242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0703 23:07:36.053007   27242 docker.go:233] disabling docker service ...
	I0703 23:07:36.053096   27242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0703 23:07:36.069537   27242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0703 23:07:36.084154   27242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0703 23:07:36.219803   27242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0703 23:07:36.349909   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0703 23:07:36.365327   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 23:07:36.386397   27242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0703 23:07:36.386449   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.398525   27242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0703 23:07:36.398584   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.410492   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.422111   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.433451   27242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 23:07:36.445276   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.456898   27242 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.477619   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
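The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, force conmon into the "pod" cgroup, and open unprivileged port 0 through default_sysctls. A hypothetical Go equivalent of the first two substitutions is shown below, purely as a sketch; minikube itself shells out to sed exactly as the log shows.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// patchCrioConf applies the same kind of whole-line rewrites the sed calls
// above perform on the CRI-O drop-in config.
func patchCrioConf(conf []byte) []byte {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))

	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
	return conf
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	in, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := os.WriteFile(path, patchCrioConf(in), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}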
	I0703 23:07:36.489825   27242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 23:07:36.501128   27242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0703 23:07:36.501191   27242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0703 23:07:36.516569   27242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0703 23:07:36.527341   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:07:36.659461   27242 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0703 23:07:36.809855   27242 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0703 23:07:36.809927   27242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0703 23:07:36.815110   27242 start.go:562] Will wait 60s for crictl version
	I0703 23:07:36.815186   27242 ssh_runner.go:195] Run: which crictl
	I0703 23:07:36.819348   27242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 23:07:36.866612   27242 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0703 23:07:36.866700   27242 ssh_runner.go:195] Run: crio --version
	I0703 23:07:36.896618   27242 ssh_runner.go:195] Run: crio --version
	I0703 23:07:36.932621   27242 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0703 23:07:36.933935   27242 out.go:177]   - env NO_PROXY=192.168.39.172
	I0703 23:07:36.935273   27242 out.go:177]   - env NO_PROXY=192.168.39.172,192.168.39.157
	I0703 23:07:36.936545   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetIP
	I0703 23:07:36.939214   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:36.939560   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:36.939587   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:36.939811   27242 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0703 23:07:36.944619   27242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
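The bash one-liner above is an idempotent "upsert" of the host.minikube.internal entry: strip any existing line ending in that hostname, append a fresh mapping, and copy the result back over /etc/hosts. The same logic written out as a small Go helper (a sketch; the file path, IP and hostname mirror the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any existing line for the given hostname and appends a
// fresh "ip<TAB>hostname" entry, mirroring the grep/echo pipeline above.
func upsertHost(hostsFile, ip, hostname string) error {
	data, err := os.ReadFile(hostsFile)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}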
	I0703 23:07:36.957968   27242 mustload.go:65] Loading cluster: ha-856893
	I0703 23:07:36.958224   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:07:36.958474   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:07:36.958515   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:07:36.973765   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46031
	I0703 23:07:36.974194   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:07:36.974697   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:07:36.974717   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:07:36.975026   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:07:36.975263   27242 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:07:36.976873   27242 host.go:66] Checking if "ha-856893" exists ...
	I0703 23:07:36.977188   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:07:36.977223   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:07:36.992987   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
	I0703 23:07:36.993384   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:07:36.993860   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:07:36.993887   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:07:36.994194   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:07:36.994378   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:07:36.994557   27242 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893 for IP: 192.168.39.186
	I0703 23:07:36.994567   27242 certs.go:194] generating shared ca certs ...
	I0703 23:07:36.994580   27242 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:07:36.994707   27242 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0703 23:07:36.994743   27242 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0703 23:07:36.994752   27242 certs.go:256] generating profile certs ...
	I0703 23:07:36.994817   27242 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key
	I0703 23:07:36.994840   27242 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.65668228
	I0703 23:07:36.994854   27242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.65668228 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.172 192.168.39.157 192.168.39.186 192.168.39.254]
	I0703 23:07:37.337183   27242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.65668228 ...
	I0703 23:07:37.337219   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.65668228: {Name:mk67b34580ae56e313e039e356b49a596df2616e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:07:37.337409   27242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.65668228 ...
	I0703 23:07:37.337428   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.65668228: {Name:mk926f699ebfb8cd1cc65b70f9375a71b834773b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:07:37.337526   27242 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.65668228 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt
	I0703 23:07:37.337675   27242 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.65668228 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key
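The profile certificate generated here is an API-server serving cert whose SAN list covers the cluster service IP (10.96.0.1), localhost, all three control-plane node IPs and the kube-vip VIP (192.168.39.254), signed by the shared minikubeCA. A compressed illustration of that kind of issuance with crypto/x509 follows; it is a self-contained sketch that creates its own throwaway CA instead of loading minikube's ca.key, and the subject common names are assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA (key-generation errors omitted for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// API-server serving cert; the SAN IPs are the ones listed in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.172"), net.ParseIP("192.168.39.157"),
			net.ParseIP("192.168.39.186"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}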
	I0703 23:07:37.337825   27242 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key
	I0703 23:07:37.337842   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0703 23:07:37.337858   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0703 23:07:37.337874   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0703 23:07:37.337893   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0703 23:07:37.337911   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0703 23:07:37.337929   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0703 23:07:37.337945   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0703 23:07:37.337962   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0703 23:07:37.338026   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0703 23:07:37.338066   27242 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0703 23:07:37.338079   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0703 23:07:37.338112   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0703 23:07:37.338144   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0703 23:07:37.338183   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0703 23:07:37.338236   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:07:37.338272   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem -> /usr/share/ca-certificates/16574.pem
	I0703 23:07:37.338293   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /usr/share/ca-certificates/165742.pem
	I0703 23:07:37.338311   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:07:37.338353   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:07:37.341309   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:07:37.341713   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:07:37.341753   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:07:37.341942   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:07:37.342152   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:07:37.342311   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:07:37.342478   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:07:37.416222   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0703 23:07:37.421398   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0703 23:07:37.433219   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0703 23:07:37.438229   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0703 23:07:37.450051   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0703 23:07:37.454475   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0703 23:07:37.465922   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0703 23:07:37.470453   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0703 23:07:37.482305   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0703 23:07:37.486680   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0703 23:07:37.498268   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0703 23:07:37.503288   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0703 23:07:37.515695   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 23:07:37.543420   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 23:07:37.571775   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 23:07:37.601487   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0703 23:07:37.630721   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0703 23:07:37.665301   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0703 23:07:37.692166   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 23:07:37.719787   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0703 23:07:37.751460   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0703 23:07:37.778803   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0703 23:07:37.805997   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 23:07:37.832086   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0703 23:07:37.850763   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0703 23:07:37.869670   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0703 23:07:37.888584   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0703 23:07:37.906796   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0703 23:07:37.924790   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0703 23:07:37.943082   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0703 23:07:37.963450   27242 ssh_runner.go:195] Run: openssl version
	I0703 23:07:37.970013   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0703 23:07:37.981740   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0703 23:07:37.986778   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0703 23:07:37.986831   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0703 23:07:37.993242   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0703 23:07:38.004656   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 23:07:38.016695   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:07:38.021674   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:07:38.021728   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:07:38.027634   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0703 23:07:38.039118   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0703 23:07:38.050655   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0703 23:07:38.055464   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0703 23:07:38.055548   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0703 23:07:38.061625   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
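Each CA file copied under /usr/share/ca-certificates also gets a <subject-hash>.0 symlink in /etc/ssl/certs so OpenSSL-based clients can resolve it; the hash value comes from "openssl x509 -hash -noout", as the commands above show. A small Go wrapper around the same idea (a simplified sketch that shells out to openssl and links straight to the source file):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of certPath and creates the
// /etc/ssl/certs/<hash>.0 symlink if it is not already present.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	for _, c := range []string{
		"/usr/share/ca-certificates/16574.pem",
		"/usr/share/ca-certificates/165742.pem",
		"/usr/share/ca-certificates/minikubeCA.pem",
	} {
		if err := linkCACert(c); err != nil {
			fmt.Fprintln(os.Stderr, c, err)
		}
	}
}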
	I0703 23:07:38.073265   27242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 23:07:38.078693   27242 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0703 23:07:38.078753   27242 kubeadm.go:928] updating node {m03 192.168.39.186 8443 v1.30.2 crio true true} ...
	I0703 23:07:38.078862   27242 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-856893-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0703 23:07:38.078895   27242 kube-vip.go:115] generating kube-vip config ...
	I0703 23:07:38.078937   27242 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0703 23:07:38.096141   27242 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0703 23:07:38.096245   27242 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
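The manifest above is rendered from a template by minikube's kube-vip generator (kube-vip.go in the log): the VIP (192.168.39.254), interface (eth0) and API-server port (8443) are injected as container env vars, and control-plane load-balancing (lb_enable) was switched on automatically. A hypothetical text/template rendering of just the variable env-var block, not minikube's real template:

package main

import (
	"os"
	"text/template"
)

// Only the env-var values vary between clusters; the rest of the static pod
// is boilerplate, so this sketch templates just that part.
const envTmpl = `    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
    - name: port
      value: "{{ .Port }}"
    - name: lb_enable
      value: "{{ .LoadBalance }}"
`

type vipParams struct {
	Interface   string
	VIP         string
	Port        int
	LoadBalance bool
}

func main() {
	t := template.Must(template.New("kube-vip-env").Parse(envTmpl))
	// Values taken from the generated manifest above.
	_ = t.Execute(os.Stdout, vipParams{Interface: "eth0", VIP: "192.168.39.254", Port: 8443, LoadBalance: true})
}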
	I0703 23:07:38.096299   27242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0703 23:07:38.107262   27242 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0703 23:07:38.107316   27242 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0703 23:07:38.118852   27242 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0703 23:07:38.118915   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:07:38.118922   27242 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0703 23:07:38.118857   27242 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0703 23:07:38.118960   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0703 23:07:38.119033   27242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0703 23:07:38.118941   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0703 23:07:38.119135   27242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0703 23:07:38.137934   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0703 23:07:38.137967   27242 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0703 23:07:38.137996   27242 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0703 23:07:38.137999   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0703 23:07:38.138014   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0703 23:07:38.138057   27242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0703 23:07:38.149338   27242 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0703 23:07:38.149380   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
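Because /var/lib/minikube/binaries/v1.30.2 did not exist on the new node, the kubeadm, kubectl and kubelet binaries are taken from the local cache and scp'd over; the "Not caching binary" lines note that an uncached binary would instead be fetched from dl.k8s.io and verified against the published .sha256. A stripped-down sketch of that download-and-verify step (the URLs are the ones in the log; the local target path is an assumption):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url into dst and checks it against the SHA-256
// published at url+".sha256", the same checksum source the log references.
func fetchVerified(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}

	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
	}
	return nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/"
	for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
		if err := fetchVerified(base+bin, "/tmp/"+bin); err != nil {
			log.Fatal(err)
		}
	}
}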
	I0703 23:07:39.190629   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0703 23:07:39.200854   27242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0703 23:07:39.219472   27242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 23:07:39.238369   27242 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0703 23:07:39.256931   27242 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0703 23:07:39.261281   27242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:07:39.275182   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:07:39.397746   27242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:07:39.415272   27242 host.go:66] Checking if "ha-856893" exists ...
	I0703 23:07:39.415637   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:07:39.415672   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:07:39.432698   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42803
	I0703 23:07:39.433090   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:07:39.433538   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:07:39.433562   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:07:39.433859   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:07:39.434046   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:07:39.434186   27242 start.go:316] joinCluster: &{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:07:39.434327   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0703 23:07:39.434341   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:07:39.437296   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:07:39.437726   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:07:39.437760   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:07:39.437962   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:07:39.438140   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:07:39.438348   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:07:39.438503   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:07:39.593405   27242 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:07:39.593461   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bwnzkl.tqjqj6bgpj1edijr --discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-856893-m03 --control-plane --apiserver-advertise-address=192.168.39.186 --apiserver-bind-port=8443"
	I0703 23:08:02.813599   27242 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bwnzkl.tqjqj6bgpj1edijr --discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-856893-m03 --control-plane --apiserver-advertise-address=192.168.39.186 --apiserver-bind-port=8443": (23.220101132s)
	I0703 23:08:02.813663   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0703 23:08:03.385422   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-856893-m03 minikube.k8s.io/updated_at=2024_07_03T23_08_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e minikube.k8s.io/name=ha-856893 minikube.k8s.io/primary=false
	I0703 23:08:03.515792   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-856893-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0703 23:08:03.619588   27242 start.go:318] duration metric: took 24.185396632s to joinCluster
	I0703 23:08:03.619710   27242 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:08:03.620031   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:08:03.621348   27242 out.go:177] * Verifying Kubernetes components...
	I0703 23:08:03.622685   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:08:03.881282   27242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:08:03.907961   27242 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:08:03.908243   27242 kapi.go:59] client config for ha-856893: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.crt", KeyFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key", CAFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfd900), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0703 23:08:03.908323   27242 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.172:8443
	I0703 23:08:03.908583   27242 node_ready.go:35] waiting up to 6m0s for node "ha-856893-m03" to be "Ready" ...
	I0703 23:08:03.908688   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:03.908697   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:03.908707   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:03.908713   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:03.912712   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:04.408879   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:04.408907   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:04.408919   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:04.408925   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:04.414154   27242 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0703 23:08:04.909645   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:04.909672   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:04.909683   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:04.909689   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:04.914163   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:05.409099   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:05.409119   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:05.409127   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:05.409131   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:05.413290   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:05.908819   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:05.908842   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:05.908849   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:05.908853   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:05.913655   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:05.914382   27242 node_ready.go:53] node "ha-856893-m03" has status "Ready":"False"
	I0703 23:08:06.409134   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:06.409160   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:06.409170   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:06.409175   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:06.412666   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:06.909606   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:06.909627   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:06.909637   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:06.909645   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:06.913376   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:07.409370   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:07.409394   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:07.409408   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:07.409414   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:07.416499   27242 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0703 23:08:07.909141   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:07.909171   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:07.909181   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:07.909186   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:07.914036   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:07.914974   27242 node_ready.go:53] node "ha-856893-m03" has status "Ready":"False"
	I0703 23:08:08.409386   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:08.409412   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:08.409423   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:08.409441   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:08.413022   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:08.909609   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:08.909634   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:08.909646   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:08.909651   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:08.913449   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:09.409635   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:09.409658   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:09.409669   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:09.409675   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:09.413889   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:09.909448   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:09.909468   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:09.909477   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:09.909482   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:09.913589   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:10.409105   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:10.409125   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.409134   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.409139   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.412940   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:10.413603   27242 node_ready.go:53] node "ha-856893-m03" has status "Ready":"False"
	I0703 23:08:10.909037   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:10.909064   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.909075   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.909081   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.916194   27242 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0703 23:08:10.916783   27242 node_ready.go:49] node "ha-856893-m03" has status "Ready":"True"
	I0703 23:08:10.916802   27242 node_ready.go:38] duration metric: took 7.008205065s for node "ha-856893-m03" to be "Ready" ...
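The block of GET /api/v1/nodes/ha-856893-m03 requests above is a plain poll loop: fetch the Node roughly every 500ms and stop once its Ready condition reports True, which took about seven seconds here. The equivalent check written against client-go is sketched below, assuming the kubeconfig path the log says it loaded rather than minikube's in-memory rest.Config.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18998-9396/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same overall budget as the log's "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-856893-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}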
	I0703 23:08:10.916818   27242 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 23:08:10.916888   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:08:10.916897   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.916904   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.916912   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.923686   27242 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0703 23:08:10.929901   27242 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n5tdf" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.930006   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-n5tdf
	I0703 23:08:10.930018   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.930028   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.930034   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.933138   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:10.933987   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:10.934003   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.934020   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.934026   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.937163   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:10.937765   27242 pod_ready.go:92] pod "coredns-7db6d8ff4d-n5tdf" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:10.937784   27242 pod_ready.go:81] duration metric: took 7.857453ms for pod "coredns-7db6d8ff4d-n5tdf" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.937795   27242 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pwqfl" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.937850   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-pwqfl
	I0703 23:08:10.937858   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.937865   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.937872   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.940806   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:10.941415   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:10.941431   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.941441   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.941446   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.944345   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:10.944919   27242 pod_ready.go:92] pod "coredns-7db6d8ff4d-pwqfl" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:10.944938   27242 pod_ready.go:81] duration metric: took 7.136212ms for pod "coredns-7db6d8ff4d-pwqfl" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.944947   27242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.944993   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893
	I0703 23:08:10.945001   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.945008   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.945011   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.947818   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:10.948517   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:10.948534   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.948544   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.948552   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.951211   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:10.951848   27242 pod_ready.go:92] pod "etcd-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:10.951863   27242 pod_ready.go:81] duration metric: took 6.910613ms for pod "etcd-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.951888   27242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.951954   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:08:10.951965   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.951974   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.951980   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.954591   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:10.955176   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:10.955193   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.955202   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.955208   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.957501   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:10.958008   27242 pod_ready.go:92] pod "etcd-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:10.958025   27242 pod_ready.go:81] duration metric: took 6.129203ms for pod "etcd-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.958033   27242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:11.109948   27242 request.go:629] Waited for 151.854764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:11.110037   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:11.110047   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:11.110057   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:11.110067   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:11.115838   27242 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0703 23:08:11.309816   27242 request.go:629] Waited for 193.188796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:11.309873   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:11.309878   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:11.309886   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:11.309892   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:11.313593   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:11.509365   27242 request.go:629] Waited for 50.202967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:11.509465   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:11.509477   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:11.509489   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:11.509500   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:11.514572   27242 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0703 23:08:11.709248   27242 request.go:629] Waited for 193.32848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:11.709299   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:11.709304   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:11.709325   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:11.709333   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:11.713036   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:11.959125   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:11.959147   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:11.959155   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:11.959160   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:11.963102   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:12.109001   27242 request.go:629] Waited for 144.798659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:12.109057   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:12.109062   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:12.109071   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:12.109077   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:12.112847   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:12.458780   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:12.458804   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:12.458816   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:12.458822   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:12.462522   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:12.509515   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:12.509539   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:12.509550   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:12.509556   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:12.513776   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:12.958862   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:12.958884   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:12.958892   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:12.958896   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:12.963076   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:12.964032   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:12.964055   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:12.964066   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:12.964072   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:12.967555   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:12.968207   27242 pod_ready.go:102] pod "etcd-ha-856893-m03" in "kube-system" namespace has status "Ready":"False"
	I0703 23:08:13.458279   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:13.458306   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:13.458322   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:13.458327   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:13.461824   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:13.462472   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:13.462489   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:13.462497   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:13.462506   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:13.465331   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:13.958289   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:13.958310   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:13.958318   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:13.958324   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:13.962681   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:13.963320   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:13.963333   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:13.963340   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:13.963344   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:13.966600   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:14.458259   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:14.458282   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:14.458290   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:14.458293   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:14.462012   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:14.462555   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:14.462570   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:14.462577   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:14.462581   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:14.465499   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:14.959177   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:14.959199   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:14.959207   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:14.959212   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:14.962396   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:14.963280   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:14.963296   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:14.963304   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:14.963309   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:14.966765   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:15.459098   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:15.459127   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:15.459137   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:15.459142   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:15.462880   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:15.463536   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:15.463554   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:15.463565   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:15.463573   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:15.466897   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:15.467438   27242 pod_ready.go:102] pod "etcd-ha-856893-m03" in "kube-system" namespace has status "Ready":"False"
	I0703 23:08:15.958824   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:15.958850   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:15.958862   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:15.958870   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:15.964122   27242 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0703 23:08:15.964870   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:15.964888   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:15.964896   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:15.964900   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:15.967828   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:16.459240   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:16.459265   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.459275   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.459283   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.462430   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:16.463285   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:16.463301   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.463308   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.463312   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.466431   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:16.467055   27242 pod_ready.go:92] pod "etcd-ha-856893-m03" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:16.467074   27242 pod_ready.go:81] duration metric: took 5.509032519s for pod "etcd-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.467090   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.467139   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893
	I0703 23:08:16.467147   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.467154   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.467159   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.470113   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:16.470753   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:16.470768   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.470775   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.470781   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.479436   27242 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0703 23:08:16.479957   27242 pod_ready.go:92] pod "kube-apiserver-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:16.479976   27242 pod_ready.go:81] duration metric: took 12.880584ms for pod "kube-apiserver-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.479986   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.480043   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:08:16.480051   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.480058   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.480068   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.483359   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:16.509453   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:16.509489   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.509499   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.509506   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.514051   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:16.514499   27242 pod_ready.go:92] pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:16.514518   27242 pod_ready.go:81] duration metric: took 34.526271ms for pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.514527   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.709759   27242 request.go:629] Waited for 195.170406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m03
	I0703 23:08:16.709834   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m03
	I0703 23:08:16.709841   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.709851   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.709858   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.714113   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:16.909343   27242 request.go:629] Waited for 194.383103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:16.909408   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:16.909416   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.909426   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.909432   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.912650   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:16.913346   27242 pod_ready.go:92] pod "kube-apiserver-ha-856893-m03" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:16.913369   27242 pod_ready.go:81] duration metric: took 398.834831ms for pod "kube-apiserver-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.913384   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:17.109258   27242 request.go:629] Waited for 195.812463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893
	I0703 23:08:17.109335   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893
	I0703 23:08:17.109344   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:17.109351   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:17.109360   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:17.113410   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:17.309479   27242 request.go:629] Waited for 195.262429ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:17.309542   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:17.309551   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:17.309559   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:17.309563   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:17.313791   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:17.314385   27242 pod_ready.go:92] pod "kube-controller-manager-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:17.314404   27242 pod_ready.go:81] duration metric: took 401.012331ms for pod "kube-controller-manager-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:17.314414   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:17.509531   27242 request.go:629] Waited for 195.056137ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:08:17.509605   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:08:17.509611   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:17.509620   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:17.509625   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:17.513357   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:17.709477   27242 request.go:629] Waited for 195.370636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:17.709535   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:17.709542   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:17.709553   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:17.709564   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:17.713345   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:17.713850   27242 pod_ready.go:92] pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:17.713874   27242 pod_ready.go:81] duration metric: took 399.45315ms for pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:17.713889   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:17.909947   27242 request.go:629] Waited for 195.968544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:17.910018   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:17.910023   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:17.910030   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:17.910037   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:17.913897   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:18.109846   27242 request.go:629] Waited for 195.376393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:18.109896   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:18.109901   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:18.109910   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:18.109916   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:18.113762   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:18.309532   27242 request.go:629] Waited for 95.294007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:18.309604   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:18.309616   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:18.309631   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:18.309641   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:18.313751   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:18.509885   27242 request.go:629] Waited for 195.399896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:18.509978   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:18.509991   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:18.510000   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:18.510009   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:18.514418   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:18.714234   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:18.714255   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:18.714263   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:18.714266   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:18.717923   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:18.909739   27242 request.go:629] Waited for 191.248143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:18.909790   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:18.909795   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:18.909801   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:18.909804   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:18.916518   27242 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0703 23:08:19.214106   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:19.214126   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:19.214134   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:19.214139   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:19.217700   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:19.309750   27242 request.go:629] Waited for 91.33378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:19.309811   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:19.309818   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:19.309827   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:19.309832   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:19.314568   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:19.714371   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:19.714395   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:19.714403   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:19.714407   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:19.717735   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:19.718452   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:19.718468   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:19.718475   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:19.718480   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:19.722349   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:19.722906   27242 pod_ready.go:92] pod "kube-controller-manager-ha-856893-m03" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:19.722923   27242 pod_ready.go:81] duration metric: took 2.009027669s for pod "kube-controller-manager-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:19.722933   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-52zqj" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:19.909367   27242 request.go:629] Waited for 186.370383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52zqj
	I0703 23:08:19.909459   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52zqj
	I0703 23:08:19.909471   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:19.909482   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:19.909487   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:19.913236   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:20.109762   27242 request.go:629] Waited for 195.344765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:20.109853   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:20.109861   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:20.109872   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:20.109883   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:20.114021   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:20.114608   27242 pod_ready.go:92] pod "kube-proxy-52zqj" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:20.114627   27242 pod_ready.go:81] duration metric: took 391.688117ms for pod "kube-proxy-52zqj" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:20.114636   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gkwrn" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:20.309372   27242 request.go:629] Waited for 194.665348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkwrn
	I0703 23:08:20.309436   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkwrn
	I0703 23:08:20.309446   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:20.309454   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:20.309462   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:20.313429   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:20.509612   27242 request.go:629] Waited for 195.389962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:20.509670   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:20.509676   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:20.509683   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:20.509687   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:20.513278   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:20.513970   27242 pod_ready.go:92] pod "kube-proxy-gkwrn" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:20.513988   27242 pod_ready.go:81] duration metric: took 399.344201ms for pod "kube-proxy-gkwrn" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:20.514002   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-stq26" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:20.710051   27242 request.go:629] Waited for 195.979482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-stq26
	I0703 23:08:20.710148   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-stq26
	I0703 23:08:20.710158   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:20.710166   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:20.710170   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:20.714583   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:20.909948   27242 request.go:629] Waited for 194.287257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:20.910006   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:20.910011   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:20.910018   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:20.910023   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:20.913833   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:20.914294   27242 pod_ready.go:92] pod "kube-proxy-stq26" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:20.914312   27242 pod_ready.go:81] duration metric: took 400.304119ms for pod "kube-proxy-stq26" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:20.914322   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:21.109389   27242 request.go:629] Waited for 194.990561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893
	I0703 23:08:21.109459   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893
	I0703 23:08:21.109469   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:21.109482   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:21.109488   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:21.114937   27242 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0703 23:08:21.309870   27242 request.go:629] Waited for 194.409083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:21.309938   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:21.309944   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:21.309951   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:21.309956   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:21.314789   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:21.315856   27242 pod_ready.go:92] pod "kube-scheduler-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:21.315905   27242 pod_ready.go:81] duration metric: took 401.575237ms for pod "kube-scheduler-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:21.315918   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:21.509959   27242 request.go:629] Waited for 193.98282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893-m02
	I0703 23:08:21.510017   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893-m02
	I0703 23:08:21.510023   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:21.510033   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:21.510039   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:21.513857   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:21.709794   27242 request.go:629] Waited for 195.374395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:21.709856   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:21.709863   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:21.709888   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:21.709893   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:21.713692   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:21.714469   27242 pod_ready.go:92] pod "kube-scheduler-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:21.714501   27242 pod_ready.go:81] duration metric: took 398.575885ms for pod "kube-scheduler-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:21.714514   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:21.909971   27242 request.go:629] Waited for 195.381878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893-m03
	I0703 23:08:21.910060   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893-m03
	I0703 23:08:21.910068   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:21.910078   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:21.910085   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:21.914034   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:22.109540   27242 request.go:629] Waited for 194.902506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:22.109621   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:22.109629   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:22.109638   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:22.109644   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:22.113703   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:22.114348   27242 pod_ready.go:92] pod "kube-scheduler-ha-856893-m03" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:22.114368   27242 pod_ready.go:81] duration metric: took 399.84796ms for pod "kube-scheduler-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:22.114380   27242 pod_ready.go:38] duration metric: took 11.197545891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 23:08:22.114405   27242 api_server.go:52] waiting for apiserver process to appear ...
	I0703 23:08:22.114465   27242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:08:22.132505   27242 api_server.go:72] duration metric: took 18.512751964s to wait for apiserver process to appear ...
	I0703 23:08:22.132533   27242 api_server.go:88] waiting for apiserver healthz status ...
	I0703 23:08:22.132561   27242 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0703 23:08:22.137340   27242 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I0703 23:08:22.137434   27242 round_trippers.go:463] GET https://192.168.39.172:8443/version
	I0703 23:08:22.137445   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:22.137453   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:22.137457   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:22.138593   27242 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0703 23:08:22.138733   27242 api_server.go:141] control plane version: v1.30.2
	I0703 23:08:22.138758   27242 api_server.go:131] duration metric: took 6.217378ms to wait for apiserver health ...
	I0703 23:08:22.138774   27242 system_pods.go:43] waiting for kube-system pods to appear ...
	I0703 23:08:22.309132   27242 request.go:629] Waited for 170.284558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:08:22.309188   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:08:22.309193   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:22.309200   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:22.309204   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:22.317229   27242 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0703 23:08:22.325849   27242 system_pods.go:59] 24 kube-system pods found
	I0703 23:08:22.325890   27242 system_pods.go:61] "coredns-7db6d8ff4d-n5tdf" [8efbbc3c-e2d5-4f13-8672-cf7524f72e2d] Running
	I0703 23:08:22.325895   27242 system_pods.go:61] "coredns-7db6d8ff4d-pwqfl" [b4d22edf-e718-4755-b211-c8279481005e] Running
	I0703 23:08:22.325899   27242 system_pods.go:61] "etcd-ha-856893" [6b7ae8d7-b953-4a2a-8745-d672d5ef800d] Running
	I0703 23:08:22.325902   27242 system_pods.go:61] "etcd-ha-856893-m02" [c84132ba-b9c8-491c-a50d-ebe97039cb4a] Running
	I0703 23:08:22.325906   27242 system_pods.go:61] "etcd-ha-856893-m03" [5fb85989-093c-4239-a17e-761ac8c2f88c] Running
	I0703 23:08:22.325909   27242 system_pods.go:61] "kindnet-h7ntk" [18e6d992-2713-4399-a160-5f9196981f26] Running
	I0703 23:08:22.325912   27242 system_pods.go:61] "kindnet-rwqsq" [438780e1-32f3-4b9f-a85e-b6a9eeac268f] Running
	I0703 23:08:22.325914   27242 system_pods.go:61] "kindnet-vtd2b" [08f88183-a2c6-48b4-a14e-1c70ed08407a] Running
	I0703 23:08:22.325917   27242 system_pods.go:61] "kube-apiserver-ha-856893" [1818612d-9082-49ba-863e-25d530ad2893] Running
	I0703 23:08:22.325920   27242 system_pods.go:61] "kube-apiserver-ha-856893-m02" [09914e18-0b79-4722-9b14-a4b70d5ad800] Running
	I0703 23:08:22.325924   27242 system_pods.go:61] "kube-apiserver-ha-856893-m03" [d5ffdc07-8246-4c1b-848b-d103b69c96af] Running
	I0703 23:08:22.325927   27242 system_pods.go:61] "kube-controller-manager-ha-856893" [4906346a-6b42-4c32-9ea0-8cd61b06580b] Running
	I0703 23:08:22.325930   27242 system_pods.go:61] "kube-controller-manager-ha-856893-m02" [e90cd411-c185-4652-8e21-6fd48fb2b5d1] Running
	I0703 23:08:22.325933   27242 system_pods.go:61] "kube-controller-manager-ha-856893-m03" [71730b30-6db1-4376-931d-adb83ec87278] Running
	I0703 23:08:22.325936   27242 system_pods.go:61] "kube-proxy-52zqj" [7cbc16d2-e9f6-487f-a974-0fa21e4163b5] Running
	I0703 23:08:22.325940   27242 system_pods.go:61] "kube-proxy-gkwrn" [5fefb775-6224-47be-ad73-443c957c8f69] Running
	I0703 23:08:22.325943   27242 system_pods.go:61] "kube-proxy-stq26" [55db1583-2020-4a52-ab80-2f92ab63463b] Running
	I0703 23:08:22.325946   27242 system_pods.go:61] "kube-scheduler-ha-856893" [a3f43eb4-e248-41a0-b86c-0564becadc2b] Running
	I0703 23:08:22.325949   27242 system_pods.go:61] "kube-scheduler-ha-856893-m02" [02f862e9-2b93-4f74-847f-32d753dd9456] Running
	I0703 23:08:22.325952   27242 system_pods.go:61] "kube-scheduler-ha-856893-m03" [5ebea99b-ad4c-414f-a5a2-6501823bfc22] Running
	I0703 23:08:22.325954   27242 system_pods.go:61] "kube-vip-ha-856893" [0c4a20fd-99f2-4d6a-a332-2a79e4431b88] Running
	I0703 23:08:22.325958   27242 system_pods.go:61] "kube-vip-ha-856893-m02" [560fb438-21bf-45bb-9cf0-38dd21b61c80] Running
	I0703 23:08:22.325960   27242 system_pods.go:61] "kube-vip-ha-856893-m03" [a4a2c5c7-c2c9-4910-8716-9f22a9a50611] Running
	I0703 23:08:22.325963   27242 system_pods.go:61] "storage-provisioner" [91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8] Running
	I0703 23:08:22.325970   27242 system_pods.go:74] duration metric: took 187.186303ms to wait for pod list to return data ...
	I0703 23:08:22.325985   27242 default_sa.go:34] waiting for default service account to be created ...
	I0703 23:08:22.509121   27242 request.go:629] Waited for 183.060695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/default/serviceaccounts
	I0703 23:08:22.509193   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/default/serviceaccounts
	I0703 23:08:22.509200   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:22.509210   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:22.509218   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:22.512726   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:22.512854   27242 default_sa.go:45] found service account: "default"
	I0703 23:08:22.512879   27242 default_sa.go:55] duration metric: took 186.885116ms for default service account to be created ...
	I0703 23:08:22.512891   27242 system_pods.go:116] waiting for k8s-apps to be running ...
	I0703 23:08:22.709312   27242 request.go:629] Waited for 196.355099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:08:22.709392   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:08:22.709401   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:22.709415   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:22.709425   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:22.717218   27242 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0703 23:08:22.725427   27242 system_pods.go:86] 24 kube-system pods found
	I0703 23:08:22.725459   27242 system_pods.go:89] "coredns-7db6d8ff4d-n5tdf" [8efbbc3c-e2d5-4f13-8672-cf7524f72e2d] Running
	I0703 23:08:22.725465   27242 system_pods.go:89] "coredns-7db6d8ff4d-pwqfl" [b4d22edf-e718-4755-b211-c8279481005e] Running
	I0703 23:08:22.725470   27242 system_pods.go:89] "etcd-ha-856893" [6b7ae8d7-b953-4a2a-8745-d672d5ef800d] Running
	I0703 23:08:22.725474   27242 system_pods.go:89] "etcd-ha-856893-m02" [c84132ba-b9c8-491c-a50d-ebe97039cb4a] Running
	I0703 23:08:22.725478   27242 system_pods.go:89] "etcd-ha-856893-m03" [5fb85989-093c-4239-a17e-761ac8c2f88c] Running
	I0703 23:08:22.725481   27242 system_pods.go:89] "kindnet-h7ntk" [18e6d992-2713-4399-a160-5f9196981f26] Running
	I0703 23:08:22.725485   27242 system_pods.go:89] "kindnet-rwqsq" [438780e1-32f3-4b9f-a85e-b6a9eeac268f] Running
	I0703 23:08:22.725489   27242 system_pods.go:89] "kindnet-vtd2b" [08f88183-a2c6-48b4-a14e-1c70ed08407a] Running
	I0703 23:08:22.725494   27242 system_pods.go:89] "kube-apiserver-ha-856893" [1818612d-9082-49ba-863e-25d530ad2893] Running
	I0703 23:08:22.725498   27242 system_pods.go:89] "kube-apiserver-ha-856893-m02" [09914e18-0b79-4722-9b14-a4b70d5ad800] Running
	I0703 23:08:22.725502   27242 system_pods.go:89] "kube-apiserver-ha-856893-m03" [d5ffdc07-8246-4c1b-848b-d103b69c96af] Running
	I0703 23:08:22.725506   27242 system_pods.go:89] "kube-controller-manager-ha-856893" [4906346a-6b42-4c32-9ea0-8cd61b06580b] Running
	I0703 23:08:22.725510   27242 system_pods.go:89] "kube-controller-manager-ha-856893-m02" [e90cd411-c185-4652-8e21-6fd48fb2b5d1] Running
	I0703 23:08:22.725515   27242 system_pods.go:89] "kube-controller-manager-ha-856893-m03" [71730b30-6db1-4376-931d-adb83ec87278] Running
	I0703 23:08:22.725519   27242 system_pods.go:89] "kube-proxy-52zqj" [7cbc16d2-e9f6-487f-a974-0fa21e4163b5] Running
	I0703 23:08:22.725523   27242 system_pods.go:89] "kube-proxy-gkwrn" [5fefb775-6224-47be-ad73-443c957c8f69] Running
	I0703 23:08:22.725526   27242 system_pods.go:89] "kube-proxy-stq26" [55db1583-2020-4a52-ab80-2f92ab63463b] Running
	I0703 23:08:22.725530   27242 system_pods.go:89] "kube-scheduler-ha-856893" [a3f43eb4-e248-41a0-b86c-0564becadc2b] Running
	I0703 23:08:22.725535   27242 system_pods.go:89] "kube-scheduler-ha-856893-m02" [02f862e9-2b93-4f74-847f-32d753dd9456] Running
	I0703 23:08:22.725539   27242 system_pods.go:89] "kube-scheduler-ha-856893-m03" [5ebea99b-ad4c-414f-a5a2-6501823bfc22] Running
	I0703 23:08:22.725546   27242 system_pods.go:89] "kube-vip-ha-856893" [0c4a20fd-99f2-4d6a-a332-2a79e4431b88] Running
	I0703 23:08:22.725549   27242 system_pods.go:89] "kube-vip-ha-856893-m02" [560fb438-21bf-45bb-9cf0-38dd21b61c80] Running
	I0703 23:08:22.725552   27242 system_pods.go:89] "kube-vip-ha-856893-m03" [a4a2c5c7-c2c9-4910-8716-9f22a9a50611] Running
	I0703 23:08:22.725556   27242 system_pods.go:89] "storage-provisioner" [91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8] Running
	I0703 23:08:22.725561   27242 system_pods.go:126] duration metric: took 212.662262ms to wait for k8s-apps to be running ...
	I0703 23:08:22.725571   27242 system_svc.go:44] waiting for kubelet service to be running ....
	I0703 23:08:22.725617   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:08:22.742416   27242 system_svc.go:56] duration metric: took 16.833939ms WaitForService to wait for kubelet
	I0703 23:08:22.742456   27242 kubeadm.go:576] duration metric: took 19.122705878s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 23:08:22.742497   27242 node_conditions.go:102] verifying NodePressure condition ...
	I0703 23:08:22.909819   27242 request.go:629] Waited for 167.220159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes
	I0703 23:08:22.909873   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes
	I0703 23:08:22.909878   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:22.909886   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:22.909890   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:22.914023   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:22.915479   27242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 23:08:22.915513   27242 node_conditions.go:123] node cpu capacity is 2
	I0703 23:08:22.915537   27242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 23:08:22.915544   27242 node_conditions.go:123] node cpu capacity is 2
	I0703 23:08:22.915548   27242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 23:08:22.915554   27242 node_conditions.go:123] node cpu capacity is 2
	I0703 23:08:22.915559   27242 node_conditions.go:105] duration metric: took 173.056283ms to run NodePressure ...
	I0703 23:08:22.915576   27242 start.go:240] waiting for startup goroutines ...
	I0703 23:08:22.915610   27242 start.go:254] writing updated cluster config ...
	I0703 23:08:22.916020   27242 ssh_runner.go:195] Run: rm -f paused
	I0703 23:08:22.974944   27242 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0703 23:08:22.976700   27242 out.go:177] * Done! kubectl is now configured to use "ha-856893" cluster and "default" namespace by default
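	(Editor's note) The "==> CRI-O <==" section below captures crio's debug log of the kubelet's periodic CRI polling: repeated Version, ImageFsInfo, and ListContainers/ListPodSandbox RPCs against the runtime socket. As a minimal, hypothetical sketch of issuing the same Version RPC by hand (the socket path and client module are assumptions, not taken from this report):

	// sketch: query the CRI-O runtime version over its CRI socket,
	// mirroring the /runtime.v1.RuntimeService/Version calls in the log below.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Assumed default CRI-O socket path; adjust if the node is configured differently.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)

		// Same RPC the kubelet issues; the log reports RuntimeName:cri-o, RuntimeVersion:1.29.1.
		resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatalf("Version RPC: %v", err)
		}
		fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
	}

	The equivalent inspection from a shell on the node is typically done with crictl pointed at the same socket; the raw log entries that follow are what CRI-O records for each such request when run with debug logging enabled.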
	
	
	==> CRI-O <==
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.652111336Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8fce887b-6b8c-4647-919f-0d70853f8fad name=/runtime.v1.RuntimeService/Version
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.656323739Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5966ca2-3d00-4887-b23a-225218bc5c02 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.657429299Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720048315657402445,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5966ca2-3d00-4887-b23a-225218bc5c02 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.658107996Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebb4aa22-9394-4701-8906-41f2c28e67e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.658168613Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebb4aa22-9394-4701-8906-41f2c28e67e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.658405049Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5f2f09a864e4e5d46640be63d6a9d8f2d281cd3cce2631529f1fa6c5c19ead,PodSandboxId:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720048107151194001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernetes.container.hash: c94081f5,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54,PodSandboxId:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973272089478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691,PodSandboxId:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973246300187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e953066d642d95ea246b7978e66ebdb8247c588a239295bc0583d9be214d29,PodSandboxId:b1df838b768efce71daa9de41505a294ee70a6093019557a59aaa55a14c3fc0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1720047973201617835,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599,PodSandboxId:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720047942
480279746,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5bd1ae2892ab764deb7e34a3a3c3674097e40cba7bf1814bcd3086fc221f71,PodSandboxId:fcb5b2ab8ad58b672e638c6b9da63d751cbdfe3fd0e6bb37e3a23aea1d820f5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720047941392249439,Labels:map[string]stri
ng{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c81f0becbc3b6b33b1077d73eb4737ab22fbf870d53cf57b8cbfb88c2b7389e,PodSandboxId:ade6e7c92cc8277b503dd87c43275d73529d0ee66941939117baf2e511d016ec,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720047925155312936,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb5af725f761355c024282f684e2eaaf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:227a9a4176778afb3428cff8333cb0265f741d600f6fab7c86b069a27619893e,PodSandboxId:78f6147e8fcf30507346cd87408d724ee33acc5f36d9058c3c2df378780de214,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720047921657803736,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0,PodSandboxId:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720047921587666055,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,PodSandboxId:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720047921542484183,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c379ddaf9a499d07429805170dc12cb5a1dc67dcb956fe90f1fa68d12530112,PodSandboxId:3f446507b3eb8a27dcf6453ca5dc495e1676669ca6fc52a5fec1d424bab1cb68,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720047921541248993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ebb4aa22-9394-4701-8906-41f2c28e67e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.700660175Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=793c9e84-c68c-4936-a301-1992e0bb8bba name=/runtime.v1.RuntimeService/Version
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.700874196Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=793c9e84-c68c-4936-a301-1992e0bb8bba name=/runtime.v1.RuntimeService/Version
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.701980439Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9618d4d0-4370-4767-a812-7aab2bef03c6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.702425752Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720048315702397749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9618d4d0-4370-4767-a812-7aab2bef03c6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.703386242Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7122cc7-69f4-41dc-a556-155fcf4a4a68 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.703562131Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7122cc7-69f4-41dc-a556-155fcf4a4a68 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.704612526Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5f2f09a864e4e5d46640be63d6a9d8f2d281cd3cce2631529f1fa6c5c19ead,PodSandboxId:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720048107151194001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernetes.container.hash: c94081f5,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54,PodSandboxId:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973272089478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691,PodSandboxId:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973246300187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e953066d642d95ea246b7978e66ebdb8247c588a239295bc0583d9be214d29,PodSandboxId:b1df838b768efce71daa9de41505a294ee70a6093019557a59aaa55a14c3fc0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1720047973201617835,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599,PodSandboxId:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720047942
480279746,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5bd1ae2892ab764deb7e34a3a3c3674097e40cba7bf1814bcd3086fc221f71,PodSandboxId:fcb5b2ab8ad58b672e638c6b9da63d751cbdfe3fd0e6bb37e3a23aea1d820f5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720047941392249439,Labels:map[string]stri
ng{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c81f0becbc3b6b33b1077d73eb4737ab22fbf870d53cf57b8cbfb88c2b7389e,PodSandboxId:ade6e7c92cc8277b503dd87c43275d73529d0ee66941939117baf2e511d016ec,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720047925155312936,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb5af725f761355c024282f684e2eaaf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:227a9a4176778afb3428cff8333cb0265f741d600f6fab7c86b069a27619893e,PodSandboxId:78f6147e8fcf30507346cd87408d724ee33acc5f36d9058c3c2df378780de214,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720047921657803736,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0,PodSandboxId:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720047921587666055,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,PodSandboxId:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720047921542484183,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c379ddaf9a499d07429805170dc12cb5a1dc67dcb956fe90f1fa68d12530112,PodSandboxId:3f446507b3eb8a27dcf6453ca5dc495e1676669ca6fc52a5fec1d424bab1cb68,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720047921541248993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7122cc7-69f4-41dc-a556-155fcf4a4a68 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.735155868Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5742cffe-00e9-4777-be38-5adca8df77ba name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.735473455Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-hh5rx,Uid:1e907d89-dcf0-4e2d-bf2d-812d38932e86,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720048104233139890,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-03T23:08:23.913685312Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b1df838b768efce71daa9de41505a294ee70a6093019557a59aaa55a14c3fc0e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1720047972969197909,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-03T23:06:12.652722003Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-pwqfl,Uid:b4d22edf-e718-4755-b211-c8279481005e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720047972964393081,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d22edf-e718-4755-b211-c8279481005e,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-03T23:06:12.651143206Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-n5tdf,Uid:8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1720047972946417587,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-03T23:06:12.638085253Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&PodSandboxMetadata{Name:kube-proxy-52zqj,Uid:7cbc16d2-e9f6-487f-a974-0fa21e4163b5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720047942356269173,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-07-03T23:05:40.549322907Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fcb5b2ab8ad58b672e638c6b9da63d751cbdfe3fd0e6bb37e3a23aea1d820f5c,Metadata:&PodSandboxMetadata{Name:kindnet-h7ntk,Uid:18e6d992-2713-4399-a160-5f9196981f26,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720047940843924353,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-03T23:05:40.529595180Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ade6e7c92cc8277b503dd87c43275d73529d0ee66941939117baf2e511d016ec,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-856893,Uid:fb5af725f761355c024282f684e2eaaf,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1720047921366901653,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb5af725f761355c024282f684e2eaaf,},Annotations:map[string]string{kubernetes.io/config.hash: fb5af725f761355c024282f684e2eaaf,kubernetes.io/config.seen: 2024-07-03T23:05:20.828431683Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:78f6147e8fcf30507346cd87408d724ee33acc5f36d9058c3c2df378780de214,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-856893,Uid:f238ffd8748e557f239482399bf89dc9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720047921362349137,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,tier: control-plane,},Annotations:map[string]string{kube
rnetes.io/config.hash: f238ffd8748e557f239482399bf89dc9,kubernetes.io/config.seen: 2024-07-03T23:05:20.828345335Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-856893,Uid:58ac71ae3fd52dff19d913e1a274c990,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720047921341886802,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 58ac71ae3fd52dff19d913e1a274c990,kubernetes.io/config.seen: 2024-07-03T23:05:20.828346426Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&PodSandboxMetadata{Name:etcd-ha-856893,Uid:7891a98d
b30710828591ae5169d05ec2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720047921314993700,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.172:2379,kubernetes.io/config.hash: 7891a98db30710828591ae5169d05ec2,kubernetes.io/config.seen: 2024-07-03T23:05:20.828334783Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3f446507b3eb8a27dcf6453ca5dc495e1676669ca6fc52a5fec1d424bab1cb68,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-856893,Uid:18fee9f6b7b1f394539107bfaf70ec2c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720047921309051403,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.172:8443,kubernetes.io/config.hash: 18fee9f6b7b1f394539107bfaf70ec2c,kubernetes.io/config.seen: 2024-07-03T23:05:20.828343999Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5742cffe-00e9-4777-be38-5adca8df77ba name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.736228971Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b688cac0-2ac3-45b2-a6e6-40a91afe400b name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.736313335Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b688cac0-2ac3-45b2-a6e6-40a91afe400b name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.736568585Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5f2f09a864e4e5d46640be63d6a9d8f2d281cd3cce2631529f1fa6c5c19ead,PodSandboxId:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720048107151194001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernetes.container.hash: c94081f5,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54,PodSandboxId:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973272089478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691,PodSandboxId:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973246300187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e953066d642d95ea246b7978e66ebdb8247c588a239295bc0583d9be214d29,PodSandboxId:b1df838b768efce71daa9de41505a294ee70a6093019557a59aaa55a14c3fc0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1720047973201617835,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599,PodSandboxId:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720047942
480279746,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5bd1ae2892ab764deb7e34a3a3c3674097e40cba7bf1814bcd3086fc221f71,PodSandboxId:fcb5b2ab8ad58b672e638c6b9da63d751cbdfe3fd0e6bb37e3a23aea1d820f5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720047941392249439,Labels:map[string]stri
ng{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c81f0becbc3b6b33b1077d73eb4737ab22fbf870d53cf57b8cbfb88c2b7389e,PodSandboxId:ade6e7c92cc8277b503dd87c43275d73529d0ee66941939117baf2e511d016ec,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720047925155312936,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb5af725f761355c024282f684e2eaaf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:227a9a4176778afb3428cff8333cb0265f741d600f6fab7c86b069a27619893e,PodSandboxId:78f6147e8fcf30507346cd87408d724ee33acc5f36d9058c3c2df378780de214,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720047921657803736,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0,PodSandboxId:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720047921587666055,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,PodSandboxId:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720047921542484183,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c379ddaf9a499d07429805170dc12cb5a1dc67dcb956fe90f1fa68d12530112,PodSandboxId:3f446507b3eb8a27dcf6453ca5dc495e1676669ca6fc52a5fec1d424bab1cb68,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720047921541248993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b688cac0-2ac3-45b2-a6e6-40a91afe400b name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.753403374Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5637afc1-d4e6-458f-a059-51471a029ec2 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.753496356Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5637afc1-d4e6-458f-a059-51471a029ec2 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.755139807Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=101c025c-6b0a-4da4-9c61-7ea235226c65 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.755905005Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720048315755877258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=101c025c-6b0a-4da4-9c61-7ea235226c65 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.756526136Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4a848dd-9ef4-4e75-82ec-095c35ba2b2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.756602378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4a848dd-9ef4-4e75-82ec-095c35ba2b2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:11:55 ha-856893 crio[680]: time="2024-07-03 23:11:55.756932654Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5f2f09a864e4e5d46640be63d6a9d8f2d281cd3cce2631529f1fa6c5c19ead,PodSandboxId:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720048107151194001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernetes.container.hash: c94081f5,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54,PodSandboxId:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973272089478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691,PodSandboxId:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973246300187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e953066d642d95ea246b7978e66ebdb8247c588a239295bc0583d9be214d29,PodSandboxId:b1df838b768efce71daa9de41505a294ee70a6093019557a59aaa55a14c3fc0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1720047973201617835,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599,PodSandboxId:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720047942
480279746,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5bd1ae2892ab764deb7e34a3a3c3674097e40cba7bf1814bcd3086fc221f71,PodSandboxId:fcb5b2ab8ad58b672e638c6b9da63d751cbdfe3fd0e6bb37e3a23aea1d820f5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720047941392249439,Labels:map[string]stri
ng{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c81f0becbc3b6b33b1077d73eb4737ab22fbf870d53cf57b8cbfb88c2b7389e,PodSandboxId:ade6e7c92cc8277b503dd87c43275d73529d0ee66941939117baf2e511d016ec,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720047925155312936,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb5af725f761355c024282f684e2eaaf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:227a9a4176778afb3428cff8333cb0265f741d600f6fab7c86b069a27619893e,PodSandboxId:78f6147e8fcf30507346cd87408d724ee33acc5f36d9058c3c2df378780de214,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720047921657803736,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0,PodSandboxId:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720047921587666055,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,PodSandboxId:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720047921542484183,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c379ddaf9a499d07429805170dc12cb5a1dc67dcb956fe90f1fa68d12530112,PodSandboxId:3f446507b3eb8a27dcf6453ca5dc495e1676669ca6fc52a5fec1d424bab1cb68,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720047921541248993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4a848dd-9ef4-4e75-82ec-095c35ba2b2d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2d5f2f09a864e       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2add57c6feb6d       busybox-fc5497c4f-hh5rx
	4b327b3ea68a5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   52adb03e9908b       coredns-7db6d8ff4d-n5tdf
	ebac8426f222e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   75824b8079291       coredns-7db6d8ff4d-pwqfl
	e5e953066d642       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   b1df838b768ef       storage-provisioner
	aea86e5699e84       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      6 minutes ago       Running             kube-proxy                0                   17315e93de095       kube-proxy-52zqj
	7a5bd1ae2892a       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      6 minutes ago       Running             kindnet-cni               0                   fcb5b2ab8ad58       kindnet-h7ntk
	4c81f0becbc3b       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   ade6e7c92cc82       kube-vip-ha-856893
	227a9a4176778       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      6 minutes ago       Running             kube-controller-manager   0                   78f6147e8fcf3       kube-controller-manager-ha-856893
	8ed8443e8784d       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      6 minutes ago       Running             kube-scheduler            0                   a50d015125505       kube-scheduler-ha-856893
	194253df10dfc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   bbcc0c1ac6390       etcd-ha-856893
	4c379ddaf9a49       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      6 minutes ago       Running             kube-apiserver            0                   3f446507b3eb8       kube-apiserver-ha-856893
	
	
	==> coredns [4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54] <==
	[INFO] 10.244.0.4:50532 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072272s
	[INFO] 10.244.0.4:38183 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100508s
	[INFO] 10.244.0.4:40014 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049781s
	[INFO] 10.244.1.2:43357 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134408s
	[INFO] 10.244.1.2:33336 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002000185s
	[INFO] 10.244.1.2:43589 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174137s
	[INFO] 10.244.1.2:49376 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106729s
	[INFO] 10.244.1.2:51691 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000271033s
	[INFO] 10.244.2.2:40310 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117383s
	[INFO] 10.244.2.2:38408 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011442s
	[INFO] 10.244.2.2:53461 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080741s
	[INFO] 10.244.0.4:60751 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020875s
	[INFO] 10.244.0.4:42746 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083559s
	[INFO] 10.244.1.2:46618 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00026488s
	[INFO] 10.244.1.2:46816 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095128s
	[INFO] 10.244.2.2:35755 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141347s
	[INFO] 10.244.2.2:37226 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000441904s
	[INFO] 10.244.2.2:56990 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000123934s
	[INFO] 10.244.0.4:33260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228783s
	[INFO] 10.244.0.4:40825 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089557s
	[INFO] 10.244.0.4:36029 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000284159s
	[INFO] 10.244.0.4:38025 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069908s
	[INFO] 10.244.1.2:33505 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000516657s
	[INFO] 10.244.1.2:51760 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106766s
	[INFO] 10.244.1.2:48924 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111713s
	
	
	==> coredns [ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41451 - 39576 "HINFO IN 3941637866052819197.8807026029404487185. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013851694s
	[INFO] 10.244.2.2:52714 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.014862182s
	[INFO] 10.244.0.4:48924 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001898144s
	[INFO] 10.244.1.2:38357 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235864s
	[INFO] 10.244.1.2:52654 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000207162s
	[INFO] 10.244.2.2:38149 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003994489s
	[INFO] 10.244.2.2:37323 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162805s
	[INFO] 10.244.2.2:37370 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000170597s
	[INFO] 10.244.0.4:39154 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140397s
	[INFO] 10.244.0.4:39807 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002148429s
	[INFO] 10.244.0.4:52421 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189952s
	[INFO] 10.244.0.4:32927 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001716905s
	[INFO] 10.244.0.4:37077 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064503s
	[INFO] 10.244.1.2:53622 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138056s
	[INFO] 10.244.1.2:56863 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001413025s
	[INFO] 10.244.1.2:33669 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000289179s
	[INFO] 10.244.2.2:46390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141967s
	[INFO] 10.244.0.4:47937 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126136s
	[INFO] 10.244.0.4:40258 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058689s
	[INFO] 10.244.1.2:34579 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112137s
	[INFO] 10.244.1.2:43318 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087441s
	[INFO] 10.244.2.2:44839 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000154015s
	[INFO] 10.244.1.2:49628 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158345s
	
	
	==> describe nodes <==
	Name:               ha-856893
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-856893
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=ha-856893
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_03T23_05_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:05:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-856893
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:11:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:08:31 +0000   Wed, 03 Jul 2024 23:05:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:08:31 +0000   Wed, 03 Jul 2024 23:05:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:08:31 +0000   Wed, 03 Jul 2024 23:05:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:08:31 +0000   Wed, 03 Jul 2024 23:06:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    ha-856893
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a26831b612bd459ca285f71afd0636da
	  System UUID:                a26831b6-12bd-459c-a285-f71afd0636da
	  Boot ID:                    60d1e076-9358-4d45-bf73-662df78ab1a7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hh5rx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 coredns-7db6d8ff4d-n5tdf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m16s
	  kube-system                 coredns-7db6d8ff4d-pwqfl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m15s
	  kube-system                 etcd-ha-856893                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m29s
	  kube-system                 kindnet-h7ntk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m16s
	  kube-system                 kube-apiserver-ha-856893             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-ha-856893    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-proxy-52zqj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-scheduler-ha-856893             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-vip-ha-856893                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m13s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m36s (x7 over 6m36s)  kubelet          Node ha-856893 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m36s (x8 over 6m36s)  kubelet          Node ha-856893 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m36s (x8 over 6m36s)  kubelet          Node ha-856893 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m29s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m29s                  kubelet          Node ha-856893 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m29s                  kubelet          Node ha-856893 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m29s                  kubelet          Node ha-856893 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	  Normal  NodeReady                5m44s                  kubelet          Node ha-856893 status is now: NodeReady
	  Normal  RegisteredNode           4m54s                  node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	  Normal  RegisteredNode           3m38s                  node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	
	
	Name:               ha-856893-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-856893-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=ha-856893
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_03T23_06_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:06:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-856893-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:09:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 03 Jul 2024 23:08:46 +0000   Wed, 03 Jul 2024 23:10:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 03 Jul 2024 23:08:46 +0000   Wed, 03 Jul 2024 23:10:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 03 Jul 2024 23:08:46 +0000   Wed, 03 Jul 2024 23:10:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 03 Jul 2024 23:08:46 +0000   Wed, 03 Jul 2024 23:10:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.157
	  Hostname:    ha-856893-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 109978f2ea4c4f42a5d187826750c850
	  System UUID:                109978f2-ea4c-4f42-a5d1-87826750c850
	  Boot ID:                    994539c8-7107-4cbf-a682-2c196e1b4de5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-n7rvj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 etcd-ha-856893-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m10s
	  kube-system                 kindnet-rwqsq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m12s
	  kube-system                 kube-apiserver-ha-856893-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-controller-manager-ha-856893-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-proxy-gkwrn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-scheduler-ha-856893-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-vip-ha-856893-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m12s (x8 over 5m12s)  kubelet          Node ha-856893-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m12s (x8 over 5m12s)  kubelet          Node ha-856893-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m12s (x7 over 5m12s)  kubelet          Node ha-856893-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m11s                  node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  RegisteredNode           4m54s                  node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  RegisteredNode           3m38s                  node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  NodeNotReady             108s                   node-controller  Node ha-856893-m02 status is now: NodeNotReady
	
	
	Name:               ha-856893-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-856893-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=ha-856893
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_03T23_08_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:07:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-856893-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:11:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:08:30 +0000   Wed, 03 Jul 2024 23:07:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:08:30 +0000   Wed, 03 Jul 2024 23:07:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:08:30 +0000   Wed, 03 Jul 2024 23:07:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:08:30 +0000   Wed, 03 Jul 2024 23:08:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    ha-856893-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1e4eaaaf3da41a390e7e93c4c9b6dd0
	  System UUID:                a1e4eaaa-f3da-41a3-90e7-e93c4c9b6dd0
	  Boot ID:                    714f8b3c-0219-40be-b96e-5e103d064c96
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bt646                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 etcd-ha-856893-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m55s
	  kube-system                 kindnet-vtd2b                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m57s
	  kube-system                 kube-apiserver-ha-856893-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-controller-manager-ha-856893-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-proxy-stq26                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-ha-856893-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-vip-ha-856893-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m51s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  3m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-856893-m03 event: Registered Node ha-856893-m03 in Controller
	  Normal  NodeHasSufficientMemory  3m56s (x8 over 3m57s)  kubelet          Node ha-856893-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s (x8 over 3m57s)  kubelet          Node ha-856893-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s (x7 over 3m57s)  kubelet          Node ha-856893-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-856893-m03 event: Registered Node ha-856893-m03 in Controller
	  Normal  RegisteredNode           3m38s                  node-controller  Node ha-856893-m03 event: Registered Node ha-856893-m03 in Controller
	
	
	Name:               ha-856893-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-856893-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=ha-856893
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_03T23_09_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:09:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-856893-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:11:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:09:35 +0000   Wed, 03 Jul 2024 23:09:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:09:35 +0000   Wed, 03 Jul 2024 23:09:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:09:35 +0000   Wed, 03 Jul 2024 23:09:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:09:35 +0000   Wed, 03 Jul 2024 23:09:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    ha-856893-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3705f72ac66415f90e310971654b6b5
	  System UUID:                f3705f72-ac66-415f-90e3-10971654b6b5
	  Boot ID:                    b99153db-d083-4d53-8f7d-792d32c1186e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5kksq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m52s
	  kube-system                 kube-proxy-brfsv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m52s (x2 over 2m52s)  kubelet          Node ha-856893-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s (x2 over 2m52s)  kubelet          Node ha-856893-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s (x2 over 2m52s)  kubelet          Node ha-856893-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal  RegisteredNode           2m48s                  node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal  NodeReady                2m42s                  kubelet          Node ha-856893-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul 3 23:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050985] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040387] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.593398] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.343269] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Jul 3 23:05] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.908066] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.058276] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065122] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.220079] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.126395] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.300940] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.506884] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.061467] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.368826] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +4.919640] kauditd_printk_skb: 102 callbacks suppressed
	[  +2.254448] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +6.249182] kauditd_printk_skb: 23 callbacks suppressed
	[Jul 3 23:06] kauditd_printk_skb: 38 callbacks suppressed
	[  +9.915119] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb] <==
	{"level":"warn","ts":"2024-07-03T23:11:56.082402Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.093797Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.104182Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.105322Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.112304Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.116809Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.129002Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.137223Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.145613Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.151048Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.155148Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.169466Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.170717Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.182017Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.18281Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.184123Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.187815Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.190464Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.194684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.200379Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.203142Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.211515Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.220575Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.228893Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:11:56.304016Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:11:56 up 7 min,  0 users,  load average: 0.05, 0.18, 0.09
	Linux ha-856893 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7a5bd1ae2892ab764deb7e34a3a3c3674097e40cba7bf1814bcd3086fc221f71] <==
	I0703 23:11:22.544858       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	I0703 23:11:32.556309       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0703 23:11:32.556392       1 main.go:227] handling current node
	I0703 23:11:32.556416       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0703 23:11:32.556433       1 main.go:250] Node ha-856893-m02 has CIDR [10.244.1.0/24] 
	I0703 23:11:32.556566       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0703 23:11:32.556590       1 main.go:250] Node ha-856893-m03 has CIDR [10.244.2.0/24] 
	I0703 23:11:32.556656       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I0703 23:11:32.556674       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	I0703 23:11:42.565479       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0703 23:11:42.565528       1 main.go:227] handling current node
	I0703 23:11:42.565539       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0703 23:11:42.565544       1 main.go:250] Node ha-856893-m02 has CIDR [10.244.1.0/24] 
	I0703 23:11:42.565649       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0703 23:11:42.565675       1 main.go:250] Node ha-856893-m03 has CIDR [10.244.2.0/24] 
	I0703 23:11:42.565718       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I0703 23:11:42.565783       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	I0703 23:11:52.579240       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0703 23:11:52.579293       1 main.go:227] handling current node
	I0703 23:11:52.579307       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0703 23:11:52.579313       1 main.go:250] Node ha-856893-m02 has CIDR [10.244.1.0/24] 
	I0703 23:11:52.579448       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0703 23:11:52.579453       1 main.go:250] Node ha-856893-m03 has CIDR [10.244.2.0/24] 
	I0703 23:11:52.579500       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I0703 23:11:52.579559       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4c379ddaf9a499d07429805170dc12cb5a1dc67dcb956fe90f1fa68d12530112] <==
	I0703 23:05:27.803513       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0703 23:05:27.827801       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0703 23:05:27.842963       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0703 23:05:40.487913       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0703 23:05:40.891672       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0703 23:06:47.177553       1 trace.go:236] Trace[1646404756]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d9eabe84-be40-4221-b01e-53771880f05a,client:192.168.39.157,api-group:,api-version:v1,name:kube-apiserver-ha-856893-m02,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02/status,user-agent:kubelet/v1.30.2 (linux/amd64) kubernetes/3968350,verb:PATCH (03-Jul-2024 23:06:46.675) (total time: 501ms):
	Trace[1646404756]: ["GuaranteedUpdate etcd3" audit-id:d9eabe84-be40-4221-b01e-53771880f05a,key:/pods/kube-system/kube-apiserver-ha-856893-m02,type:*core.Pod,resource:pods 501ms (23:06:46.675)
	Trace[1646404756]:  ---"Txn call completed" 498ms (23:06:47.176)]
	Trace[1646404756]: ---"Object stored in database" 499ms (23:06:47.176)
	Trace[1646404756]: [501.97274ms] [501.97274ms] END
	E0703 23:08:29.714542       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55146: use of closed network connection
	E0703 23:08:29.907245       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55158: use of closed network connection
	E0703 23:08:30.109154       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55168: use of closed network connection
	E0703 23:08:30.308595       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55182: use of closed network connection
	E0703 23:08:30.506637       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55202: use of closed network connection
	E0703 23:08:30.710449       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55214: use of closed network connection
	E0703 23:08:30.897088       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55238: use of closed network connection
	E0703 23:08:31.115623       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55252: use of closed network connection
	E0703 23:08:31.340432       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55272: use of closed network connection
	E0703 23:08:31.646395       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55278: use of closed network connection
	E0703 23:08:31.818268       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43524: use of closed network connection
	E0703 23:08:32.008938       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43550: use of closed network connection
	E0703 23:08:32.189914       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43564: use of closed network connection
	E0703 23:08:32.384321       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43578: use of closed network connection
	E0703 23:08:32.569307       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43590: use of closed network connection
	
	
	==> kube-controller-manager [227a9a4176778afb3428cff8333cb0265f741d600f6fab7c86b069a27619893e] <==
	I0703 23:07:59.982610       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-856893-m03" podCIDRs=["10.244.2.0/24"]
	I0703 23:08:00.071837       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-856893-m03"
	I0703 23:08:23.943216       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.815373ms"
	I0703 23:08:23.984267       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.217424ms"
	I0703 23:08:24.186908       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="202.578607ms"
	I0703 23:08:24.331407       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="144.441291ms"
	I0703 23:08:24.387233       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.761398ms"
	I0703 23:08:24.387349       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.305µs"
	I0703 23:08:24.611572       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.619µs"
	I0703 23:08:27.553339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.523436ms"
	I0703 23:08:27.553458       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.361µs"
	I0703 23:08:28.204821       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.616391ms"
	I0703 23:08:28.204953       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.102µs"
	I0703 23:08:28.262137       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.537027ms"
	I0703 23:08:28.262534       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="188.007µs"
	I0703 23:08:29.243362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.267489ms"
	I0703 23:08:29.245302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.378µs"
	E0703 23:09:04.446073       1 certificate_controller.go:146] Sync csr-nzk25 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-nzk25": the object has been modified; please apply your changes to the latest version and try again
	I0703 23:09:04.725986       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-856893-m04\" does not exist"
	I0703 23:09:04.781392       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-856893-m04" podCIDRs=["10.244.3.0/24"]
	I0703 23:09:05.083864       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-856893-m04"
	I0703 23:09:14.677798       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-856893-m04"
	I0703 23:10:08.604690       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-856893-m04"
	I0703 23:10:08.783473       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.605177ms"
	I0703 23:10:08.783650       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.542µs"
	
	
	==> kube-proxy [aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599] <==
	I0703 23:05:42.648241       1 server_linux.go:69] "Using iptables proxy"
	I0703 23:05:42.660274       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.172"]
	I0703 23:05:42.701292       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0703 23:05:42.701358       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0703 23:05:42.701376       1 server_linux.go:165] "Using iptables Proxier"
	I0703 23:05:42.704275       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0703 23:05:42.704524       1 server.go:872] "Version info" version="v1.30.2"
	I0703 23:05:42.704553       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:05:42.708143       1 config.go:192] "Starting service config controller"
	I0703 23:05:42.708177       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0703 23:05:42.708224       1 config.go:101] "Starting endpoint slice config controller"
	I0703 23:05:42.708246       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0703 23:05:42.708724       1 config.go:319] "Starting node config controller"
	I0703 23:05:42.708810       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0703 23:05:42.808474       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0703 23:05:42.809810       1 shared_informer.go:320] Caches are synced for node config
	I0703 23:05:42.809889       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0] <==
	W0703 23:05:24.434535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0703 23:05:24.434550       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0703 23:05:25.261863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0703 23:05:25.261999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0703 23:05:25.269112       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0703 23:05:25.269265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0703 23:05:25.278628       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0703 23:05:25.279108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0703 23:05:25.396201       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0703 23:05:25.396448       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0703 23:05:25.396683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0703 23:05:25.396721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0703 23:05:25.414377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0703 23:05:25.414670       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0703 23:05:25.429406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0703 23:05:25.429583       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0703 23:05:25.523495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0703 23:05:25.523643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0703 23:05:25.721665       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0703 23:05:25.721726       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0703 23:05:27.616231       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0703 23:08:23.941598       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-bt646\": pod busybox-fc5497c4f-bt646 is already assigned to node \"ha-856893-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-bt646" node="ha-856893-m03"
	E0703 23:08:23.941843       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4ffbc91d-86d2-4096-8592-d570ee95c514(default/busybox-fc5497c4f-bt646) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-bt646"
	E0703 23:08:23.941901       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-bt646\": pod busybox-fc5497c4f-bt646 is already assigned to node \"ha-856893-m03\"" pod="default/busybox-fc5497c4f-bt646"
	I0703 23:08:23.941955       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-bt646" node="ha-856893-m03"
	
	
	==> kubelet <==
	Jul 03 23:07:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:07:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:07:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 23:08:23 ha-856893 kubelet[1363]: I0703 23:08:23.915134    1363 topology_manager.go:215] "Topology Admit Handler" podUID="1e907d89-dcf0-4e2d-bf2d-812d38932e86" podNamespace="default" podName="busybox-fc5497c4f-hh5rx"
	Jul 03 23:08:23 ha-856893 kubelet[1363]: I0703 23:08:23.944135    1363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b7w4\" (UniqueName: \"kubernetes.io/projected/1e907d89-dcf0-4e2d-bf2d-812d38932e86-kube-api-access-5b7w4\") pod \"busybox-fc5497c4f-hh5rx\" (UID: \"1e907d89-dcf0-4e2d-bf2d-812d38932e86\") " pod="default/busybox-fc5497c4f-hh5rx"
	Jul 03 23:08:27 ha-856893 kubelet[1363]: E0703 23:08:27.752219    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:08:27 ha-856893 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:08:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:08:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:08:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 23:09:27 ha-856893 kubelet[1363]: E0703 23:09:27.751305    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:09:27 ha-856893 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:09:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:09:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:09:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 23:10:27 ha-856893 kubelet[1363]: E0703 23:10:27.755235    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:10:27 ha-856893 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:10:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:10:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:10:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 23:11:27 ha-856893 kubelet[1363]: E0703 23:11:27.756589    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:11:27 ha-856893 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:11:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:11:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:11:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-856893 -n ha-856893
helpers_test.go:261: (dbg) Run:  kubectl --context ha-856893 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.407068124s)
ha_test.go:413: expected profile "ha-856893" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-856893\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-856893\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-856893\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.172\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.157\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.186\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.195\",\"Port\":0,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":fals
e,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"Mount
IP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-856893 -n ha-856893
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-856893 logs -n 25: (1.5125122s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-856893 cp ha-856893-m03:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile600531838/001/cp-test_ha-856893-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m03:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893:/home/docker/cp-test_ha-856893-m03_ha-856893.txt                      |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893 sudo cat                                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m03_ha-856893.txt                                |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m03:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m02:/home/docker/cp-test_ha-856893-m03_ha-856893-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893-m02 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m03_ha-856893-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m03:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04:/home/docker/cp-test_ha-856893-m03_ha-856893-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893-m04 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m03_ha-856893-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-856893 cp testdata/cp-test.txt                                               | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile600531838/001/cp-test_ha-856893-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893:/home/docker/cp-test_ha-856893-m04_ha-856893.txt                      |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893 sudo cat                                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m04_ha-856893.txt                                |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m02:/home/docker/cp-test_ha-856893-m04_ha-856893-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893-m02 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m04_ha-856893-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03:/home/docker/cp-test_ha-856893-m04_ha-856893-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893-m03 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m04_ha-856893-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-856893 node stop m02 -v=7                                                    | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 23:04:49
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 23:04:49.303938   27242 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:04:49.304205   27242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:04:49.304217   27242 out.go:304] Setting ErrFile to fd 2...
	I0703 23:04:49.304221   27242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:04:49.304418   27242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 23:04:49.304993   27242 out.go:298] Setting JSON to false
	I0703 23:04:49.305930   27242 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2829,"bootTime":1720045060,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:04:49.305987   27242 start.go:139] virtualization: kvm guest
	I0703 23:04:49.308231   27242 out.go:177] * [ha-856893] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 23:04:49.309607   27242 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 23:04:49.309635   27242 notify.go:220] Checking for updates...
	I0703 23:04:49.312119   27242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:04:49.313313   27242 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:04:49.314518   27242 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:04:49.315705   27242 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 23:04:49.316858   27242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 23:04:49.318260   27242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:04:49.353555   27242 out.go:177] * Using the kvm2 driver based on user configuration
	I0703 23:04:49.354873   27242 start.go:297] selected driver: kvm2
	I0703 23:04:49.354888   27242 start.go:901] validating driver "kvm2" against <nil>
	I0703 23:04:49.354902   27242 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 23:04:49.355866   27242 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:04:49.355965   27242 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 23:04:49.371321   27242 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 23:04:49.371369   27242 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0703 23:04:49.371558   27242 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 23:04:49.371586   27242 cni.go:84] Creating CNI manager for ""
	I0703 23:04:49.371590   27242 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0703 23:04:49.371596   27242 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0703 23:04:49.371647   27242 start.go:340] cluster config:
	{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0703 23:04:49.371752   27242 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:04:49.373469   27242 out.go:177] * Starting "ha-856893" primary control-plane node in "ha-856893" cluster
	I0703 23:04:49.374783   27242 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:04:49.374822   27242 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0703 23:04:49.374831   27242 cache.go:56] Caching tarball of preloaded images
	I0703 23:04:49.374914   27242 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:04:49.374925   27242 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 23:04:49.375209   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:04:49.375227   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json: {Name:mkf45f45e81b9e1937bda66f4e2b577ad75b58d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:04:49.375355   27242 start.go:360] acquireMachinesLock for ha-856893: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 23:04:49.375381   27242 start.go:364] duration metric: took 13.613µs to acquireMachinesLock for "ha-856893"
	I0703 23:04:49.375397   27242 start.go:93] Provisioning new machine with config: &{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:04:49.375447   27242 start.go:125] createHost starting for "" (driver="kvm2")
	I0703 23:04:49.377146   27242 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0703 23:04:49.377284   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:04:49.377347   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:04:49.391658   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44801
	I0703 23:04:49.392204   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:04:49.392806   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:04:49.392829   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:04:49.393132   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:04:49.393315   27242 main.go:141] libmachine: (ha-856893) Calling .GetMachineName
	I0703 23:04:49.393456   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:04:49.393665   27242 start.go:159] libmachine.API.Create for "ha-856893" (driver="kvm2")
	I0703 23:04:49.393703   27242 client.go:168] LocalClient.Create starting
	I0703 23:04:49.393738   27242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem
	I0703 23:04:49.393776   27242 main.go:141] libmachine: Decoding PEM data...
	I0703 23:04:49.393790   27242 main.go:141] libmachine: Parsing certificate...
	I0703 23:04:49.393832   27242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem
	I0703 23:04:49.393849   27242 main.go:141] libmachine: Decoding PEM data...
	I0703 23:04:49.393861   27242 main.go:141] libmachine: Parsing certificate...
	I0703 23:04:49.393879   27242 main.go:141] libmachine: Running pre-create checks...
	I0703 23:04:49.393887   27242 main.go:141] libmachine: (ha-856893) Calling .PreCreateCheck
	I0703 23:04:49.394261   27242 main.go:141] libmachine: (ha-856893) Calling .GetConfigRaw
	I0703 23:04:49.394643   27242 main.go:141] libmachine: Creating machine...
	I0703 23:04:49.394655   27242 main.go:141] libmachine: (ha-856893) Calling .Create
	I0703 23:04:49.394757   27242 main.go:141] libmachine: (ha-856893) Creating KVM machine...
	I0703 23:04:49.395897   27242 main.go:141] libmachine: (ha-856893) DBG | found existing default KVM network
	I0703 23:04:49.396588   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:49.396439   27265 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0703 23:04:49.396611   27242 main.go:141] libmachine: (ha-856893) DBG | created network xml: 
	I0703 23:04:49.396624   27242 main.go:141] libmachine: (ha-856893) DBG | <network>
	I0703 23:04:49.396638   27242 main.go:141] libmachine: (ha-856893) DBG |   <name>mk-ha-856893</name>
	I0703 23:04:49.396648   27242 main.go:141] libmachine: (ha-856893) DBG |   <dns enable='no'/>
	I0703 23:04:49.396658   27242 main.go:141] libmachine: (ha-856893) DBG |   
	I0703 23:04:49.396672   27242 main.go:141] libmachine: (ha-856893) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0703 23:04:49.396682   27242 main.go:141] libmachine: (ha-856893) DBG |     <dhcp>
	I0703 23:04:49.396695   27242 main.go:141] libmachine: (ha-856893) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0703 23:04:49.396705   27242 main.go:141] libmachine: (ha-856893) DBG |     </dhcp>
	I0703 23:04:49.396713   27242 main.go:141] libmachine: (ha-856893) DBG |   </ip>
	I0703 23:04:49.396722   27242 main.go:141] libmachine: (ha-856893) DBG |   
	I0703 23:04:49.396747   27242 main.go:141] libmachine: (ha-856893) DBG | </network>
	I0703 23:04:49.396767   27242 main.go:141] libmachine: (ha-856893) DBG | 
	I0703 23:04:49.401937   27242 main.go:141] libmachine: (ha-856893) DBG | trying to create private KVM network mk-ha-856893 192.168.39.0/24...
	I0703 23:04:49.466045   27242 main.go:141] libmachine: (ha-856893) DBG | private KVM network mk-ha-856893 192.168.39.0/24 created
	I0703 23:04:49.466078   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:49.465979   27265 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:04:49.466090   27242 main.go:141] libmachine: (ha-856893) Setting up store path in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893 ...
	I0703 23:04:49.466112   27242 main.go:141] libmachine: (ha-856893) Building disk image from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0703 23:04:49.466139   27242 main.go:141] libmachine: (ha-856893) Downloading /home/jenkins/minikube-integration/18998-9396/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso...
	I0703 23:04:49.697240   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:49.697136   27265 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa...
	I0703 23:04:49.882712   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:49.882599   27265 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/ha-856893.rawdisk...
	I0703 23:04:49.882738   27242 main.go:141] libmachine: (ha-856893) DBG | Writing magic tar header
	I0703 23:04:49.882748   27242 main.go:141] libmachine: (ha-856893) DBG | Writing SSH key tar header
	I0703 23:04:49.882772   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:49.882735   27265 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893 ...
	I0703 23:04:49.882887   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893
	I0703 23:04:49.882920   27242 main.go:141] libmachine: (ha-856893) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893 (perms=drwx------)
	I0703 23:04:49.882933   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines
	I0703 23:04:49.882948   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:04:49.882958   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396
	I0703 23:04:49.882966   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0703 23:04:49.882975   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home/jenkins
	I0703 23:04:49.882984   27242 main.go:141] libmachine: (ha-856893) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines (perms=drwxr-xr-x)
	I0703 23:04:49.882994   27242 main.go:141] libmachine: (ha-856893) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube (perms=drwxr-xr-x)
	I0703 23:04:49.882999   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home
	I0703 23:04:49.883009   27242 main.go:141] libmachine: (ha-856893) DBG | Skipping /home - not owner
	I0703 23:04:49.883025   27242 main.go:141] libmachine: (ha-856893) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396 (perms=drwxrwxr-x)
	I0703 23:04:49.883039   27242 main.go:141] libmachine: (ha-856893) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0703 23:04:49.883051   27242 main.go:141] libmachine: (ha-856893) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0703 23:04:49.883062   27242 main.go:141] libmachine: (ha-856893) Creating domain...
	I0703 23:04:49.884190   27242 main.go:141] libmachine: (ha-856893) define libvirt domain using xml: 
	I0703 23:04:49.884219   27242 main.go:141] libmachine: (ha-856893) <domain type='kvm'>
	I0703 23:04:49.884229   27242 main.go:141] libmachine: (ha-856893)   <name>ha-856893</name>
	I0703 23:04:49.884242   27242 main.go:141] libmachine: (ha-856893)   <memory unit='MiB'>2200</memory>
	I0703 23:04:49.884251   27242 main.go:141] libmachine: (ha-856893)   <vcpu>2</vcpu>
	I0703 23:04:49.884257   27242 main.go:141] libmachine: (ha-856893)   <features>
	I0703 23:04:49.884266   27242 main.go:141] libmachine: (ha-856893)     <acpi/>
	I0703 23:04:49.884273   27242 main.go:141] libmachine: (ha-856893)     <apic/>
	I0703 23:04:49.884284   27242 main.go:141] libmachine: (ha-856893)     <pae/>
	I0703 23:04:49.884302   27242 main.go:141] libmachine: (ha-856893)     
	I0703 23:04:49.884313   27242 main.go:141] libmachine: (ha-856893)   </features>
	I0703 23:04:49.884325   27242 main.go:141] libmachine: (ha-856893)   <cpu mode='host-passthrough'>
	I0703 23:04:49.884337   27242 main.go:141] libmachine: (ha-856893)   
	I0703 23:04:49.884343   27242 main.go:141] libmachine: (ha-856893)   </cpu>
	I0703 23:04:49.884354   27242 main.go:141] libmachine: (ha-856893)   <os>
	I0703 23:04:49.884364   27242 main.go:141] libmachine: (ha-856893)     <type>hvm</type>
	I0703 23:04:49.884374   27242 main.go:141] libmachine: (ha-856893)     <boot dev='cdrom'/>
	I0703 23:04:49.884383   27242 main.go:141] libmachine: (ha-856893)     <boot dev='hd'/>
	I0703 23:04:49.884394   27242 main.go:141] libmachine: (ha-856893)     <bootmenu enable='no'/>
	I0703 23:04:49.884406   27242 main.go:141] libmachine: (ha-856893)   </os>
	I0703 23:04:49.884433   27242 main.go:141] libmachine: (ha-856893)   <devices>
	I0703 23:04:49.884459   27242 main.go:141] libmachine: (ha-856893)     <disk type='file' device='cdrom'>
	I0703 23:04:49.884478   27242 main.go:141] libmachine: (ha-856893)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/boot2docker.iso'/>
	I0703 23:04:49.884490   27242 main.go:141] libmachine: (ha-856893)       <target dev='hdc' bus='scsi'/>
	I0703 23:04:49.884520   27242 main.go:141] libmachine: (ha-856893)       <readonly/>
	I0703 23:04:49.884539   27242 main.go:141] libmachine: (ha-856893)     </disk>
	I0703 23:04:49.884550   27242 main.go:141] libmachine: (ha-856893)     <disk type='file' device='disk'>
	I0703 23:04:49.884564   27242 main.go:141] libmachine: (ha-856893)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0703 23:04:49.884581   27242 main.go:141] libmachine: (ha-856893)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/ha-856893.rawdisk'/>
	I0703 23:04:49.884592   27242 main.go:141] libmachine: (ha-856893)       <target dev='hda' bus='virtio'/>
	I0703 23:04:49.884605   27242 main.go:141] libmachine: (ha-856893)     </disk>
	I0703 23:04:49.884623   27242 main.go:141] libmachine: (ha-856893)     <interface type='network'>
	I0703 23:04:49.884635   27242 main.go:141] libmachine: (ha-856893)       <source network='mk-ha-856893'/>
	I0703 23:04:49.884644   27242 main.go:141] libmachine: (ha-856893)       <model type='virtio'/>
	I0703 23:04:49.884657   27242 main.go:141] libmachine: (ha-856893)     </interface>
	I0703 23:04:49.884668   27242 main.go:141] libmachine: (ha-856893)     <interface type='network'>
	I0703 23:04:49.884679   27242 main.go:141] libmachine: (ha-856893)       <source network='default'/>
	I0703 23:04:49.884694   27242 main.go:141] libmachine: (ha-856893)       <model type='virtio'/>
	I0703 23:04:49.884705   27242 main.go:141] libmachine: (ha-856893)     </interface>
	I0703 23:04:49.884715   27242 main.go:141] libmachine: (ha-856893)     <serial type='pty'>
	I0703 23:04:49.884736   27242 main.go:141] libmachine: (ha-856893)       <target port='0'/>
	I0703 23:04:49.884745   27242 main.go:141] libmachine: (ha-856893)     </serial>
	I0703 23:04:49.884761   27242 main.go:141] libmachine: (ha-856893)     <console type='pty'>
	I0703 23:04:49.884777   27242 main.go:141] libmachine: (ha-856893)       <target type='serial' port='0'/>
	I0703 23:04:49.884789   27242 main.go:141] libmachine: (ha-856893)     </console>
	I0703 23:04:49.884799   27242 main.go:141] libmachine: (ha-856893)     <rng model='virtio'>
	I0703 23:04:49.884810   27242 main.go:141] libmachine: (ha-856893)       <backend model='random'>/dev/random</backend>
	I0703 23:04:49.884819   27242 main.go:141] libmachine: (ha-856893)     </rng>
	I0703 23:04:49.884831   27242 main.go:141] libmachine: (ha-856893)     
	I0703 23:04:49.884838   27242 main.go:141] libmachine: (ha-856893)     
	I0703 23:04:49.884855   27242 main.go:141] libmachine: (ha-856893)   </devices>
	I0703 23:04:49.884874   27242 main.go:141] libmachine: (ha-856893) </domain>
	I0703 23:04:49.884887   27242 main.go:141] libmachine: (ha-856893) 
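(Editor's note: the driver builds the guest by handing the XML document logged above to libvirt and then starting the domain. A minimal sketch of that step, assuming the libvirt.org/go/libvirt bindings and a placeholder domainXML string; this is not minikube's own kvm2 driver code.)

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Connect to the local system hypervisor, as the kvm2 driver does via qemu:///system.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// domainXML would hold a <domain type='kvm'> document like the one logged above.
	domainXML := "<domain type='kvm'>...</domain>" // placeholder, not a valid definition

	// Define the persistent domain from XML, then start it ("Creating domain...").
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}
}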
	I0703 23:04:49.889408   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:7f:ab:67 in network default
	I0703 23:04:49.890000   27242 main.go:141] libmachine: (ha-856893) Ensuring networks are active...
	I0703 23:04:49.890020   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:49.890827   27242 main.go:141] libmachine: (ha-856893) Ensuring network default is active
	I0703 23:04:49.891173   27242 main.go:141] libmachine: (ha-856893) Ensuring network mk-ha-856893 is active
	I0703 23:04:49.891707   27242 main.go:141] libmachine: (ha-856893) Getting domain xml...
	I0703 23:04:49.892417   27242 main.go:141] libmachine: (ha-856893) Creating domain...
	I0703 23:04:51.076607   27242 main.go:141] libmachine: (ha-856893) Waiting to get IP...
	I0703 23:04:51.077509   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:51.077950   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:51.078001   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:51.077954   27265 retry.go:31] will retry after 279.728515ms: waiting for machine to come up
	I0703 23:04:51.359420   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:51.359916   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:51.359951   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:51.359884   27265 retry.go:31] will retry after 247.648785ms: waiting for machine to come up
	I0703 23:04:51.609238   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:51.609581   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:51.609605   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:51.609536   27265 retry.go:31] will retry after 462.632413ms: waiting for machine to come up
	I0703 23:04:52.074013   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:52.074458   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:52.074495   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:52.074436   27265 retry.go:31] will retry after 535.361005ms: waiting for machine to come up
	I0703 23:04:52.611006   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:52.611471   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:52.611499   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:52.611417   27265 retry.go:31] will retry after 566.856393ms: waiting for machine to come up
	I0703 23:04:53.180116   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:53.180549   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:53.180572   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:53.180514   27265 retry.go:31] will retry after 893.437933ms: waiting for machine to come up
	I0703 23:04:54.075051   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:54.075493   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:54.075541   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:54.075436   27265 retry.go:31] will retry after 1.153111216s: waiting for machine to come up
	I0703 23:04:55.229683   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:55.230080   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:55.230099   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:55.230058   27265 retry.go:31] will retry after 1.209590198s: waiting for machine to come up
	I0703 23:04:56.441430   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:56.441787   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:56.441815   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:56.441765   27265 retry.go:31] will retry after 1.140725525s: waiting for machine to come up
	I0703 23:04:57.583965   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:57.584360   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:57.584387   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:57.584309   27265 retry.go:31] will retry after 2.005681822s: waiting for machine to come up
	I0703 23:04:59.591365   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:59.591779   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:59.591807   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:59.591747   27265 retry.go:31] will retry after 2.709221348s: waiting for machine to come up
	I0703 23:05:02.304438   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:02.304759   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:05:02.304799   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:05:02.304723   27265 retry.go:31] will retry after 3.359635089s: waiting for machine to come up
	I0703 23:05:05.666017   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:05.666403   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:05:05.666432   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:05:05.666364   27265 retry.go:31] will retry after 3.83770662s: waiting for machine to come up
	I0703 23:05:09.505078   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.505551   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has current primary IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.505566   27242 main.go:141] libmachine: (ha-856893) Found IP for machine: 192.168.39.172
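(Editor's note: the repeated "will retry after ...: waiting for machine to come up" lines above are a polling loop with a growing delay until DHCP hands the guest an address. A generic sketch of that pattern; the function names are illustrative, not minikube's retry.go API.)

package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

// waitForIP polls lookup until it returns an address or the timeout expires,
// sleeping a little longer between attempts each time, much like the
// "will retry after ..." messages in the log above. Both names are hypothetical.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", errors.New("timed out waiting for machine to come up")
		}
		log.Printf("will retry after %v: waiting for machine to come up", delay)
		time.Sleep(delay)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
}

func main() {
	// Stub lookup that "finds" the address after a second, for demonstration only.
	start := time.Now()
	ip, err := waitForIP(func() (string, error) {
		if time.Since(start) < time.Second {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.172", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}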
	I0703 23:05:09.505579   27242 main.go:141] libmachine: (ha-856893) Reserving static IP address...
	I0703 23:05:09.505883   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find host DHCP lease matching {name: "ha-856893", mac: "52:54:00:f8:43:23", ip: "192.168.39.172"} in network mk-ha-856893
	I0703 23:05:09.585944   27242 main.go:141] libmachine: (ha-856893) DBG | Getting to WaitForSSH function...
	I0703 23:05:09.585974   27242 main.go:141] libmachine: (ha-856893) Reserved static IP address: 192.168.39.172
	I0703 23:05:09.585992   27242 main.go:141] libmachine: (ha-856893) Waiting for SSH to be available...
	I0703 23:05:09.588555   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.589004   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:09.589032   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.589229   27242 main.go:141] libmachine: (ha-856893) DBG | Using SSH client type: external
	I0703 23:05:09.589251   27242 main.go:141] libmachine: (ha-856893) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa (-rw-------)
	I0703 23:05:09.589277   27242 main.go:141] libmachine: (ha-856893) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 23:05:09.589292   27242 main.go:141] libmachine: (ha-856893) DBG | About to run SSH command:
	I0703 23:05:09.589321   27242 main.go:141] libmachine: (ha-856893) DBG | exit 0
	I0703 23:05:09.716024   27242 main.go:141] libmachine: (ha-856893) DBG | SSH cmd err, output: <nil>: 
	I0703 23:05:09.716309   27242 main.go:141] libmachine: (ha-856893) KVM machine creation complete!
	I0703 23:05:09.716633   27242 main.go:141] libmachine: (ha-856893) Calling .GetConfigRaw
	I0703 23:05:09.717150   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:09.717368   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:09.717544   27242 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0703 23:05:09.717558   27242 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:05:09.718761   27242 main.go:141] libmachine: Detecting operating system of created instance...
	I0703 23:05:09.718778   27242 main.go:141] libmachine: Waiting for SSH to be available...
	I0703 23:05:09.718786   27242 main.go:141] libmachine: Getting to WaitForSSH function...
	I0703 23:05:09.718793   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:09.720891   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.721227   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:09.721246   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.721398   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:09.721581   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:09.721736   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:09.721884   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:09.722050   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:05:09.722255   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:05:09.722270   27242 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0703 23:05:09.827380   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
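(Editor's note: the availability probe above simply runs "exit 0" over SSH with the machine's generated key. A minimal sketch of that check, assuming golang.org/x/crypto/ssh; the key path and address are copied from this log, not hard requirements.)

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key and address as seen in the log; adjust for a real environment.
	keyPath := "/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa"
	addr := "192.168.39.172:22"

	pem, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatalf("read key: %v", err)
	}
	signer, err := ssh.ParsePrivateKey(pem)
	if err != nil {
		log.Fatalf("parse key: %v", err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the logged ssh command also disables host key checks
		Timeout:         10 * time.Second,
	}

	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatalf("session: %v", err)
	}
	defer sess.Close()

	// "exit 0" succeeds as soon as sshd accepts commands, which is all the probe needs.
	if err := sess.Run("exit 0"); err != nil {
		log.Fatalf("exit 0: %v", err)
	}
	log.Println("SSH is available")
}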
	I0703 23:05:09.827404   27242 main.go:141] libmachine: Detecting the provisioner...
	I0703 23:05:09.827412   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:09.830421   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.830736   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:09.830762   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.830957   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:09.831181   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:09.831359   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:09.831522   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:09.831674   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:05:09.831845   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:05:09.831858   27242 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0703 23:05:09.940700   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0703 23:05:09.940805   27242 main.go:141] libmachine: found compatible host: buildroot
	I0703 23:05:09.940820   27242 main.go:141] libmachine: Provisioning with buildroot...
	I0703 23:05:09.940836   27242 main.go:141] libmachine: (ha-856893) Calling .GetMachineName
	I0703 23:05:09.941067   27242 buildroot.go:166] provisioning hostname "ha-856893"
	I0703 23:05:09.941088   27242 main.go:141] libmachine: (ha-856893) Calling .GetMachineName
	I0703 23:05:09.941282   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:09.943686   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.944069   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:09.944095   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.944257   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:09.944455   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:09.944603   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:09.944740   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:09.944877   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:05:09.945060   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:05:09.945071   27242 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-856893 && echo "ha-856893" | sudo tee /etc/hostname
	I0703 23:05:10.067286   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-856893
	
	I0703 23:05:10.067311   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:10.069961   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.070287   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.070308   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.070498   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:10.070682   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:10.070896   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:10.071050   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:10.071212   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:05:10.071414   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:05:10.071431   27242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-856893' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-856893/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-856893' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 23:05:10.189893   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:05:10.189928   27242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0703 23:05:10.189959   27242 buildroot.go:174] setting up certificates
	I0703 23:05:10.189968   27242 provision.go:84] configureAuth start
	I0703 23:05:10.189976   27242 main.go:141] libmachine: (ha-856893) Calling .GetMachineName
	I0703 23:05:10.190275   27242 main.go:141] libmachine: (ha-856893) Calling .GetIP
	I0703 23:05:10.193226   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.193602   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.193625   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.193795   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:10.195779   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.196097   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.196119   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.196195   27242 provision.go:143] copyHostCerts
	I0703 23:05:10.196234   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:05:10.196277   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0703 23:05:10.196304   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:05:10.196383   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0703 23:05:10.196499   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:05:10.196528   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0703 23:05:10.196537   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:05:10.196576   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0703 23:05:10.196682   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:05:10.196702   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0703 23:05:10.196708   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:05:10.196732   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0703 23:05:10.196780   27242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.ha-856893 san=[127.0.0.1 192.168.39.172 ha-856893 localhost minikube]
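(Editor's note: configureAuth above signs a server certificate whose SANs cover 127.0.0.1, 192.168.39.172, ha-856893, localhost and minikube. A generic crypto/x509 sketch of producing such a certificate; the CA here is generated in-process for the example, whereas minikube reuses ca.pem/ca-key.pem from .minikube/certs.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA for the example; error handling is elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the DNS and IP SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-856893"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-856893", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.172")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}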
	I0703 23:05:10.449385   27242 provision.go:177] copyRemoteCerts
	I0703 23:05:10.449453   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:05:10.449480   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:10.452086   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.452311   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.452338   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.452543   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:10.452743   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:10.452885   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:10.452991   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:05:10.538502   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0703 23:05:10.538569   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0703 23:05:10.565459   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0703 23:05:10.565517   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0703 23:05:10.591713   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0703 23:05:10.591782   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0703 23:05:10.620534   27242 provision.go:87] duration metric: took 430.554362ms to configureAuth
	I0703 23:05:10.620571   27242 buildroot.go:189] setting minikube options for container-runtime
	I0703 23:05:10.620750   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:05:10.620845   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:10.623353   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.623771   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.623799   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.623935   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:10.624152   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:10.624325   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:10.624439   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:10.624606   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:05:10.624765   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:05:10.624779   27242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0703 23:05:10.904599   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0703 23:05:10.904631   27242 main.go:141] libmachine: Checking connection to Docker...
	I0703 23:05:10.904641   27242 main.go:141] libmachine: (ha-856893) Calling .GetURL
	I0703 23:05:10.905870   27242 main.go:141] libmachine: (ha-856893) DBG | Using libvirt version 6000000
	I0703 23:05:10.907791   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.908127   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.908151   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.908372   27242 main.go:141] libmachine: Docker is up and running!
	I0703 23:05:10.908390   27242 main.go:141] libmachine: Reticulating splines...
	I0703 23:05:10.908398   27242 client.go:171] duration metric: took 21.514686715s to LocalClient.Create
	I0703 23:05:10.908429   27242 start.go:167] duration metric: took 21.514763646s to libmachine.API.Create "ha-856893"
	I0703 23:05:10.908441   27242 start.go:293] postStartSetup for "ha-856893" (driver="kvm2")
	I0703 23:05:10.908451   27242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 23:05:10.908484   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:10.908725   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 23:05:10.908748   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:10.910851   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.911184   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.911225   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.911349   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:10.911538   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:10.911687   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:10.911796   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:05:10.994829   27242 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 23:05:10.999699   27242 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 23:05:10.999723   27242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0703 23:05:10.999787   27242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0703 23:05:10.999867   27242 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0703 23:05:10.999903   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /etc/ssl/certs/165742.pem
	I0703 23:05:11.000007   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0703 23:05:11.010870   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:05:11.041611   27242 start.go:296] duration metric: took 133.157203ms for postStartSetup
	I0703 23:05:11.041689   27242 main.go:141] libmachine: (ha-856893) Calling .GetConfigRaw
	I0703 23:05:11.042230   27242 main.go:141] libmachine: (ha-856893) Calling .GetIP
	I0703 23:05:11.045028   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.045417   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:11.045449   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.045801   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:05:11.046044   27242 start.go:128] duration metric: took 21.670585889s to createHost
	I0703 23:05:11.046071   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:11.048601   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.048906   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:11.048929   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.049092   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:11.049289   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:11.049445   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:11.049641   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:11.049848   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:05:11.050029   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:05:11.050041   27242 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0703 23:05:11.156804   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720047911.130080211
	
	I0703 23:05:11.156825   27242 fix.go:216] guest clock: 1720047911.130080211
	I0703 23:05:11.156833   27242 fix.go:229] Guest: 2024-07-03 23:05:11.130080211 +0000 UTC Remote: 2024-07-03 23:05:11.046058241 +0000 UTC m=+21.776314180 (delta=84.02197ms)
	I0703 23:05:11.156877   27242 fix.go:200] guest clock delta is within tolerance: 84.02197ms
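(Editor's note: the fix.go lines above compare the guest's "date +%s.%N" output with the host clock and accept the machine when the delta is within tolerance. A small sketch of that comparison; the 2s tolerance is illustrative, not minikube's constant.)

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's "date +%s.%N"-style output (seconds.nanoseconds
// since the epoch) and returns how far it is from the given local time.
func clockDelta(guestOutput string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return local.Sub(guest), nil
}

func main() {
	// Values taken from the log above.
	delta, err := clockDelta("1720047911.130080211",
		time.Date(2024, 7, 3, 23, 5, 11, 46058241, time.UTC))
	if err != nil {
		panic(err)
	}
	within := math.Abs(float64(delta)) < float64(2*time.Second)
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, within)
}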
	I0703 23:05:11.156884   27242 start.go:83] releasing machines lock for "ha-856893", held for 21.781493772s
	I0703 23:05:11.156910   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:11.157171   27242 main.go:141] libmachine: (ha-856893) Calling .GetIP
	I0703 23:05:11.159661   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.159989   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:11.160008   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.160187   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:11.160682   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:11.160849   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:11.160925   27242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 23:05:11.160975   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:11.161091   27242 ssh_runner.go:195] Run: cat /version.json
	I0703 23:05:11.161115   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:11.163570   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.163644   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.163933   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:11.163969   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:11.163996   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.164083   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.164233   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:11.164361   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:11.164513   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:11.165190   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:11.165203   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:11.165445   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:11.165456   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:05:11.165594   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:05:11.264903   27242 ssh_runner.go:195] Run: systemctl --version
	I0703 23:05:11.271362   27242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0703 23:05:11.431766   27242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0703 23:05:11.437888   27242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 23:05:11.437960   27242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 23:05:11.456204   27242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0703 23:05:11.456228   27242 start.go:494] detecting cgroup driver to use...
	I0703 23:05:11.456282   27242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 23:05:11.478288   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 23:05:11.496504   27242 docker.go:217] disabling cri-docker service (if available) ...
	I0703 23:05:11.496546   27242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0703 23:05:11.513312   27242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0703 23:05:11.529272   27242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0703 23:05:11.651791   27242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0703 23:05:11.833740   27242 docker.go:233] disabling docker service ...
	I0703 23:05:11.833798   27242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0703 23:05:11.850082   27242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0703 23:05:11.864945   27242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0703 23:05:11.993322   27242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0703 23:05:12.121368   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0703 23:05:12.136604   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 23:05:12.156727   27242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0703 23:05:12.156790   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.168812   27242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0703 23:05:12.168881   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.181117   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.193084   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.204859   27242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 23:05:12.217389   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.229489   27242 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.248248   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.260054   27242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 23:05:12.270988   27242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0703 23:05:12.271050   27242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0703 23:05:12.285900   27242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0703 23:05:12.296588   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:05:12.421931   27242 ssh_runner.go:195] Run: sudo systemctl restart crio
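(Editor's note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image to registry.k8s.io/pause:3.9 and switch CRI-O to the cgroupfs cgroup manager before restarting the service. A local, in-memory sketch of the same two rewrites; the config fragment below is invented for the example.)

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A fragment in the shape of /etc/crio/crio.conf.d/02-crio.conf (contents made up here).
	conf := `[crio.runtime]
cgroup_manager = "systemd"
pause_image = "registry.k8s.io/pause:3.8"
`
	// Equivalent of the two logged sed edits: cgroupfs cgroup manager, pause:3.9 image.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	fmt.Print(conf)
}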
	I0703 23:05:12.567694   27242 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0703 23:05:12.567771   27242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0703 23:05:12.573160   27242 start.go:562] Will wait 60s for crictl version
	I0703 23:05:12.573227   27242 ssh_runner.go:195] Run: which crictl
	I0703 23:05:12.577204   27242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 23:05:12.618785   27242 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0703 23:05:12.618858   27242 ssh_runner.go:195] Run: crio --version
	I0703 23:05:12.648983   27242 ssh_runner.go:195] Run: crio --version
	I0703 23:05:12.680410   27242 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0703 23:05:12.681677   27242 main.go:141] libmachine: (ha-856893) Calling .GetIP
	I0703 23:05:12.684268   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:12.684586   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:12.684615   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:12.684826   27242 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0703 23:05:12.689291   27242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:05:12.702754   27242 kubeadm.go:877] updating cluster {Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0703 23:05:12.702853   27242 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:05:12.702897   27242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:05:12.737089   27242 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0703 23:05:12.737156   27242 ssh_runner.go:195] Run: which lz4
	I0703 23:05:12.741174   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0703 23:05:12.741275   27242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0703 23:05:12.745594   27242 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0703 23:05:12.745632   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0703 23:05:14.273244   27242 crio.go:462] duration metric: took 1.531990406s to copy over tarball
	I0703 23:05:14.273329   27242 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0703 23:05:16.532872   27242 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.259515995s)
	I0703 23:05:16.532901   27242 crio.go:469] duration metric: took 2.259629155s to extract the tarball
	I0703 23:05:16.532912   27242 ssh_runner.go:146] rm: /preloaded.tar.lz4
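(Editor's note: the preload step above copies the ~395 MB tarball into the guest and unpacks it under /var with tar and lz4. A local-machine sketch of the same extraction command using os/exec; in minikube this runs inside the guest through ssh_runner, and the path is taken from the log.)

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same flags as the logged command: preserve security xattrs, decompress with lz4,
	// unpack into /var. Requires tar, lz4 and sudo wherever this runs.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("extract preload: %v\n%s", err, out)
	}
	log.Printf("extracted preload in %s", time.Since(start))
}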
	I0703 23:05:16.571634   27242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:05:16.617842   27242 crio.go:514] all images are preloaded for cri-o runtime.
	I0703 23:05:16.617868   27242 cache_images.go:84] Images are preloaded, skipping loading
	I0703 23:05:16.617876   27242 kubeadm.go:928] updating node { 192.168.39.172 8443 v1.30.2 crio true true} ...
	I0703 23:05:16.617964   27242 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-856893 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0703 23:05:16.618023   27242 ssh_runner.go:195] Run: crio config
	I0703 23:05:16.664162   27242 cni.go:84] Creating CNI manager for ""
	I0703 23:05:16.664181   27242 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0703 23:05:16.664189   27242 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0703 23:05:16.664210   27242 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-856893 NodeName:ha-856893 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0703 23:05:16.664387   27242 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-856893"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
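
The block above is the complete kubeadm configuration minikube renders for this node: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in a single file. A minimal sketch of how such a file can be sanity-checked by hand, assuming kubeadm v1.30.x is on the PATH and the file sits at the path minikube copies it to later in this log (the test itself does not run these commands):

    # print kubeadm's built-in defaults for comparison with the generated file
    kubeadm config print init-defaults
    # exercise the generated config without changing the node
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run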
	
	I0703 23:05:16.664413   27242 kube-vip.go:115] generating kube-vip config ...
	I0703 23:05:16.664474   27242 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0703 23:05:16.682379   27242 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0703 23:05:16.682508   27242 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
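
kube-vip runs as a static pod: once the manifest above is written to the kubelet's staticPodPath (/etc/kubernetes/manifests), the kubelet starts it without involving the API server, and the elected leader advertises the HA VIP 192.168.39.254 over ARP on eth0. A rough way to confirm this from inside the ha-856893 VM, assuming crictl and iproute2 are available in the guest:

    # the kubelet mirrors static pods found in this directory
    ls /etc/kubernetes/manifests/kube-vip.yaml
    # the VIP shows up as a secondary address on eth0 once kube-vip holds the lease
    ip addr show dev eth0 | grep 192.168.39.254
    # container state via CRI-O
    sudo crictl ps --name kube-vip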
	I0703 23:05:16.682575   27242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0703 23:05:16.693673   27242 binaries.go:44] Found k8s binaries, skipping transfer
	I0703 23:05:16.693753   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0703 23:05:16.704380   27242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0703 23:05:16.722634   27242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 23:05:16.740879   27242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0703 23:05:16.759081   27242 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0703 23:05:16.777539   27242 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0703 23:05:16.781905   27242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:05:16.795594   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:05:16.932173   27242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:05:16.960438   27242 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893 for IP: 192.168.39.172
	I0703 23:05:16.960457   27242 certs.go:194] generating shared ca certs ...
	I0703 23:05:16.960471   27242 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:16.960625   27242 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0703 23:05:16.960687   27242 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0703 23:05:16.960701   27242 certs.go:256] generating profile certs ...
	I0703 23:05:16.960769   27242 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key
	I0703 23:05:16.960789   27242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.crt with IP's: []
	I0703 23:05:17.180299   27242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.crt ...
	I0703 23:05:17.180327   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.crt: {Name:mked142f33e96cc69e07cbef413ceae8eaadb6ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:17.180495   27242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key ...
	I0703 23:05:17.180505   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key: {Name:mkda59ba7700af447f9573712b80d771070e40e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:17.180580   27242 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.7d0f4c89
	I0703 23:05:17.180594   27242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.7d0f4c89 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.172 192.168.39.254]
	I0703 23:05:17.268855   27242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.7d0f4c89 ...
	I0703 23:05:17.268884   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.7d0f4c89: {Name:mk564c544d24be22e8d81f70b99af5878e84b732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:17.269036   27242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.7d0f4c89 ...
	I0703 23:05:17.269054   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.7d0f4c89: {Name:mk2b21d824f1f5ef781a1bb28b7c84b56246aa84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:17.269126   27242 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.7d0f4c89 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt
	I0703 23:05:17.269222   27242 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.7d0f4c89 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key
	I0703 23:05:17.269280   27242 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key
	I0703 23:05:17.269296   27242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt with IP's: []
	I0703 23:05:17.337820   27242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt ...
	I0703 23:05:17.337850   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt: {Name:mk56d081fd7b738fa50b488ebdec0c915931f1d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:17.338007   27242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key ...
	I0703 23:05:17.338017   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key: {Name:mk1bfcc2bc169c4499f89205b355a5beb44be061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
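
The apiserver profile certificate generated above is signed for the node IP 192.168.39.172, the HA VIP 192.168.39.254 and the cluster service IPs. After it is copied to the node (see the scp steps below), the SANs can be confirmed with openssl, assuming openssl is present in the guest:

    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'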
	I0703 23:05:17.338083   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0703 23:05:17.338101   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0703 23:05:17.338111   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0703 23:05:17.338124   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0703 23:05:17.338136   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0703 23:05:17.338155   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0703 23:05:17.338167   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0703 23:05:17.338184   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0703 23:05:17.338228   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0703 23:05:17.338258   27242 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0703 23:05:17.338267   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0703 23:05:17.338290   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0703 23:05:17.338309   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0703 23:05:17.338334   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0703 23:05:17.338368   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:05:17.338396   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /usr/share/ca-certificates/165742.pem
	I0703 23:05:17.338409   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:05:17.338422   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem -> /usr/share/ca-certificates/16574.pem
	I0703 23:05:17.338943   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 23:05:17.367294   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 23:05:17.394625   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 23:05:17.421449   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0703 23:05:17.448364   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0703 23:05:17.478967   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0703 23:05:17.507381   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 23:05:17.535692   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0703 23:05:17.564746   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0703 23:05:17.592808   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 23:05:17.620310   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0703 23:05:17.648069   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0703 23:05:17.666458   27242 ssh_runner.go:195] Run: openssl version
	I0703 23:05:17.673016   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0703 23:05:17.685065   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0703 23:05:17.690329   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0703 23:05:17.690403   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0703 23:05:17.696993   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0703 23:05:17.709145   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 23:05:17.721321   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:05:17.726475   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:05:17.726555   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:05:17.732930   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0703 23:05:17.744956   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0703 23:05:17.759349   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0703 23:05:17.769931   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0703 23:05:17.769997   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0703 23:05:17.777908   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
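
The repeated openssl x509 -hash / ln -fs pattern above builds OpenSSL's hashed certificate directory: anything that trusts /etc/ssl/certs looks a CA up through a symlink named after its subject-name hash. The same link can be recreated generically (a sketch of the idiom, not a command the test runs):

    cert=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$cert")   # b5213941 for this CA, per the log above
    sudo ln -fs "$cert" "/etc/ssl/certs/${h}.0"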
	I0703 23:05:17.793803   27242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 23:05:17.798683   27242 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0703 23:05:17.798746   27242 kubeadm.go:391] StartCluster: {Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:05:17.798856   27242 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0703 23:05:17.798950   27242 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0703 23:05:17.857895   27242 cri.go:89] found id: ""
	I0703 23:05:17.857958   27242 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0703 23:05:17.869751   27242 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0703 23:05:17.881191   27242 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0703 23:05:17.892752   27242 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0703 23:05:17.892774   27242 kubeadm.go:156] found existing configuration files:
	
	I0703 23:05:17.892815   27242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0703 23:05:17.904127   27242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0703 23:05:17.904196   27242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0703 23:05:17.916159   27242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0703 23:05:17.927292   27242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0703 23:05:17.927363   27242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0703 23:05:17.938640   27242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0703 23:05:17.949163   27242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0703 23:05:17.949218   27242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0703 23:05:17.960636   27242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0703 23:05:17.971220   27242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0703 23:05:17.971276   27242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0703 23:05:17.982313   27242 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0703 23:05:18.243554   27242 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0703 23:05:28.408397   27242 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0703 23:05:28.408485   27242 kubeadm.go:309] [preflight] Running pre-flight checks
	I0703 23:05:28.408605   27242 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0703 23:05:28.408745   27242 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0703 23:05:28.408866   27242 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0703 23:05:28.408942   27242 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0703 23:05:28.410573   27242 out.go:204]   - Generating certificates and keys ...
	I0703 23:05:28.410647   27242 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0703 23:05:28.410731   27242 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0703 23:05:28.410801   27242 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0703 23:05:28.410850   27242 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0703 23:05:28.410900   27242 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0703 23:05:28.410954   27242 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0703 23:05:28.411002   27242 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0703 23:05:28.411118   27242 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-856893 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I0703 23:05:28.411163   27242 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0703 23:05:28.411315   27242 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-856893 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I0703 23:05:28.411421   27242 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0703 23:05:28.411509   27242 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0703 23:05:28.411572   27242 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0703 23:05:28.411648   27242 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0703 23:05:28.411722   27242 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0703 23:05:28.411796   27242 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0703 23:05:28.411892   27242 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0703 23:05:28.411981   27242 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0703 23:05:28.412064   27242 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0703 23:05:28.412191   27242 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0703 23:05:28.412266   27242 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0703 23:05:28.413911   27242 out.go:204]   - Booting up control plane ...
	I0703 23:05:28.414019   27242 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0703 23:05:28.414100   27242 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0703 23:05:28.414173   27242 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0703 23:05:28.414325   27242 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0703 23:05:28.414456   27242 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0703 23:05:28.414501   27242 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0703 23:05:28.414606   27242 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0703 23:05:28.414662   27242 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0703 23:05:28.414710   27242 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.527133ms
	I0703 23:05:28.414781   27242 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0703 23:05:28.414827   27242 kubeadm.go:309] [api-check] The API server is healthy after 6.123038103s
	I0703 23:05:28.414915   27242 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0703 23:05:28.415058   27242 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0703 23:05:28.415150   27242 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0703 23:05:28.415339   27242 kubeadm.go:309] [mark-control-plane] Marking the node ha-856893 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0703 23:05:28.415422   27242 kubeadm.go:309] [bootstrap-token] Using token: 12qvkr.qb869phsnq1wz0rf
	I0703 23:05:28.416767   27242 out.go:204]   - Configuring RBAC rules ...
	I0703 23:05:28.416884   27242 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0703 23:05:28.416965   27242 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0703 23:05:28.417123   27242 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0703 23:05:28.417274   27242 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0703 23:05:28.417401   27242 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0703 23:05:28.417511   27242 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0703 23:05:28.417640   27242 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0703 23:05:28.417710   27242 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0703 23:05:28.417779   27242 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0703 23:05:28.417788   27242 kubeadm.go:309] 
	I0703 23:05:28.417861   27242 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0703 23:05:28.417870   27242 kubeadm.go:309] 
	I0703 23:05:28.417956   27242 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0703 23:05:28.417970   27242 kubeadm.go:309] 
	I0703 23:05:28.418024   27242 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0703 23:05:28.418077   27242 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0703 23:05:28.418120   27242 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0703 23:05:28.418126   27242 kubeadm.go:309] 
	I0703 23:05:28.418170   27242 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0703 23:05:28.418175   27242 kubeadm.go:309] 
	I0703 23:05:28.418218   27242 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0703 23:05:28.418224   27242 kubeadm.go:309] 
	I0703 23:05:28.418276   27242 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0703 23:05:28.418364   27242 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0703 23:05:28.418464   27242 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0703 23:05:28.418474   27242 kubeadm.go:309] 
	I0703 23:05:28.418584   27242 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0703 23:05:28.418691   27242 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0703 23:05:28.418700   27242 kubeadm.go:309] 
	I0703 23:05:28.418808   27242 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 12qvkr.qb869phsnq1wz0rf \
	I0703 23:05:28.418931   27242 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 \
	I0703 23:05:28.418963   27242 kubeadm.go:309] 	--control-plane 
	I0703 23:05:28.418970   27242 kubeadm.go:309] 
	I0703 23:05:28.419071   27242 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0703 23:05:28.419080   27242 kubeadm.go:309] 
	I0703 23:05:28.419141   27242 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 12qvkr.qb869phsnq1wz0rf \
	I0703 23:05:28.419289   27242 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 
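
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key, so it can be recomputed on the node to verify a join command before using it. A sketch, using the CA path minikube uses (/var/lib/minikube/certs/ca.crt) rather than kubeadm's default /etc/kubernetes/pki/ca.crt:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256
    # the hex digest should match the sha256:... value printed by kubeadm init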
	I0703 23:05:28.419304   27242 cni.go:84] Creating CNI manager for ""
	I0703 23:05:28.419312   27242 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0703 23:05:28.420892   27242 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0703 23:05:28.422220   27242 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0703 23:05:28.428330   27242 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0703 23:05:28.428351   27242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0703 23:05:28.449233   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
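
The manifest applied here is the kindnet CNI deployment that cni.go recommended once more than one node was expected. A quick check that the CNI pods come up, assuming kindnet labels its pods with app=kindnet (an assumption about the manifest, not something shown in this log):

    kubectl --context ha-856893 -n kube-system get pods -l app=kindnet -o wide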
	I0703 23:05:28.863177   27242 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0703 23:05:28.863315   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:28.863314   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-856893 minikube.k8s.io/updated_at=2024_07_03T23_05_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e minikube.k8s.io/name=ha-856893 minikube.k8s.io/primary=true
	I0703 23:05:28.927963   27242 ops.go:34] apiserver oom_adj: -16
	I0703 23:05:29.030917   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:29.531769   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:30.031402   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:30.531013   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:31.031167   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:31.531765   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:32.031213   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:32.531657   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:33.031757   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:33.531759   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:34.031901   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:34.531406   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:35.032024   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:35.531604   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:36.031112   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:36.531193   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:37.031109   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:37.531156   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:38.031136   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:38.531321   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:39.031594   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:39.531996   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:40.031087   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:40.157208   27242 kubeadm.go:1107] duration metric: took 11.293952239s to wait for elevateKubeSystemPrivileges
	W0703 23:05:40.157241   27242 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0703 23:05:40.157249   27242 kubeadm.go:393] duration metric: took 22.358506374s to StartCluster
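
The repeated "kubectl get sa default" calls above are minikube polling until the "default" ServiceAccount exists, which signals that kube-controller-manager is up and the minikube-rbac binding can take effect; here that wait took about 11.3s. A standalone equivalent of the loop, as a sketch:

    # poll until kube-controller-manager has created the "default" ServiceAccount
    until kubectl --context ha-856893 get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done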
	I0703 23:05:40.157267   27242 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:40.157330   27242 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:05:40.157993   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:40.158199   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0703 23:05:40.158198   27242 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:05:40.158313   27242 start.go:240] waiting for startup goroutines ...
	I0703 23:05:40.158221   27242 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0703 23:05:40.158334   27242 addons.go:69] Setting storage-provisioner=true in profile "ha-856893"
	I0703 23:05:40.158356   27242 addons.go:234] Setting addon storage-provisioner=true in "ha-856893"
	I0703 23:05:40.158384   27242 host.go:66] Checking if "ha-856893" exists ...
	I0703 23:05:40.158405   27242 addons.go:69] Setting default-storageclass=true in profile "ha-856893"
	I0703 23:05:40.158434   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:05:40.158449   27242 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-856893"
	I0703 23:05:40.158795   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:05:40.158820   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:05:40.158913   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:05:40.158949   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:05:40.173903   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
	I0703 23:05:40.174071   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43005
	I0703 23:05:40.174340   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:05:40.174543   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:05:40.174803   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:05:40.174833   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:05:40.175065   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:05:40.175086   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:05:40.175156   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:05:40.175396   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:05:40.175549   27242 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:05:40.175675   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:05:40.175698   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:05:40.177715   27242 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:05:40.177916   27242 kapi.go:59] client config for ha-856893: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.crt", KeyFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key", CAFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfd900), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0703 23:05:40.178324   27242 cert_rotation.go:137] Starting client certificate rotation controller
	I0703 23:05:40.178475   27242 addons.go:234] Setting addon default-storageclass=true in "ha-856893"
	I0703 23:05:40.178516   27242 host.go:66] Checking if "ha-856893" exists ...
	I0703 23:05:40.178892   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:05:40.178922   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:05:40.191846   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38455
	I0703 23:05:40.192316   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:05:40.192861   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:05:40.192886   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:05:40.193260   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:05:40.193465   27242 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:05:40.194323   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I0703 23:05:40.194798   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:05:40.195263   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:05:40.195279   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:05:40.195308   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:40.195583   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:05:40.196026   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:05:40.196053   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:05:40.197291   27242 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0703 23:05:40.198820   27242 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0703 23:05:40.198841   27242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0703 23:05:40.198867   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:40.202098   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:40.202535   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:40.202559   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:40.202726   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:40.202940   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:40.203083   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:40.203211   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:05:40.211653   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40577
	I0703 23:05:40.212071   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:05:40.212561   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:05:40.212584   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:05:40.212866   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:05:40.213033   27242 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:05:40.214663   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:40.214886   27242 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0703 23:05:40.214899   27242 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0703 23:05:40.214912   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:40.217534   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:40.217883   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:40.217908   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:40.218063   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:40.218258   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:40.218411   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:40.218546   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:05:40.267153   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0703 23:05:40.358079   27242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0703 23:05:40.358732   27242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0703 23:05:40.781574   27242 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
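
The sed pipeline a few lines above rewrites the coredns ConfigMap so its Corefile gains a hosts block mapping host.minikube.internal to the host gateway 192.168.39.1 (plus a log directive), which is what the "host record injected" message confirms. The injected block can be inspected afterwards, as a sketch:

    kubectl --context ha-856893 -n kube-system get configmap coredns -o yaml \
      | grep -A4 'hosts {'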
	I0703 23:05:41.167935   27242 main.go:141] libmachine: Making call to close driver server
	I0703 23:05:41.167961   27242 main.go:141] libmachine: (ha-856893) Calling .Close
	I0703 23:05:41.168003   27242 main.go:141] libmachine: Making call to close driver server
	I0703 23:05:41.168024   27242 main.go:141] libmachine: (ha-856893) Calling .Close
	I0703 23:05:41.168442   27242 main.go:141] libmachine: (ha-856893) DBG | Closing plugin on server side
	I0703 23:05:41.168453   27242 main.go:141] libmachine: Successfully made call to close driver server
	I0703 23:05:41.168444   27242 main.go:141] libmachine: (ha-856893) DBG | Closing plugin on server side
	I0703 23:05:41.168463   27242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 23:05:41.168467   27242 main.go:141] libmachine: Successfully made call to close driver server
	I0703 23:05:41.168491   27242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 23:05:41.168500   27242 main.go:141] libmachine: Making call to close driver server
	I0703 23:05:41.168507   27242 main.go:141] libmachine: (ha-856893) Calling .Close
	I0703 23:05:41.168472   27242 main.go:141] libmachine: Making call to close driver server
	I0703 23:05:41.168551   27242 main.go:141] libmachine: (ha-856893) Calling .Close
	I0703 23:05:41.168750   27242 main.go:141] libmachine: (ha-856893) DBG | Closing plugin on server side
	I0703 23:05:41.168769   27242 main.go:141] libmachine: Successfully made call to close driver server
	I0703 23:05:41.168779   27242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 23:05:41.168794   27242 main.go:141] libmachine: Successfully made call to close driver server
	I0703 23:05:41.168802   27242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 23:05:41.168915   27242 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0703 23:05:41.168924   27242 round_trippers.go:469] Request Headers:
	I0703 23:05:41.168933   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:05:41.168937   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:05:41.179174   27242 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0703 23:05:41.179856   27242 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0703 23:05:41.179872   27242 round_trippers.go:469] Request Headers:
	I0703 23:05:41.179901   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:05:41.179907   27242 round_trippers.go:473]     Content-Type: application/json
	I0703 23:05:41.179911   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:05:41.184900   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:05:41.185231   27242 main.go:141] libmachine: Making call to close driver server
	I0703 23:05:41.185253   27242 main.go:141] libmachine: (ha-856893) Calling .Close
	I0703 23:05:41.185557   27242 main.go:141] libmachine: Successfully made call to close driver server
	I0703 23:05:41.185577   27242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 23:05:41.185585   27242 main.go:141] libmachine: (ha-856893) DBG | Closing plugin on server side
	I0703 23:05:41.187828   27242 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0703 23:05:41.188847   27242 addons.go:510] duration metric: took 1.03063116s for enable addons: enabled=[storage-provisioner default-storageclass]
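
Only storage-provisioner and default-storageclass were requested in the toEnable map earlier, so these are the two addons reported here. Outside the test harness the same state can be checked from the minikube CLI, as a sketch:

    minikube -p ha-856893 addons list
    # default-storageclass should leave "standard" marked as the default StorageClass
    kubectl --context ha-856893 get storageclass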
	I0703 23:05:41.188886   27242 start.go:245] waiting for cluster config update ...
	I0703 23:05:41.188901   27242 start.go:254] writing updated cluster config ...
	I0703 23:05:41.190310   27242 out.go:177] 
	I0703 23:05:41.191599   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:05:41.191664   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:05:41.193011   27242 out.go:177] * Starting "ha-856893-m02" control-plane node in "ha-856893" cluster
	I0703 23:05:41.194050   27242 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:05:41.194075   27242 cache.go:56] Caching tarball of preloaded images
	I0703 23:05:41.194179   27242 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:05:41.194194   27242 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 23:05:41.194269   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:05:41.194484   27242 start.go:360] acquireMachinesLock for ha-856893-m02: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 23:05:41.194535   27242 start.go:364] duration metric: took 29.239µs to acquireMachinesLock for "ha-856893-m02"
	I0703 23:05:41.194552   27242 start.go:93] Provisioning new machine with config: &{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:05:41.194614   27242 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0703 23:05:41.195906   27242 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0703 23:05:41.195988   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:05:41.196019   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:05:41.210406   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0703 23:05:41.210841   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:05:41.211288   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:05:41.211309   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:05:41.211576   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:05:41.211756   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetMachineName
	I0703 23:05:41.211861   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:05:41.212057   27242 start.go:159] libmachine.API.Create for "ha-856893" (driver="kvm2")
	I0703 23:05:41.212087   27242 client.go:168] LocalClient.Create starting
	I0703 23:05:41.212116   27242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem
	I0703 23:05:41.212148   27242 main.go:141] libmachine: Decoding PEM data...
	I0703 23:05:41.212165   27242 main.go:141] libmachine: Parsing certificate...
	I0703 23:05:41.212230   27242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem
	I0703 23:05:41.212264   27242 main.go:141] libmachine: Decoding PEM data...
	I0703 23:05:41.212288   27242 main.go:141] libmachine: Parsing certificate...
	I0703 23:05:41.212315   27242 main.go:141] libmachine: Running pre-create checks...
	I0703 23:05:41.212327   27242 main.go:141] libmachine: (ha-856893-m02) Calling .PreCreateCheck
	I0703 23:05:41.212497   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetConfigRaw
	I0703 23:05:41.212940   27242 main.go:141] libmachine: Creating machine...
	I0703 23:05:41.212958   27242 main.go:141] libmachine: (ha-856893-m02) Calling .Create
	I0703 23:05:41.213096   27242 main.go:141] libmachine: (ha-856893-m02) Creating KVM machine...
	I0703 23:05:41.214567   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found existing default KVM network
	I0703 23:05:41.214736   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found existing private KVM network mk-ha-856893
	I0703 23:05:41.214862   27242 main.go:141] libmachine: (ha-856893-m02) Setting up store path in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02 ...
	I0703 23:05:41.214887   27242 main.go:141] libmachine: (ha-856893-m02) Building disk image from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0703 23:05:41.214947   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:41.214842   27608 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:05:41.215063   27242 main.go:141] libmachine: (ha-856893-m02) Downloading /home/jenkins/minikube-integration/18998-9396/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso...
	I0703 23:05:41.436860   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:41.436749   27608 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa...
	I0703 23:05:41.523744   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:41.523612   27608 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/ha-856893-m02.rawdisk...
	I0703 23:05:41.523793   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Writing magic tar header
	I0703 23:05:41.523828   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Writing SSH key tar header
	I0703 23:05:41.523850   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:41.523749   27608 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02 ...
	I0703 23:05:41.523869   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02
	I0703 23:05:41.523955   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines
	I0703 23:05:41.523978   27242 main.go:141] libmachine: (ha-856893-m02) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02 (perms=drwx------)
	I0703 23:05:41.523990   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:05:41.524009   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396
	I0703 23:05:41.524021   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0703 23:05:41.524031   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home/jenkins
	I0703 23:05:41.524041   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home
	I0703 23:05:41.524065   27242 main.go:141] libmachine: (ha-856893-m02) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines (perms=drwxr-xr-x)
	I0703 23:05:41.524084   27242 main.go:141] libmachine: (ha-856893-m02) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube (perms=drwxr-xr-x)
	I0703 23:05:41.524093   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Skipping /home - not owner
	I0703 23:05:41.524132   27242 main.go:141] libmachine: (ha-856893-m02) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396 (perms=drwxrwxr-x)
	I0703 23:05:41.524151   27242 main.go:141] libmachine: (ha-856893-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0703 23:05:41.524184   27242 main.go:141] libmachine: (ha-856893-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0703 23:05:41.524203   27242 main.go:141] libmachine: (ha-856893-m02) Creating domain...
	I0703 23:05:41.525176   27242 main.go:141] libmachine: (ha-856893-m02) define libvirt domain using xml: 
	I0703 23:05:41.525194   27242 main.go:141] libmachine: (ha-856893-m02) <domain type='kvm'>
	I0703 23:05:41.525204   27242 main.go:141] libmachine: (ha-856893-m02)   <name>ha-856893-m02</name>
	I0703 23:05:41.525211   27242 main.go:141] libmachine: (ha-856893-m02)   <memory unit='MiB'>2200</memory>
	I0703 23:05:41.525218   27242 main.go:141] libmachine: (ha-856893-m02)   <vcpu>2</vcpu>
	I0703 23:05:41.525225   27242 main.go:141] libmachine: (ha-856893-m02)   <features>
	I0703 23:05:41.525234   27242 main.go:141] libmachine: (ha-856893-m02)     <acpi/>
	I0703 23:05:41.525250   27242 main.go:141] libmachine: (ha-856893-m02)     <apic/>
	I0703 23:05:41.525262   27242 main.go:141] libmachine: (ha-856893-m02)     <pae/>
	I0703 23:05:41.525274   27242 main.go:141] libmachine: (ha-856893-m02)     
	I0703 23:05:41.525286   27242 main.go:141] libmachine: (ha-856893-m02)   </features>
	I0703 23:05:41.525297   27242 main.go:141] libmachine: (ha-856893-m02)   <cpu mode='host-passthrough'>
	I0703 23:05:41.525308   27242 main.go:141] libmachine: (ha-856893-m02)   
	I0703 23:05:41.525316   27242 main.go:141] libmachine: (ha-856893-m02)   </cpu>
	I0703 23:05:41.525325   27242 main.go:141] libmachine: (ha-856893-m02)   <os>
	I0703 23:05:41.525336   27242 main.go:141] libmachine: (ha-856893-m02)     <type>hvm</type>
	I0703 23:05:41.525356   27242 main.go:141] libmachine: (ha-856893-m02)     <boot dev='cdrom'/>
	I0703 23:05:41.525376   27242 main.go:141] libmachine: (ha-856893-m02)     <boot dev='hd'/>
	I0703 23:05:41.525387   27242 main.go:141] libmachine: (ha-856893-m02)     <bootmenu enable='no'/>
	I0703 23:05:41.525398   27242 main.go:141] libmachine: (ha-856893-m02)   </os>
	I0703 23:05:41.525409   27242 main.go:141] libmachine: (ha-856893-m02)   <devices>
	I0703 23:05:41.525425   27242 main.go:141] libmachine: (ha-856893-m02)     <disk type='file' device='cdrom'>
	I0703 23:05:41.525442   27242 main.go:141] libmachine: (ha-856893-m02)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/boot2docker.iso'/>
	I0703 23:05:41.525453   27242 main.go:141] libmachine: (ha-856893-m02)       <target dev='hdc' bus='scsi'/>
	I0703 23:05:41.525461   27242 main.go:141] libmachine: (ha-856893-m02)       <readonly/>
	I0703 23:05:41.525468   27242 main.go:141] libmachine: (ha-856893-m02)     </disk>
	I0703 23:05:41.525474   27242 main.go:141] libmachine: (ha-856893-m02)     <disk type='file' device='disk'>
	I0703 23:05:41.525481   27242 main.go:141] libmachine: (ha-856893-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0703 23:05:41.525510   27242 main.go:141] libmachine: (ha-856893-m02)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/ha-856893-m02.rawdisk'/>
	I0703 23:05:41.525531   27242 main.go:141] libmachine: (ha-856893-m02)       <target dev='hda' bus='virtio'/>
	I0703 23:05:41.525547   27242 main.go:141] libmachine: (ha-856893-m02)     </disk>
	I0703 23:05:41.525564   27242 main.go:141] libmachine: (ha-856893-m02)     <interface type='network'>
	I0703 23:05:41.525578   27242 main.go:141] libmachine: (ha-856893-m02)       <source network='mk-ha-856893'/>
	I0703 23:05:41.525589   27242 main.go:141] libmachine: (ha-856893-m02)       <model type='virtio'/>
	I0703 23:05:41.525602   27242 main.go:141] libmachine: (ha-856893-m02)     </interface>
	I0703 23:05:41.525613   27242 main.go:141] libmachine: (ha-856893-m02)     <interface type='network'>
	I0703 23:05:41.525639   27242 main.go:141] libmachine: (ha-856893-m02)       <source network='default'/>
	I0703 23:05:41.525649   27242 main.go:141] libmachine: (ha-856893-m02)       <model type='virtio'/>
	I0703 23:05:41.525661   27242 main.go:141] libmachine: (ha-856893-m02)     </interface>
	I0703 23:05:41.525671   27242 main.go:141] libmachine: (ha-856893-m02)     <serial type='pty'>
	I0703 23:05:41.525684   27242 main.go:141] libmachine: (ha-856893-m02)       <target port='0'/>
	I0703 23:05:41.525699   27242 main.go:141] libmachine: (ha-856893-m02)     </serial>
	I0703 23:05:41.525711   27242 main.go:141] libmachine: (ha-856893-m02)     <console type='pty'>
	I0703 23:05:41.525723   27242 main.go:141] libmachine: (ha-856893-m02)       <target type='serial' port='0'/>
	I0703 23:05:41.525733   27242 main.go:141] libmachine: (ha-856893-m02)     </console>
	I0703 23:05:41.525743   27242 main.go:141] libmachine: (ha-856893-m02)     <rng model='virtio'>
	I0703 23:05:41.525757   27242 main.go:141] libmachine: (ha-856893-m02)       <backend model='random'>/dev/random</backend>
	I0703 23:05:41.525778   27242 main.go:141] libmachine: (ha-856893-m02)     </rng>
	I0703 23:05:41.525789   27242 main.go:141] libmachine: (ha-856893-m02)     
	I0703 23:05:41.525797   27242 main.go:141] libmachine: (ha-856893-m02)     
	I0703 23:05:41.525806   27242 main.go:141] libmachine: (ha-856893-m02)   </devices>
	I0703 23:05:41.525815   27242 main.go:141] libmachine: (ha-856893-m02) </domain>
	I0703 23:05:41.525826   27242 main.go:141] libmachine: (ha-856893-m02) 
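
The block above is the literal libvirt domain XML the kvm2 driver defines for the new node: 2 vCPUs, 2200 MiB of memory, a raw virtio disk, and two virtio NICs (the private mk-ha-856893 network plus the default network). As a rough illustration only (not the driver's actual code; the struct fields and template body below are assumptions), a self-contained Go program that renders a comparable definition with text/template could look like this:

package main

import (
	"os"
	"text/template"
)

// domainSpec carries the handful of values that vary per node in the XML above.
type domainSpec struct {
	Name     string // e.g. "ha-856893-m02"
	MemoryMB int    // e.g. 2200
	VCPUs    int    // e.g. 2
	Network  string // e.g. "mk-ha-856893"
	DiskPath string // path to the node's .rawdisk image
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>`

func main() {
	spec := domainSpec{Name: "ha-856893-m02", MemoryMB: 2200, VCPUs: 2,
		Network: "mk-ha-856893", DiskPath: "/tmp/ha-856893-m02.rawdisk"}
	// Render to stdout; a real driver would hand the result to libvirt's define call.
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	if err := tmpl.Execute(os.Stdout, spec); err != nil {
		panic(err)
	}
}
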
	I0703 23:05:41.532564   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:87:47:a5 in network default
	I0703 23:05:41.533109   27242 main.go:141] libmachine: (ha-856893-m02) Ensuring networks are active...
	I0703 23:05:41.533130   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:41.533788   27242 main.go:141] libmachine: (ha-856893-m02) Ensuring network default is active
	I0703 23:05:41.534054   27242 main.go:141] libmachine: (ha-856893-m02) Ensuring network mk-ha-856893 is active
	I0703 23:05:41.534401   27242 main.go:141] libmachine: (ha-856893-m02) Getting domain xml...
	I0703 23:05:41.535101   27242 main.go:141] libmachine: (ha-856893-m02) Creating domain...
	I0703 23:05:42.768845   27242 main.go:141] libmachine: (ha-856893-m02) Waiting to get IP...
	I0703 23:05:42.769571   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:42.769959   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:42.770003   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:42.769952   27608 retry.go:31] will retry after 219.708119ms: waiting for machine to come up
	I0703 23:05:42.991437   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:42.991986   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:42.992017   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:42.991932   27608 retry.go:31] will retry after 272.434306ms: waiting for machine to come up
	I0703 23:05:43.266445   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:43.266888   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:43.266916   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:43.266846   27608 retry.go:31] will retry after 435.377928ms: waiting for machine to come up
	I0703 23:05:43.703359   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:43.703810   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:43.703838   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:43.703758   27608 retry.go:31] will retry after 451.040954ms: waiting for machine to come up
	I0703 23:05:44.156129   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:44.156655   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:44.156683   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:44.156609   27608 retry.go:31] will retry after 760.280274ms: waiting for machine to come up
	I0703 23:05:44.918103   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:44.918554   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:44.918579   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:44.918505   27608 retry.go:31] will retry after 698.518733ms: waiting for machine to come up
	I0703 23:05:45.618162   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:45.618587   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:45.618614   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:45.618539   27608 retry.go:31] will retry after 993.528309ms: waiting for machine to come up
	I0703 23:05:46.614158   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:46.614719   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:46.614745   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:46.614678   27608 retry.go:31] will retry after 1.327932051s: waiting for machine to come up
	I0703 23:05:47.944596   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:47.945018   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:47.945045   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:47.944978   27608 retry.go:31] will retry after 1.683564403s: waiting for machine to come up
	I0703 23:05:49.630786   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:49.631090   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:49.631116   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:49.631040   27608 retry.go:31] will retry after 1.84507818s: waiting for machine to come up
	I0703 23:05:51.477398   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:51.477872   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:51.477893   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:51.477839   27608 retry.go:31] will retry after 1.786726505s: waiting for machine to come up
	I0703 23:05:53.266749   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:53.267104   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:53.267133   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:53.267086   27608 retry.go:31] will retry after 3.479688612s: waiting for machine to come up
	I0703 23:05:56.748688   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:56.749070   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:56.749097   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:56.749047   27608 retry.go:31] will retry after 3.495058467s: waiting for machine to come up
	I0703 23:06:00.248588   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:00.249038   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:06:00.249062   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:06:00.248993   27608 retry.go:31] will retry after 4.710071103s: waiting for machine to come up
	I0703 23:06:04.963165   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:04.963558   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has current primary IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:04.963579   27242 main.go:141] libmachine: (ha-856893-m02) Found IP for machine: 192.168.39.157
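
The repeated "will retry after ..." lines above are a plain poll with a growing, jittered delay: the driver keeps asking the libvirt network's DHCP leases for the new MAC address until an IP appears (here, after roughly 23 seconds). A minimal Go sketch of that pattern, with lookupIP as a hypothetical placeholder for whatever reads the leases, might look like:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP with an increasing, slightly jittered delay until an
// address is returned or the timeout elapses.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		// Grow the delay and add jitter, roughly matching the logged intervals.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 { // simulate a few misses before the lease shows up
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.157", nil
	}, 2*time.Minute)
	fmt.Println(ip, err)
}
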
	I0703 23:06:04.963599   27242 main.go:141] libmachine: (ha-856893-m02) Reserving static IP address...
	I0703 23:06:04.963959   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find host DHCP lease matching {name: "ha-856893-m02", mac: "52:54:00:88:5c:3d", ip: "192.168.39.157"} in network mk-ha-856893
	I0703 23:06:05.043210   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Getting to WaitForSSH function...
	I0703 23:06:05.043242   27242 main.go:141] libmachine: (ha-856893-m02) Reserved static IP address: 192.168.39.157
	I0703 23:06:05.043256   27242 main.go:141] libmachine: (ha-856893-m02) Waiting for SSH to be available...
	I0703 23:06:05.045810   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:05.046139   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893
	I0703 23:06:05.046163   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find defined IP address of network mk-ha-856893 interface with MAC address 52:54:00:88:5c:3d
	I0703 23:06:05.046324   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Using SSH client type: external
	I0703 23:06:05.046345   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa (-rw-------)
	I0703 23:06:05.046421   27242 main.go:141] libmachine: (ha-856893-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 23:06:05.046443   27242 main.go:141] libmachine: (ha-856893-m02) DBG | About to run SSH command:
	I0703 23:06:05.046462   27242 main.go:141] libmachine: (ha-856893-m02) DBG | exit 0
	I0703 23:06:05.050096   27242 main.go:141] libmachine: (ha-856893-m02) DBG | SSH cmd err, output: exit status 255: 
	I0703 23:06:05.050114   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0703 23:06:05.050124   27242 main.go:141] libmachine: (ha-856893-m02) DBG | command : exit 0
	I0703 23:06:05.050131   27242 main.go:141] libmachine: (ha-856893-m02) DBG | err     : exit status 255
	I0703 23:06:05.050140   27242 main.go:141] libmachine: (ha-856893-m02) DBG | output  : 
	I0703 23:06:08.051925   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Getting to WaitForSSH function...
	I0703 23:06:08.055727   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.056153   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.056179   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.056333   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Using SSH client type: external
	I0703 23:06:08.056344   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa (-rw-------)
	I0703 23:06:08.056368   27242 main.go:141] libmachine: (ha-856893-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.157 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 23:06:08.056380   27242 main.go:141] libmachine: (ha-856893-m02) DBG | About to run SSH command:
	I0703 23:06:08.056395   27242 main.go:141] libmachine: (ha-856893-m02) DBG | exit 0
	I0703 23:06:08.180086   27242 main.go:141] libmachine: (ha-856893-m02) DBG | SSH cmd err, output: <nil>: 
	I0703 23:06:08.180375   27242 main.go:141] libmachine: (ha-856893-m02) KVM machine creation complete!
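
The WaitForSSH step above boils down to running `exit 0` over ssh with non-interactive options until it returns status 0 (the first attempt fails with status 255 because sshd is not up yet; the second succeeds). An illustrative Go equivalent using the same flags and key path shown in the log, not minikube's implementation, is:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady reports whether "exit 0" can be run over ssh as the docker user.
func sshReady(addr, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "PasswordAuthentication=no",
		"-i", keyPath,
		"docker@"+addr, "exit 0")
	return cmd.Run() == nil // exit status 0 means sshd accepted the key
}

func main() {
	key := "/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa"
	for !sshReady("192.168.39.157", key) {
		time.Sleep(3 * time.Second) // the log shows roughly a 3s gap between attempts
	}
	fmt.Println("SSH is available")
}
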
	I0703 23:06:08.180680   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetConfigRaw
	I0703 23:06:08.181273   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:08.181472   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:08.181738   27242 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0703 23:06:08.181772   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetState
	I0703 23:06:08.183073   27242 main.go:141] libmachine: Detecting operating system of created instance...
	I0703 23:06:08.183084   27242 main.go:141] libmachine: Waiting for SSH to be available...
	I0703 23:06:08.183090   27242 main.go:141] libmachine: Getting to WaitForSSH function...
	I0703 23:06:08.183097   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.185510   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.185869   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.185885   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.186103   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:08.186258   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.186404   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.186562   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:08.186737   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:06:08.186953   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0703 23:06:08.186971   27242 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0703 23:06:08.287312   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:06:08.287335   27242 main.go:141] libmachine: Detecting the provisioner...
	I0703 23:06:08.287345   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.289859   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.290230   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.290255   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.290391   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:08.290601   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.290826   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.290992   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:08.291192   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:06:08.291400   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0703 23:06:08.291413   27242 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0703 23:06:08.397296   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0703 23:06:08.397352   27242 main.go:141] libmachine: found compatible host: buildroot
	I0703 23:06:08.397358   27242 main.go:141] libmachine: Provisioning with buildroot...
	I0703 23:06:08.397365   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetMachineName
	I0703 23:06:08.397596   27242 buildroot.go:166] provisioning hostname "ha-856893-m02"
	I0703 23:06:08.397609   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetMachineName
	I0703 23:06:08.397805   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.400446   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.400800   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.400824   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.401028   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:08.401213   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.401394   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.401516   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:08.401657   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:06:08.401840   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0703 23:06:08.401855   27242 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-856893-m02 && echo "ha-856893-m02" | sudo tee /etc/hostname
	I0703 23:06:08.520319   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-856893-m02
	
	I0703 23:06:08.520345   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.522961   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.523341   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.523368   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.523587   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:08.523781   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.523977   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.524116   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:08.524312   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:06:08.524466   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0703 23:06:08.524481   27242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-856893-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-856893-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-856893-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 23:06:08.633867   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:06:08.633900   27242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0703 23:06:08.633921   27242 buildroot.go:174] setting up certificates
	I0703 23:06:08.633932   27242 provision.go:84] configureAuth start
	I0703 23:06:08.633945   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetMachineName
	I0703 23:06:08.634242   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetIP
	I0703 23:06:08.637222   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.637606   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.637629   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.637798   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.640510   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.640861   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.640885   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.641040   27242 provision.go:143] copyHostCerts
	I0703 23:06:08.641075   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:06:08.641110   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0703 23:06:08.641119   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:06:08.641188   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0703 23:06:08.641264   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:06:08.641289   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0703 23:06:08.641295   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:06:08.641319   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0703 23:06:08.641363   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:06:08.641379   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0703 23:06:08.641385   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:06:08.641406   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0703 23:06:08.641461   27242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.ha-856893-m02 san=[127.0.0.1 192.168.39.157 ha-856893-m02 localhost minikube]
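
The server certificate generated here carries SANs for 127.0.0.1, the node IP, the hostname, localhost and minikube, and is signed by the local CA under .minikube/certs. A self-contained Go sketch of that shape with crypto/x509 (throwaway CA, trimmed error handling, values echoing the log; not minikube's provision code) follows:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a server certificate for the node, signed by ca/caKey.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-856893-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-856893-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.157")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// Throwaway self-signed CA, standing in for .minikube/certs/ca.pem and ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(2024),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	der, _, err := newServerCert(ca, caKey)
	fmt.Println(len(der), err)
}
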
	I0703 23:06:08.796742   27242 provision.go:177] copyRemoteCerts
	I0703 23:06:08.796795   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:06:08.796849   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.799514   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.799786   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.799814   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.800039   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:08.800233   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.800418   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:08.800539   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa Username:docker}
	I0703 23:06:08.882648   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0703 23:06:08.882725   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0703 23:06:08.909249   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0703 23:06:08.909332   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0703 23:06:08.935044   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0703 23:06:08.935123   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0703 23:06:08.961479   27242 provision.go:87] duration metric: took 327.532705ms to configureAuth
	I0703 23:06:08.961528   27242 buildroot.go:189] setting minikube options for container-runtime
	I0703 23:06:08.961731   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:06:08.961796   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.964260   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.964562   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.964599   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.964761   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:08.964962   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.965132   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.965255   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:08.965414   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:06:08.965748   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0703 23:06:08.965776   27242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0703 23:06:09.252115   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0703 23:06:09.252149   27242 main.go:141] libmachine: Checking connection to Docker...
	I0703 23:06:09.252160   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetURL
	I0703 23:06:09.253575   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Using libvirt version 6000000
	I0703 23:06:09.255956   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.256313   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.256339   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.256506   27242 main.go:141] libmachine: Docker is up and running!
	I0703 23:06:09.256517   27242 main.go:141] libmachine: Reticulating splines...
	I0703 23:06:09.256522   27242 client.go:171] duration metric: took 28.044426812s to LocalClient.Create
	I0703 23:06:09.256545   27242 start.go:167] duration metric: took 28.044488456s to libmachine.API.Create "ha-856893"
	I0703 23:06:09.256558   27242 start.go:293] postStartSetup for "ha-856893-m02" (driver="kvm2")
	I0703 23:06:09.256571   27242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 23:06:09.256597   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:09.256867   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 23:06:09.256898   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:09.258897   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.259196   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.259239   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.259356   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:09.259535   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:09.259720   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:09.259905   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa Username:docker}
	I0703 23:06:09.343496   27242 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 23:06:09.347947   27242 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 23:06:09.347969   27242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0703 23:06:09.348034   27242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0703 23:06:09.348116   27242 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0703 23:06:09.348127   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /etc/ssl/certs/165742.pem
	I0703 23:06:09.348228   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0703 23:06:09.358974   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:06:09.386575   27242 start.go:296] duration metric: took 129.995195ms for postStartSetup
	I0703 23:06:09.386638   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetConfigRaw
	I0703 23:06:09.387232   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetIP
	I0703 23:06:09.389784   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.390091   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.390121   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.390365   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:06:09.390569   27242 start.go:128] duration metric: took 28.195940074s to createHost
	I0703 23:06:09.390602   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:09.392949   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.393304   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.393332   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.393472   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:09.393668   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:09.393812   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:09.393960   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:09.394148   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:06:09.394332   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0703 23:06:09.394343   27242 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0703 23:06:09.496753   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720047969.477411010
	
	I0703 23:06:09.496773   27242 fix.go:216] guest clock: 1720047969.477411010
	I0703 23:06:09.496780   27242 fix.go:229] Guest: 2024-07-03 23:06:09.47741101 +0000 UTC Remote: 2024-07-03 23:06:09.39059124 +0000 UTC m=+80.120847171 (delta=86.81977ms)
	I0703 23:06:09.496794   27242 fix.go:200] guest clock delta is within tolerance: 86.81977ms
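
The guest-clock check above compares the timestamp echoed by `date` on the guest with the host-side wall clock and only resyncs when the difference exceeds a tolerance; here the delta is about 86.8ms, so nothing is adjusted. A tiny Go illustration using the two values from the log (the 2s tolerance is an assumption of this sketch, not the value minikube uses):

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1720047969, 477411010)                              // clock read on the guest
	remote := time.Date(2024, time.July, 3, 23, 6, 9, 390591240, time.UTC) // host-side timestamp
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed for this sketch
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync the guest clock\n", delta)
	}
}
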
	I0703 23:06:09.496803   27242 start.go:83] releasing machines lock for "ha-856893-m02", held for 28.302255725s
	I0703 23:06:09.496818   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:09.497106   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetIP
	I0703 23:06:09.499993   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.500377   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.500405   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.502889   27242 out.go:177] * Found network options:
	I0703 23:06:09.504348   27242 out.go:177]   - NO_PROXY=192.168.39.172
	W0703 23:06:09.505618   27242 proxy.go:119] fail to check proxy env: Error ip not in block
	I0703 23:06:09.505646   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:09.506197   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:09.506364   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:09.506442   27242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 23:06:09.506485   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	W0703 23:06:09.506549   27242 proxy.go:119] fail to check proxy env: Error ip not in block
	I0703 23:06:09.506631   27242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0703 23:06:09.506648   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:09.509646   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.509683   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.510044   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.510071   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.510094   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.510105   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.510284   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:09.510625   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:09.510701   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:09.510771   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:09.510887   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:09.510891   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:09.511011   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa Username:docker}
	I0703 23:06:09.511022   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa Username:docker}
	I0703 23:06:09.748974   27242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0703 23:06:09.754928   27242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 23:06:09.754991   27242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 23:06:09.773195   27242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0703 23:06:09.773218   27242 start.go:494] detecting cgroup driver to use...
	I0703 23:06:09.773284   27242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 23:06:09.791699   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 23:06:09.808279   27242 docker.go:217] disabling cri-docker service (if available) ...
	I0703 23:06:09.808345   27242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0703 23:06:09.824370   27242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0703 23:06:09.839742   27242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0703 23:06:09.976077   27242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0703 23:06:10.157590   27242 docker.go:233] disabling docker service ...
	I0703 23:06:10.157655   27242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0703 23:06:10.173171   27242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0703 23:06:10.187323   27242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0703 23:06:10.317842   27242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0703 23:06:10.448801   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0703 23:06:10.464012   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 23:06:10.484552   27242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0703 23:06:10.484626   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.495842   27242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0703 23:06:10.495962   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.507047   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.518157   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.529601   27242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 23:06:10.541072   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.552143   27242 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.570995   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.582051   27242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 23:06:10.592526   27242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0703 23:06:10.592586   27242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0703 23:06:10.607423   27242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0703 23:06:10.617890   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:06:10.738828   27242 ssh_runner.go:195] Run: sudo systemctl restart crio
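The log above shows minikube pinning the pause image and switching CRI-O to the cgroupfs cgroup manager by sed-editing /etc/crio/crio.conf.d/02-crio.conf over SSH, then restarting the service. A minimal Go sketch of the same two edits applied directly on a node is below; it is not minikube's implementation, and the config path and file mode are assumptions for illustration.

```go
// Minimal sketch (not minikube's code): apply the same two edits the log shows —
// pin the pause image and set cgroup_manager to "cgroupfs" — to the CRI-O drop-in.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // assumed path, as in the log
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	// CRI-O must be restarted afterwards (the log runs `sudo systemctl restart crio`).
}
```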
	I0703 23:06:10.888735   27242 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0703 23:06:10.888797   27242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0703 23:06:10.894395   27242 start.go:562] Will wait 60s for crictl version
	I0703 23:06:10.894461   27242 ssh_runner.go:195] Run: which crictl
	I0703 23:06:10.898671   27242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 23:06:10.940941   27242 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0703 23:06:10.941015   27242 ssh_runner.go:195] Run: crio --version
	I0703 23:06:10.971313   27242 ssh_runner.go:195] Run: crio --version
	I0703 23:06:11.002905   27242 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0703 23:06:11.004738   27242 out.go:177]   - env NO_PROXY=192.168.39.172
	I0703 23:06:11.006065   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetIP
	I0703 23:06:11.008543   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:11.008879   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:11.008909   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:11.009050   27242 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0703 23:06:11.013641   27242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:06:11.027727   27242 mustload.go:65] Loading cluster: ha-856893
	I0703 23:06:11.027975   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:06:11.028270   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:06:11.028323   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:06:11.044531   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44795
	I0703 23:06:11.045043   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:06:11.045558   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:06:11.045579   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:06:11.045862   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:06:11.046039   27242 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:06:11.047494   27242 host.go:66] Checking if "ha-856893" exists ...
	I0703 23:06:11.047885   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:06:11.047930   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:06:11.062704   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37955
	I0703 23:06:11.063093   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:06:11.063555   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:06:11.063572   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:06:11.063895   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:06:11.064071   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:06:11.064261   27242 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893 for IP: 192.168.39.157
	I0703 23:06:11.064278   27242 certs.go:194] generating shared ca certs ...
	I0703 23:06:11.064297   27242 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:06:11.064442   27242 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0703 23:06:11.064488   27242 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0703 23:06:11.064502   27242 certs.go:256] generating profile certs ...
	I0703 23:06:11.064611   27242 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key
	I0703 23:06:11.064645   27242 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.a492e42b
	I0703 23:06:11.064664   27242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.a492e42b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.172 192.168.39.157 192.168.39.254]
	I0703 23:06:11.125542   27242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.a492e42b ...
	I0703 23:06:11.125570   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.a492e42b: {Name:mk6b6ba77f2115f78526ecec09853230dd3e53c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:06:11.125732   27242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.a492e42b ...
	I0703 23:06:11.125745   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.a492e42b: {Name:mkf063a91f34b3b9346f6b304c5ea881bd2f5324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:06:11.125812   27242 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.a492e42b -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt
	I0703 23:06:11.125946   27242 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.a492e42b -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key
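The lines above show a fresh apiserver serving certificate being generated for the new control-plane node, signed by the shared minikubeCA and carrying the SAN IPs listed in the log (service IP 10.96.0.1, 127.0.0.1, 10.0.0.1, both node IPs, and the HA VIP 192.168.39.254). A hedged sketch of issuing such a certificate with Go's crypto/x509 follows; file names, the PKCS#1 key encoding, and the validity period are assumptions, not minikube's actual cert code.

```go
// Minimal sketch (assumptions: PEM CA cert "ca.crt" and PKCS#1 RSA key "ca.key"):
// issue an apiserver serving cert with the SAN IP set shown in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, _ := os.ReadFile("ca.crt")
	caKeyPEM, _ := os.ReadFile("ca.key")
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		panic(err)
	}

	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // SANs from the log line above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.172"), net.ParseIP("192.168.39.157"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
}
```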
	I0703 23:06:11.126068   27242 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key
	I0703 23:06:11.126083   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0703 23:06:11.126094   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0703 23:06:11.126107   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0703 23:06:11.126119   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0703 23:06:11.126131   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0703 23:06:11.126143   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0703 23:06:11.126156   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0703 23:06:11.126174   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0703 23:06:11.126219   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0703 23:06:11.126254   27242 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0703 23:06:11.126262   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0703 23:06:11.126284   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0703 23:06:11.126304   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0703 23:06:11.126325   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0703 23:06:11.126365   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:06:11.126389   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /usr/share/ca-certificates/165742.pem
	I0703 23:06:11.126403   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:06:11.126414   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem -> /usr/share/ca-certificates/16574.pem
	I0703 23:06:11.126446   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:06:11.129130   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:06:11.129526   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:06:11.129547   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:06:11.129763   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:06:11.129991   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:06:11.130155   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:06:11.130308   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:06:11.208220   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0703 23:06:11.214445   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0703 23:06:11.227338   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0703 23:06:11.232205   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0703 23:06:11.244770   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0703 23:06:11.249486   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0703 23:06:11.263595   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0703 23:06:11.268404   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0703 23:06:11.280311   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0703 23:06:11.284783   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0703 23:06:11.296982   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0703 23:06:11.301718   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0703 23:06:11.316760   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 23:06:11.344751   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 23:06:11.372405   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 23:06:11.399264   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0703 23:06:11.425913   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0703 23:06:11.453127   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0703 23:06:11.480939   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 23:06:11.507887   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0703 23:06:11.536077   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0703 23:06:11.562896   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 23:06:11.589792   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0703 23:06:11.619857   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0703 23:06:11.638186   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0703 23:06:11.658574   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0703 23:06:11.681046   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0703 23:06:11.699440   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0703 23:06:11.717487   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0703 23:06:11.735967   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0703 23:06:11.756625   27242 ssh_runner.go:195] Run: openssl version
	I0703 23:06:11.763174   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0703 23:06:11.777088   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0703 23:06:11.782196   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0703 23:06:11.782262   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0703 23:06:11.789061   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0703 23:06:11.802412   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 23:06:11.815542   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:06:11.820664   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:06:11.820720   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:06:11.827137   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0703 23:06:11.839737   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0703 23:06:11.852655   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0703 23:06:11.857826   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0703 23:06:11.857882   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0703 23:06:11.863859   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0703 23:06:11.875860   27242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 23:06:11.880842   27242 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0703 23:06:11.880910   27242 kubeadm.go:928] updating node {m02 192.168.39.157 8443 v1.30.2 crio true true} ...
	I0703 23:06:11.880993   27242 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-856893-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0703 23:06:11.881017   27242 kube-vip.go:115] generating kube-vip config ...
	I0703 23:06:11.881059   27242 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0703 23:06:11.901217   27242 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0703 23:06:11.901292   27242 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
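The generated kube-vip manifest above enables leader election for the control-plane VIP using the "plndr-cp-lock" lease in kube-system. As a quick way to see which node currently owns the VIP, the hedged client-go sketch below reads that lease; it is an out-of-tree helper written for illustration, not part of minikube, and the kubeconfig path is an assumption.

```go
// Minimal sketch: print the current kube-vip leader by reading the
// "plndr-cp-lock" Lease named in the config above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	lease, err := cs.CoordinationV1().Leases("kube-system").Get(context.Background(), "plndr-cp-lock", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if lease.Spec.HolderIdentity != nil {
		fmt.Println("kube-vip leader:", *lease.Spec.HolderIdentity)
	}
}
```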
	I0703 23:06:11.901361   27242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0703 23:06:11.912603   27242 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0703 23:06:11.912662   27242 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0703 23:06:11.923700   27242 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0703 23:06:11.923725   27242 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0703 23:06:11.923738   27242 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0703 23:06:11.923750   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0703 23:06:11.923823   27242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0703 23:06:11.930352   27242 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0703 23:06:11.930395   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0703 23:06:18.577968   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0703 23:06:18.578050   27242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0703 23:06:18.584084   27242 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0703 23:06:18.584127   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0703 23:06:24.489268   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:06:24.506069   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0703 23:06:24.506160   27242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0703 23:06:24.510885   27242 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0703 23:06:24.510927   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
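The binaries above are downloaded from dl.k8s.io with a companion .sha256 checksum URL and cached before being copied to the node. A small Go sketch for verifying a cached binary against its checksum file is below; it assumes the .sha256 file contains only the hex digest (as the dl.k8s.io checksum files the log references appear to), and the local file names are illustrative.

```go
// Minimal sketch: verify a cached kubelet binary against its .sha256 digest
// before copying it onto the node.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

func main() {
	bin, err := os.ReadFile("kubelet") // assumed local cache file
	if err != nil {
		panic(err)
	}
	want, err := os.ReadFile("kubelet.sha256") // assumed digest-only file
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(bin)
	if hex.EncodeToString(sum[:]) != strings.TrimSpace(string(want)) {
		panic("checksum mismatch")
	}
	fmt.Println("kubelet checksum OK")
}
```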
	I0703 23:06:24.948564   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0703 23:06:24.961462   27242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0703 23:06:24.980150   27242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 23:06:24.998455   27242 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0703 23:06:25.016528   27242 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0703 23:06:25.020797   27242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:06:25.034283   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:06:25.172768   27242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:06:25.191293   27242 host.go:66] Checking if "ha-856893" exists ...
	I0703 23:06:25.191893   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:06:25.191940   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:06:25.207801   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38569
	I0703 23:06:25.208291   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:06:25.208871   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:06:25.208895   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:06:25.209219   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:06:25.209391   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:06:25.209509   27242 start.go:316] joinCluster: &{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0703 23:06:25.209636   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0703 23:06:25.209656   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:06:25.213110   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:06:25.213539   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:06:25.213572   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:06:25.213846   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:06:25.214062   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:06:25.214220   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:06:25.214382   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:06:25.391200   27242 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:06:25.391247   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bfeyib.89k5hf5p18zb6r7t --discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-856893-m02 --control-plane --apiserver-advertise-address=192.168.39.157 --apiserver-bind-port=8443"
	I0703 23:06:47.544091   27242 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bfeyib.89k5hf5p18zb6r7t --discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-856893-m02 --control-plane --apiserver-advertise-address=192.168.39.157 --apiserver-bind-port=8443": (22.152804646s)
	I0703 23:06:47.544127   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0703 23:06:48.068945   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-856893-m02 minikube.k8s.io/updated_at=2024_07_03T23_06_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e minikube.k8s.io/name=ha-856893 minikube.k8s.io/primary=false
	I0703 23:06:48.232893   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-856893-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0703 23:06:48.350705   27242 start.go:318] duration metric: took 23.141192018s to joinCluster
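After the kubeadm join completes, the log shows minikube shelling out to kubectl to label the new node and remove the control-plane NoSchedule taint. A hedged client-go sketch of applying equivalent labels with a strategic-merge patch is below; it is not minikube's code, and the kubeconfig path and label values are illustrative.

```go
// Minimal sketch: label the freshly joined node via the API instead of `kubectl label`.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Illustrative subset of the labels applied in the log.
	patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/name":"ha-856893","minikube.k8s.io/primary":"false"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(context.Background(), "ha-856893-m02",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
```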
	I0703 23:06:48.350794   27242 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:06:48.351091   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:06:48.352341   27242 out.go:177] * Verifying Kubernetes components...
	I0703 23:06:48.353641   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:06:48.588280   27242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:06:48.608838   27242 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:06:48.609120   27242 kapi.go:59] client config for ha-856893: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.crt", KeyFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key", CAFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfd900), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0703 23:06:48.609198   27242 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.172:8443
	I0703 23:06:48.609481   27242 node_ready.go:35] waiting up to 6m0s for node "ha-856893-m02" to be "Ready" ...
	I0703 23:06:48.609599   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:48.609611   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:48.609620   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:48.609626   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:48.622593   27242 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0703 23:06:49.109815   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:49.109841   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:49.109851   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:49.109860   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:49.119178   27242 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0703 23:06:49.609829   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:49.609864   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:49.609873   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:49.609877   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:49.613800   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:50.110707   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:50.110728   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:50.110736   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:50.110740   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:50.114001   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:50.609830   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:50.609883   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:50.609896   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:50.609903   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:50.613093   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:50.613625   27242 node_ready.go:53] node "ha-856893-m02" has status "Ready":"False"
	I0703 23:06:51.109898   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:51.109927   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:51.109937   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:51.109943   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:51.113216   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:51.609829   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:51.609854   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:51.609862   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:51.609867   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:51.613350   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:52.110567   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:52.110587   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:52.110594   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:52.110598   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:52.114275   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:52.610448   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:52.610473   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:52.610484   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:52.610490   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:52.613455   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:52.614165   27242 node_ready.go:53] node "ha-856893-m02" has status "Ready":"False"
	I0703 23:06:53.110342   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:53.110372   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:53.110384   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:53.110390   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:53.113932   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:53.610596   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:53.610615   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:53.610624   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:53.610628   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:53.613938   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:54.110534   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:54.110616   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:54.110634   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:54.110642   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:54.114018   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:54.610334   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:54.610351   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:54.610358   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:54.610362   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:54.613905   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:54.614483   27242 node_ready.go:53] node "ha-856893-m02" has status "Ready":"False"
	I0703 23:06:55.109792   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:55.109813   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.109821   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.109824   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.113250   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:55.609747   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:55.609767   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.609777   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.609783   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.612716   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:55.613412   27242 node_ready.go:49] node "ha-856893-m02" has status "Ready":"True"
	I0703 23:06:55.613435   27242 node_ready.go:38] duration metric: took 7.003919204s for node "ha-856893-m02" to be "Ready" ...
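The repeated GETs above are minikube polling the node object (roughly every 500ms, via its own round-trippers) until the Ready condition is True, which here took about 7 seconds. A hedged client-go sketch of the same wait loop is below; the kubeconfig path and timeout are assumptions and the code is not minikube's node_ready implementation.

```go
// Minimal sketch: poll a node until its Ready condition is True, as the log above does.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same budget the log uses
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-856893-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // poll interval comparable to the log
	}
	panic("timed out waiting for node to become Ready")
}
```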
	I0703 23:06:55.613447   27242 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 23:06:55.613534   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:06:55.613547   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.613557   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.613562   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.618175   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:06:55.623904   27242 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n5tdf" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.623988   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-n5tdf
	I0703 23:06:55.623996   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.624003   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.624009   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.627442   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:55.628363   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:06:55.628382   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.628394   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.628402   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.631180   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:55.631700   27242 pod_ready.go:92] pod "coredns-7db6d8ff4d-n5tdf" in "kube-system" namespace has status "Ready":"True"
	I0703 23:06:55.631719   27242 pod_ready.go:81] duration metric: took 7.786492ms for pod "coredns-7db6d8ff4d-n5tdf" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.631728   27242 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pwqfl" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.631796   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-pwqfl
	I0703 23:06:55.631806   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.631815   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.631820   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.635897   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:06:55.636658   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:06:55.636678   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.636687   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.636692   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.639691   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:55.640704   27242 pod_ready.go:92] pod "coredns-7db6d8ff4d-pwqfl" in "kube-system" namespace has status "Ready":"True"
	I0703 23:06:55.640723   27242 pod_ready.go:81] duration metric: took 8.987769ms for pod "coredns-7db6d8ff4d-pwqfl" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.640734   27242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.640789   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893
	I0703 23:06:55.640797   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.640803   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.640807   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.643359   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:55.643907   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:06:55.643924   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.643932   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.643936   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.646899   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:55.647968   27242 pod_ready.go:92] pod "etcd-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:06:55.647991   27242 pod_ready.go:81] duration metric: took 7.249953ms for pod "etcd-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.648004   27242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.648071   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:55.648085   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.648095   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.648101   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.650814   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:55.651459   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:55.651474   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.651486   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.651490   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.653793   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:56.148491   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:56.148513   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:56.148521   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:56.148525   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:56.152385   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:56.153042   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:56.153060   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:56.153067   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:56.153071   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:56.157627   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:06:56.649122   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:56.649140   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:56.649146   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:56.649149   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:56.652526   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:56.653306   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:56.653320   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:56.653327   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:56.653331   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:56.655979   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:57.149064   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:57.149092   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:57.149101   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:57.149106   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:57.152417   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:57.153222   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:57.153241   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:57.153249   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:57.153254   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:57.156135   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:57.649140   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:57.649181   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:57.649192   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:57.649198   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:57.652477   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:57.653084   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:57.653100   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:57.653106   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:57.653111   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:57.655555   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:57.656210   27242 pod_ready.go:102] pod "etcd-ha-856893-m02" in "kube-system" namespace has status "Ready":"False"
	I0703 23:06:58.148254   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:58.148274   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:58.148282   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:58.148286   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:58.152590   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:06:58.153465   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:58.153480   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:58.153488   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:58.153495   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:58.156588   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:58.648596   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:58.648622   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:58.648633   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:58.648639   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:58.651552   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:58.652309   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:58.652326   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:58.652333   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:58.652338   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:58.654822   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:59.148789   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:59.148811   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.148820   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.148824   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.152583   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:59.153376   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:59.153394   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.153401   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.153406   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.156325   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:59.648919   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:59.648945   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.648956   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.648963   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.652540   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:59.653454   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:59.653476   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.653487   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.653508   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.658095   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:06:59.658913   27242 pod_ready.go:92] pod "etcd-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:06:59.658934   27242 pod_ready.go:81] duration metric: took 4.010920952s for pod "etcd-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:59.658949   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:59.659006   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893
	I0703 23:06:59.659016   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.659027   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.659036   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.661826   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:59.662571   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:06:59.662588   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.662595   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.662598   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.665446   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:59.665948   27242 pod_ready.go:92] pod "kube-apiserver-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:06:59.665968   27242 pod_ready.go:81] duration metric: took 7.012702ms for pod "kube-apiserver-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:59.665978   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:59.666039   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:06:59.666046   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.666053   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.666056   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.668927   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:59.669628   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:59.669644   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.669651   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.669656   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.672172   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:00.167115   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:07:00.167140   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:00.167150   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:00.167156   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:00.170205   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:00.170996   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:00.171017   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:00.171029   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:00.171039   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:00.173937   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:00.666560   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:07:00.666581   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:00.666591   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:00.666598   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:00.685399   27242 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0703 23:07:00.686013   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:00.686031   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:00.686039   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:00.686044   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:00.694695   27242 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0703 23:07:01.166491   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:07:01.166515   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:01.166524   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:01.166529   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:01.170037   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:01.170694   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:01.170710   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:01.170717   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:01.170722   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:01.173354   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:01.666570   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:07:01.666592   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:01.666600   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:01.666603   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:01.670182   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:01.670960   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:01.670972   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:01.670980   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:01.670984   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:01.673678   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:01.674253   27242 pod_ready.go:102] pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace has status "Ready":"False"
	I0703 23:07:02.166192   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:07:02.166222   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.166234   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.166241   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.169265   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:02.170194   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:02.170209   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.170217   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.170220   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.173318   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:02.173900   27242 pod_ready.go:92] pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:02.173921   27242 pod_ready.go:81] duration metric: took 2.507930848s for pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:02.173934   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:02.173990   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893
	I0703 23:07:02.173999   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.174007   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.174011   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.177819   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:02.178515   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:07:02.178531   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.178539   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.178542   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.181392   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:02.181852   27242 pod_ready.go:92] pod "kube-controller-manager-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:02.181870   27242 pod_ready.go:81] duration metric: took 7.929988ms for pod "kube-controller-manager-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:02.181879   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:02.210176   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:07:02.210204   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.210225   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.210231   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.216238   27242 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0703 23:07:02.410326   27242 request.go:629] Waited for 193.332004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:02.410396   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:02.410402   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.410409   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.410414   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.414343   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:02.682063   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:07:02.682086   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.682094   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.682099   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.685969   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:02.809842   27242 request.go:629] Waited for 123.198326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:02.809919   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:02.809924   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.809931   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.809935   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.813615   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:03.182561   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:07:03.182583   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:03.182591   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:03.182595   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:03.185818   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:03.210189   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:03.210213   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:03.210226   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:03.210231   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:03.212835   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:03.682870   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:07:03.682893   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:03.682904   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:03.682913   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:03.687007   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:07:03.687982   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:03.688000   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:03.688007   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:03.688010   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:03.690789   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:04.182980   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:07:04.183005   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:04.183012   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:04.183015   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:04.187120   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:07:04.187803   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:04.187820   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:04.187827   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:04.187832   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:04.190585   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:04.191265   27242 pod_ready.go:102] pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace has status "Ready":"False"
	I0703 23:07:04.682068   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:07:04.682093   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:04.682101   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:04.682105   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:04.685315   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:04.686021   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:04.686042   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:04.686051   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:04.686060   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:04.689699   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:04.690333   27242 pod_ready.go:92] pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:04.690354   27242 pod_ready.go:81] duration metric: took 2.508468638s for pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:04.690363   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-52zqj" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:04.690415   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52zqj
	I0703 23:07:04.690423   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:04.690429   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:04.690433   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:04.693270   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:04.810198   27242 request.go:629] Waited for 116.3003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:07:04.810277   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:07:04.810287   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:04.810297   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:04.810306   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:04.813548   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:04.814288   27242 pod_ready.go:92] pod "kube-proxy-52zqj" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:04.814310   27242 pod_ready.go:81] duration metric: took 123.940721ms for pod "kube-proxy-52zqj" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:04.814321   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gkwrn" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:05.009731   27242 request.go:629] Waited for 195.334691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkwrn
	I0703 23:07:05.009801   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkwrn
	I0703 23:07:05.009812   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:05.009823   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:05.009831   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:05.013135   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:05.209785   27242 request.go:629] Waited for 196.045433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:05.209863   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:05.209876   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:05.209888   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:05.209896   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:05.213369   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:05.213938   27242 pod_ready.go:92] pod "kube-proxy-gkwrn" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:05.213964   27242 pod_ready.go:81] duration metric: took 399.631019ms for pod "kube-proxy-gkwrn" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:05.213978   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:05.410292   27242 request.go:629] Waited for 196.24208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893
	I0703 23:07:05.410371   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893
	I0703 23:07:05.410382   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:05.410392   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:05.410398   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:05.413436   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:05.610477   27242 request.go:629] Waited for 196.362666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:07:05.610529   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:07:05.610542   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:05.610550   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:05.610554   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:05.613467   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:05.613972   27242 pod_ready.go:92] pod "kube-scheduler-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:05.613988   27242 pod_ready.go:81] duration metric: took 399.999359ms for pod "kube-scheduler-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:05.613996   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:05.810106   27242 request.go:629] Waited for 196.052695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893-m02
	I0703 23:07:05.810176   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893-m02
	I0703 23:07:05.810185   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:05.810209   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:05.810232   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:05.813771   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:06.009910   27242 request.go:629] Waited for 195.274604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:06.009982   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:06.009992   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:06.010002   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:06.010010   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:06.013701   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:06.014446   27242 pod_ready.go:92] pod "kube-scheduler-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:06.014463   27242 pod_ready.go:81] duration metric: took 400.459709ms for pod "kube-scheduler-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:06.014476   27242 pod_ready.go:38] duration metric: took 10.401015204s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
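The block above is minikube's standard readiness poll: for each system pod it alternates a GET on the pod (to read its Ready condition) with a GET on the node it runs on, retrying on a roughly 500ms cadence until the condition turns True or the 6m0s budget runs out. Below is a minimal client-go sketch of that pattern, not minikube's actual implementation; the kubeconfig path is an assumption for illustration only, while the pod name is taken from the log above.

    // podready_sketch.go — illustrative only; assumed kubeconfig path.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; minikube builds its client differently.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-856893-m02", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    // The poll is satisfied once the Ready condition reports True.
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the log
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }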
	I0703 23:07:06.014493   27242 api_server.go:52] waiting for apiserver process to appear ...
	I0703 23:07:06.014549   27242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:07:06.030327   27242 api_server.go:72] duration metric: took 17.679497097s to wait for apiserver process to appear ...
	I0703 23:07:06.030347   27242 api_server.go:88] waiting for apiserver healthz status ...
	I0703 23:07:06.030365   27242 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0703 23:07:06.036783   27242 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I0703 23:07:06.036854   27242 round_trippers.go:463] GET https://192.168.39.172:8443/version
	I0703 23:07:06.036859   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:06.036867   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:06.036872   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:06.037690   27242 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0703 23:07:06.037801   27242 api_server.go:141] control plane version: v1.30.2
	I0703 23:07:06.037818   27242 api_server.go:131] duration metric: took 7.465872ms to wait for apiserver health ...
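The healthz probe logged just above is an HTTP GET against https://192.168.39.172:8443/healthz that is treated as successful once it returns 200 with the literal body "ok". The stripped-down sketch below shows only that polling pattern; it skips certificate handling entirely (the real check presents the cluster's credentials), so it is an illustration under stated assumptions, not the code minikube runs.

    // healthz_sketch.go — simplified illustration of the /healthz poll.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Sketch only: the real client verifies the apiserver certificate.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://192.168.39.172:8443/healthz"
        for i := 0; i < 60; i++ {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(time.Second)
        }
        fmt.Println("apiserver never reported healthy")
    }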
	I0703 23:07:06.037825   27242 system_pods.go:43] waiting for kube-system pods to appear ...
	I0703 23:07:06.209877   27242 request.go:629] Waited for 171.974222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:07:06.210016   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:07:06.210032   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:06.210040   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:06.210046   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:06.214918   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:07:06.219567   27242 system_pods.go:59] 17 kube-system pods found
	I0703 23:07:06.219598   27242 system_pods.go:61] "coredns-7db6d8ff4d-n5tdf" [8efbbc3c-e2d5-4f13-8672-cf7524f72e2d] Running
	I0703 23:07:06.219602   27242 system_pods.go:61] "coredns-7db6d8ff4d-pwqfl" [b4d22edf-e718-4755-b211-c8279481005e] Running
	I0703 23:07:06.219607   27242 system_pods.go:61] "etcd-ha-856893" [6b7ae8d7-b953-4a2a-8745-d672d5ef800d] Running
	I0703 23:07:06.219610   27242 system_pods.go:61] "etcd-ha-856893-m02" [c84132ba-b9c8-491c-a50d-ebe97039cb4a] Running
	I0703 23:07:06.219614   27242 system_pods.go:61] "kindnet-h7ntk" [18e6d992-2713-4399-a160-5f9196981f26] Running
	I0703 23:07:06.219617   27242 system_pods.go:61] "kindnet-rwqsq" [438780e1-32f3-4b9f-a85e-b6a9eeac268f] Running
	I0703 23:07:06.219620   27242 system_pods.go:61] "kube-apiserver-ha-856893" [1818612d-9082-49ba-863e-25d530ad2893] Running
	I0703 23:07:06.219623   27242 system_pods.go:61] "kube-apiserver-ha-856893-m02" [09914e18-0b79-4722-9b14-a4b70d5ad800] Running
	I0703 23:07:06.219628   27242 system_pods.go:61] "kube-controller-manager-ha-856893" [4906346a-6b42-4c32-9ea0-8cd61b06580b] Running
	I0703 23:07:06.219637   27242 system_pods.go:61] "kube-controller-manager-ha-856893-m02" [e90cd411-c185-4652-8e21-6fd48fb2b5d1] Running
	I0703 23:07:06.219643   27242 system_pods.go:61] "kube-proxy-52zqj" [7cbc16d2-e9f6-487f-a974-0fa21e4163b5] Running
	I0703 23:07:06.219648   27242 system_pods.go:61] "kube-proxy-gkwrn" [5fefb775-6224-47be-ad73-443c957c8f69] Running
	I0703 23:07:06.219658   27242 system_pods.go:61] "kube-scheduler-ha-856893" [a3f43eb4-e248-41a0-b86c-0564becadc2b] Running
	I0703 23:07:06.219664   27242 system_pods.go:61] "kube-scheduler-ha-856893-m02" [02f862e9-2b93-4f74-847f-32d753dd9456] Running
	I0703 23:07:06.219669   27242 system_pods.go:61] "kube-vip-ha-856893" [0c4a20fd-99f2-4d6a-a332-2a79e4431b88] Running
	I0703 23:07:06.219676   27242 system_pods.go:61] "kube-vip-ha-856893-m02" [560fb438-21bf-45bb-9cf0-38dd21b61c80] Running
	I0703 23:07:06.219682   27242 system_pods.go:61] "storage-provisioner" [91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8] Running
	I0703 23:07:06.219693   27242 system_pods.go:74] duration metric: took 181.861646ms to wait for pod list to return data ...
	I0703 23:07:06.219700   27242 default_sa.go:34] waiting for default service account to be created ...
	I0703 23:07:06.410182   27242 request.go:629] Waited for 190.397554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/default/serviceaccounts
	I0703 23:07:06.410264   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/default/serviceaccounts
	I0703 23:07:06.410274   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:06.410285   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:06.410289   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:06.413289   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:06.413480   27242 default_sa.go:45] found service account: "default"
	I0703 23:07:06.413495   27242 default_sa.go:55] duration metric: took 193.786983ms for default service account to be created ...
	I0703 23:07:06.413503   27242 system_pods.go:116] waiting for k8s-apps to be running ...
	I0703 23:07:06.609837   27242 request.go:629] Waited for 196.27709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:07:06.609895   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:07:06.609901   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:06.609908   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:06.609912   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:06.614868   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:07:06.619343   27242 system_pods.go:86] 17 kube-system pods found
	I0703 23:07:06.619371   27242 system_pods.go:89] "coredns-7db6d8ff4d-n5tdf" [8efbbc3c-e2d5-4f13-8672-cf7524f72e2d] Running
	I0703 23:07:06.619376   27242 system_pods.go:89] "coredns-7db6d8ff4d-pwqfl" [b4d22edf-e718-4755-b211-c8279481005e] Running
	I0703 23:07:06.619380   27242 system_pods.go:89] "etcd-ha-856893" [6b7ae8d7-b953-4a2a-8745-d672d5ef800d] Running
	I0703 23:07:06.619384   27242 system_pods.go:89] "etcd-ha-856893-m02" [c84132ba-b9c8-491c-a50d-ebe97039cb4a] Running
	I0703 23:07:06.619388   27242 system_pods.go:89] "kindnet-h7ntk" [18e6d992-2713-4399-a160-5f9196981f26] Running
	I0703 23:07:06.619392   27242 system_pods.go:89] "kindnet-rwqsq" [438780e1-32f3-4b9f-a85e-b6a9eeac268f] Running
	I0703 23:07:06.619395   27242 system_pods.go:89] "kube-apiserver-ha-856893" [1818612d-9082-49ba-863e-25d530ad2893] Running
	I0703 23:07:06.619400   27242 system_pods.go:89] "kube-apiserver-ha-856893-m02" [09914e18-0b79-4722-9b14-a4b70d5ad800] Running
	I0703 23:07:06.619404   27242 system_pods.go:89] "kube-controller-manager-ha-856893" [4906346a-6b42-4c32-9ea0-8cd61b06580b] Running
	I0703 23:07:06.619408   27242 system_pods.go:89] "kube-controller-manager-ha-856893-m02" [e90cd411-c185-4652-8e21-6fd48fb2b5d1] Running
	I0703 23:07:06.619412   27242 system_pods.go:89] "kube-proxy-52zqj" [7cbc16d2-e9f6-487f-a974-0fa21e4163b5] Running
	I0703 23:07:06.619416   27242 system_pods.go:89] "kube-proxy-gkwrn" [5fefb775-6224-47be-ad73-443c957c8f69] Running
	I0703 23:07:06.619420   27242 system_pods.go:89] "kube-scheduler-ha-856893" [a3f43eb4-e248-41a0-b86c-0564becadc2b] Running
	I0703 23:07:06.619424   27242 system_pods.go:89] "kube-scheduler-ha-856893-m02" [02f862e9-2b93-4f74-847f-32d753dd9456] Running
	I0703 23:07:06.619428   27242 system_pods.go:89] "kube-vip-ha-856893" [0c4a20fd-99f2-4d6a-a332-2a79e4431b88] Running
	I0703 23:07:06.619433   27242 system_pods.go:89] "kube-vip-ha-856893-m02" [560fb438-21bf-45bb-9cf0-38dd21b61c80] Running
	I0703 23:07:06.619437   27242 system_pods.go:89] "storage-provisioner" [91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8] Running
	I0703 23:07:06.619444   27242 system_pods.go:126] duration metric: took 205.937561ms to wait for k8s-apps to be running ...
	I0703 23:07:06.619453   27242 system_svc.go:44] waiting for kubelet service to be running ....
	I0703 23:07:06.619502   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:07:06.636194   27242 system_svc.go:56] duration metric: took 16.729677ms WaitForService to wait for kubelet
	I0703 23:07:06.636223   27242 kubeadm.go:576] duration metric: took 18.285397296s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 23:07:06.636240   27242 node_conditions.go:102] verifying NodePressure condition ...
	I0703 23:07:06.810678   27242 request.go:629] Waited for 174.367698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes
	I0703 23:07:06.810751   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes
	I0703 23:07:06.810759   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:06.810766   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:06.810773   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:06.814396   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:06.815321   27242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 23:07:06.815347   27242 node_conditions.go:123] node cpu capacity is 2
	I0703 23:07:06.815358   27242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 23:07:06.815361   27242 node_conditions.go:123] node cpu capacity is 2
	I0703 23:07:06.815365   27242 node_conditions.go:105] duration metric: took 179.120869ms to run NodePressure ...
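The NodePressure step simply lists every node and reads back the capacity figures echoed above (ephemeral storage and CPU per node). A small client-go sketch of that read-out, again using an assumed kubeconfig path:

    // nodecapacity_sketch.go — prints per-node cpu and ephemeral-storage capacity.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
    }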
	I0703 23:07:06.815375   27242 start.go:240] waiting for startup goroutines ...
	I0703 23:07:06.815405   27242 start.go:254] writing updated cluster config ...
	I0703 23:07:06.817467   27242 out.go:177] 
	I0703 23:07:06.818836   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:07:06.818926   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:07:06.820500   27242 out.go:177] * Starting "ha-856893-m03" control-plane node in "ha-856893" cluster
	I0703 23:07:06.821716   27242 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:07:06.821732   27242 cache.go:56] Caching tarball of preloaded images
	I0703 23:07:06.821877   27242 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:07:06.821891   27242 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 23:07:06.821981   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:07:06.822155   27242 start.go:360] acquireMachinesLock for ha-856893-m03: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 23:07:06.822195   27242 start.go:364] duration metric: took 22.144µs to acquireMachinesLock for "ha-856893-m03"
	I0703 23:07:06.822209   27242 start.go:93] Provisioning new machine with config: &{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:07:06.822295   27242 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0703 23:07:06.823658   27242 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0703 23:07:06.823727   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:07:06.823756   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:07:06.838452   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44291
	I0703 23:07:06.838936   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:07:06.839363   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:07:06.839383   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:07:06.839736   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:07:06.839918   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetMachineName
	I0703 23:07:06.840069   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:06.840226   27242 start.go:159] libmachine.API.Create for "ha-856893" (driver="kvm2")
	I0703 23:07:06.840254   27242 client.go:168] LocalClient.Create starting
	I0703 23:07:06.840290   27242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem
	I0703 23:07:06.840327   27242 main.go:141] libmachine: Decoding PEM data...
	I0703 23:07:06.840346   27242 main.go:141] libmachine: Parsing certificate...
	I0703 23:07:06.840410   27242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem
	I0703 23:07:06.840432   27242 main.go:141] libmachine: Decoding PEM data...
	I0703 23:07:06.840449   27242 main.go:141] libmachine: Parsing certificate...
	I0703 23:07:06.840474   27242 main.go:141] libmachine: Running pre-create checks...
	I0703 23:07:06.840485   27242 main.go:141] libmachine: (ha-856893-m03) Calling .PreCreateCheck
	I0703 23:07:06.840643   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetConfigRaw
	I0703 23:07:06.841024   27242 main.go:141] libmachine: Creating machine...
	I0703 23:07:06.841038   27242 main.go:141] libmachine: (ha-856893-m03) Calling .Create
	I0703 23:07:06.841188   27242 main.go:141] libmachine: (ha-856893-m03) Creating KVM machine...
	I0703 23:07:06.842688   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found existing default KVM network
	I0703 23:07:06.842868   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found existing private KVM network mk-ha-856893
	I0703 23:07:06.843022   27242 main.go:141] libmachine: (ha-856893-m03) Setting up store path in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03 ...
	I0703 23:07:06.843049   27242 main.go:141] libmachine: (ha-856893-m03) Building disk image from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0703 23:07:06.843102   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:06.842997   28071 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:07:06.843189   27242 main.go:141] libmachine: (ha-856893-m03) Downloading /home/jenkins/minikube-integration/18998-9396/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso...
	I0703 23:07:07.067762   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:07.067633   28071 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa...
	I0703 23:07:07.216110   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:07.215993   28071 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/ha-856893-m03.rawdisk...
	I0703 23:07:07.216138   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Writing magic tar header
	I0703 23:07:07.216158   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Writing SSH key tar header
	I0703 23:07:07.216172   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:07.216113   28071 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03 ...
	I0703 23:07:07.216256   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03
	I0703 23:07:07.216285   27242 main.go:141] libmachine: (ha-856893-m03) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03 (perms=drwx------)
	I0703 23:07:07.216298   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines
	I0703 23:07:07.216313   27242 main.go:141] libmachine: (ha-856893-m03) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines (perms=drwxr-xr-x)
	I0703 23:07:07.216337   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:07:07.216352   27242 main.go:141] libmachine: (ha-856893-m03) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube (perms=drwxr-xr-x)
	I0703 23:07:07.216366   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396
	I0703 23:07:07.216383   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0703 23:07:07.216405   27242 main.go:141] libmachine: (ha-856893-m03) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396 (perms=drwxrwxr-x)
	I0703 23:07:07.216424   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home/jenkins
	I0703 23:07:07.216451   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home
	I0703 23:07:07.216463   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Skipping /home - not owner
	I0703 23:07:07.216477   27242 main.go:141] libmachine: (ha-856893-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0703 23:07:07.216497   27242 main.go:141] libmachine: (ha-856893-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0703 23:07:07.216508   27242 main.go:141] libmachine: (ha-856893-m03) Creating domain...
	I0703 23:07:07.217338   27242 main.go:141] libmachine: (ha-856893-m03) define libvirt domain using xml: 
	I0703 23:07:07.217359   27242 main.go:141] libmachine: (ha-856893-m03) <domain type='kvm'>
	I0703 23:07:07.217366   27242 main.go:141] libmachine: (ha-856893-m03)   <name>ha-856893-m03</name>
	I0703 23:07:07.217375   27242 main.go:141] libmachine: (ha-856893-m03)   <memory unit='MiB'>2200</memory>
	I0703 23:07:07.217404   27242 main.go:141] libmachine: (ha-856893-m03)   <vcpu>2</vcpu>
	I0703 23:07:07.217426   27242 main.go:141] libmachine: (ha-856893-m03)   <features>
	I0703 23:07:07.217439   27242 main.go:141] libmachine: (ha-856893-m03)     <acpi/>
	I0703 23:07:07.217450   27242 main.go:141] libmachine: (ha-856893-m03)     <apic/>
	I0703 23:07:07.217460   27242 main.go:141] libmachine: (ha-856893-m03)     <pae/>
	I0703 23:07:07.217471   27242 main.go:141] libmachine: (ha-856893-m03)     
	I0703 23:07:07.217482   27242 main.go:141] libmachine: (ha-856893-m03)   </features>
	I0703 23:07:07.217492   27242 main.go:141] libmachine: (ha-856893-m03)   <cpu mode='host-passthrough'>
	I0703 23:07:07.217510   27242 main.go:141] libmachine: (ha-856893-m03)   
	I0703 23:07:07.217527   27242 main.go:141] libmachine: (ha-856893-m03)   </cpu>
	I0703 23:07:07.217543   27242 main.go:141] libmachine: (ha-856893-m03)   <os>
	I0703 23:07:07.217559   27242 main.go:141] libmachine: (ha-856893-m03)     <type>hvm</type>
	I0703 23:07:07.217570   27242 main.go:141] libmachine: (ha-856893-m03)     <boot dev='cdrom'/>
	I0703 23:07:07.217575   27242 main.go:141] libmachine: (ha-856893-m03)     <boot dev='hd'/>
	I0703 23:07:07.217583   27242 main.go:141] libmachine: (ha-856893-m03)     <bootmenu enable='no'/>
	I0703 23:07:07.217591   27242 main.go:141] libmachine: (ha-856893-m03)   </os>
	I0703 23:07:07.217599   27242 main.go:141] libmachine: (ha-856893-m03)   <devices>
	I0703 23:07:07.217604   27242 main.go:141] libmachine: (ha-856893-m03)     <disk type='file' device='cdrom'>
	I0703 23:07:07.217614   27242 main.go:141] libmachine: (ha-856893-m03)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/boot2docker.iso'/>
	I0703 23:07:07.217621   27242 main.go:141] libmachine: (ha-856893-m03)       <target dev='hdc' bus='scsi'/>
	I0703 23:07:07.217635   27242 main.go:141] libmachine: (ha-856893-m03)       <readonly/>
	I0703 23:07:07.217651   27242 main.go:141] libmachine: (ha-856893-m03)     </disk>
	I0703 23:07:07.217665   27242 main.go:141] libmachine: (ha-856893-m03)     <disk type='file' device='disk'>
	I0703 23:07:07.217676   27242 main.go:141] libmachine: (ha-856893-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0703 23:07:07.217694   27242 main.go:141] libmachine: (ha-856893-m03)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/ha-856893-m03.rawdisk'/>
	I0703 23:07:07.217706   27242 main.go:141] libmachine: (ha-856893-m03)       <target dev='hda' bus='virtio'/>
	I0703 23:07:07.217718   27242 main.go:141] libmachine: (ha-856893-m03)     </disk>
	I0703 23:07:07.217733   27242 main.go:141] libmachine: (ha-856893-m03)     <interface type='network'>
	I0703 23:07:07.217747   27242 main.go:141] libmachine: (ha-856893-m03)       <source network='mk-ha-856893'/>
	I0703 23:07:07.217757   27242 main.go:141] libmachine: (ha-856893-m03)       <model type='virtio'/>
	I0703 23:07:07.217767   27242 main.go:141] libmachine: (ha-856893-m03)     </interface>
	I0703 23:07:07.217778   27242 main.go:141] libmachine: (ha-856893-m03)     <interface type='network'>
	I0703 23:07:07.217804   27242 main.go:141] libmachine: (ha-856893-m03)       <source network='default'/>
	I0703 23:07:07.217821   27242 main.go:141] libmachine: (ha-856893-m03)       <model type='virtio'/>
	I0703 23:07:07.217830   27242 main.go:141] libmachine: (ha-856893-m03)     </interface>
	I0703 23:07:07.217837   27242 main.go:141] libmachine: (ha-856893-m03)     <serial type='pty'>
	I0703 23:07:07.217844   27242 main.go:141] libmachine: (ha-856893-m03)       <target port='0'/>
	I0703 23:07:07.217853   27242 main.go:141] libmachine: (ha-856893-m03)     </serial>
	I0703 23:07:07.217862   27242 main.go:141] libmachine: (ha-856893-m03)     <console type='pty'>
	I0703 23:07:07.217873   27242 main.go:141] libmachine: (ha-856893-m03)       <target type='serial' port='0'/>
	I0703 23:07:07.217883   27242 main.go:141] libmachine: (ha-856893-m03)     </console>
	I0703 23:07:07.217893   27242 main.go:141] libmachine: (ha-856893-m03)     <rng model='virtio'>
	I0703 23:07:07.217903   27242 main.go:141] libmachine: (ha-856893-m03)       <backend model='random'>/dev/random</backend>
	I0703 23:07:07.217917   27242 main.go:141] libmachine: (ha-856893-m03)     </rng>
	I0703 23:07:07.217941   27242 main.go:141] libmachine: (ha-856893-m03)     
	I0703 23:07:07.217959   27242 main.go:141] libmachine: (ha-856893-m03)     
	I0703 23:07:07.217972   27242 main.go:141] libmachine: (ha-856893-m03)   </devices>
	I0703 23:07:07.217982   27242 main.go:141] libmachine: (ha-856893-m03) </domain>
	I0703 23:07:07.217997   27242 main.go:141] libmachine: (ha-856893-m03) 
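The DBG lines above, with the log prefixes stripped, form the complete libvirt domain XML that libmachine defines for ha-856893-m03. libmachine drives libvirt through its Go bindings, but as a rough manual equivalent the same XML, saved to a file, could be defined and started with virsh; the snippet below is an assumed, illustrative sketch of that flow, not part of minikube.

    // definedomain_sketch.go — manual virsh equivalent of the define/start step (illustrative).
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // domain.xml is assumed to hold the <domain type='kvm'> document logged above.
        if out, err := exec.Command("virsh", "define", "domain.xml").CombinedOutput(); err != nil {
            log.Fatalf("virsh define failed: %v\n%s", err, out)
        }
        if out, err := exec.Command("virsh", "start", "ha-856893-m03").CombinedOutput(); err != nil {
            log.Fatalf("virsh start failed: %v\n%s", err, out)
        }
        log.Println("domain defined and started")
    }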
	I0703 23:07:07.224727   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:c9:f0:2c in network default
	I0703 23:07:07.225301   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:07.225318   27242 main.go:141] libmachine: (ha-856893-m03) Ensuring networks are active...
	I0703 23:07:07.226041   27242 main.go:141] libmachine: (ha-856893-m03) Ensuring network default is active
	I0703 23:07:07.226394   27242 main.go:141] libmachine: (ha-856893-m03) Ensuring network mk-ha-856893 is active
	I0703 23:07:07.226752   27242 main.go:141] libmachine: (ha-856893-m03) Getting domain xml...
	I0703 23:07:07.227531   27242 main.go:141] libmachine: (ha-856893-m03) Creating domain...
	I0703 23:07:08.474940   27242 main.go:141] libmachine: (ha-856893-m03) Waiting to get IP...
	I0703 23:07:08.475929   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:08.476406   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:08.476429   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:08.476388   28071 retry.go:31] will retry after 297.28942ms: waiting for machine to come up
	I0703 23:07:08.775075   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:08.775657   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:08.775687   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:08.775611   28071 retry.go:31] will retry after 260.487003ms: waiting for machine to come up
	I0703 23:07:09.038093   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:09.038543   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:09.038570   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:09.038494   28071 retry.go:31] will retry after 356.550698ms: waiting for machine to come up
	I0703 23:07:09.396841   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:09.397258   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:09.397282   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:09.397203   28071 retry.go:31] will retry after 565.372677ms: waiting for machine to come up
	I0703 23:07:09.963728   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:09.964167   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:09.964188   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:09.964122   28071 retry.go:31] will retry after 573.536697ms: waiting for machine to come up
	I0703 23:07:10.539640   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:10.540032   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:10.540082   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:10.540012   28071 retry.go:31] will retry after 887.46227ms: waiting for machine to come up
	I0703 23:07:11.430282   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:11.430740   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:11.430768   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:11.430695   28071 retry.go:31] will retry after 941.491473ms: waiting for machine to come up
	I0703 23:07:12.373968   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:12.374294   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:12.374322   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:12.374269   28071 retry.go:31] will retry after 1.104133505s: waiting for machine to come up
	I0703 23:07:13.479543   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:13.480022   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:13.480050   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:13.479968   28071 retry.go:31] will retry after 1.21416202s: waiting for machine to come up
	I0703 23:07:14.696397   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:14.696937   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:14.696966   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:14.696888   28071 retry.go:31] will retry after 1.787823566s: waiting for machine to come up
	I0703 23:07:16.486978   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:16.487567   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:16.487594   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:16.487515   28071 retry.go:31] will retry after 2.71693532s: waiting for machine to come up
	I0703 23:07:19.206063   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:19.206532   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:19.206556   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:19.206496   28071 retry.go:31] will retry after 2.779815264s: waiting for machine to come up
	I0703 23:07:21.987373   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:21.987801   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:21.987822   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:21.987757   28071 retry.go:31] will retry after 4.466413602s: waiting for machine to come up
	I0703 23:07:26.457850   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:26.458259   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:26.458289   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:26.458211   28071 retry.go:31] will retry after 4.340225073s: waiting for machine to come up
	I0703 23:07:30.801191   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:30.801617   27242 main.go:141] libmachine: (ha-856893-m03) Found IP for machine: 192.168.39.186
	I0703 23:07:30.801638   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has current primary IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:30.801645   27242 main.go:141] libmachine: (ha-856893-m03) Reserving static IP address...
	I0703 23:07:30.801999   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find host DHCP lease matching {name: "ha-856893-m03", mac: "52:54:00:cb:e8:37", ip: "192.168.39.186"} in network mk-ha-856893
	I0703 23:07:30.882616   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Getting to WaitForSSH function...
	I0703 23:07:30.882638   27242 main.go:141] libmachine: (ha-856893-m03) Reserved static IP address: 192.168.39.186
	I0703 23:07:30.882649   27242 main.go:141] libmachine: (ha-856893-m03) Waiting for SSH to be available...
	I0703 23:07:30.885337   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:30.885691   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893
	I0703 23:07:30.885733   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find defined IP address of network mk-ha-856893 interface with MAC address 52:54:00:cb:e8:37
	I0703 23:07:30.885860   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Using SSH client type: external
	I0703 23:07:30.885892   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa (-rw-------)
	I0703 23:07:30.885924   27242 main.go:141] libmachine: (ha-856893-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 23:07:30.885938   27242 main.go:141] libmachine: (ha-856893-m03) DBG | About to run SSH command:
	I0703 23:07:30.885954   27242 main.go:141] libmachine: (ha-856893-m03) DBG | exit 0
	I0703 23:07:30.889872   27242 main.go:141] libmachine: (ha-856893-m03) DBG | SSH cmd err, output: exit status 255: 
	I0703 23:07:30.889897   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0703 23:07:30.889906   27242 main.go:141] libmachine: (ha-856893-m03) DBG | command : exit 0
	I0703 23:07:30.889912   27242 main.go:141] libmachine: (ha-856893-m03) DBG | err     : exit status 255
	I0703 23:07:30.889924   27242 main.go:141] libmachine: (ha-856893-m03) DBG | output  : 
	I0703 23:07:33.891677   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Getting to WaitForSSH function...
	I0703 23:07:33.894047   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:33.894452   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:33.894489   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:33.894620   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Using SSH client type: external
	I0703 23:07:33.894646   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa (-rw-------)
	I0703 23:07:33.894674   27242 main.go:141] libmachine: (ha-856893-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 23:07:33.894692   27242 main.go:141] libmachine: (ha-856893-m03) DBG | About to run SSH command:
	I0703 23:07:33.894713   27242 main.go:141] libmachine: (ha-856893-m03) DBG | exit 0
	I0703 23:07:34.020118   27242 main.go:141] libmachine: (ha-856893-m03) DBG | SSH cmd err, output: <nil>: 
	I0703 23:07:34.020375   27242 main.go:141] libmachine: (ha-856893-m03) KVM machine creation complete!
	I0703 23:07:34.020757   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetConfigRaw
	I0703 23:07:34.021289   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:34.021526   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:34.021689   27242 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0703 23:07:34.021707   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetState
	I0703 23:07:34.023123   27242 main.go:141] libmachine: Detecting operating system of created instance...
	I0703 23:07:34.023138   27242 main.go:141] libmachine: Waiting for SSH to be available...
	I0703 23:07:34.023143   27242 main.go:141] libmachine: Getting to WaitForSSH function...
	I0703 23:07:34.023149   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.025507   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.025894   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.025914   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.026099   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:34.026281   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.026437   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.026592   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:34.026726   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:07:34.026934   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0703 23:07:34.026944   27242 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0703 23:07:34.135745   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:07:34.135768   27242 main.go:141] libmachine: Detecting the provisioner...
	I0703 23:07:34.135780   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.138736   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.139145   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.139180   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.139394   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:34.139768   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.139989   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.140173   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:34.140391   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:07:34.140627   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0703 23:07:34.140645   27242 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0703 23:07:34.252832   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0703 23:07:34.252930   27242 main.go:141] libmachine: found compatible host: buildroot
	I0703 23:07:34.252950   27242 main.go:141] libmachine: Provisioning with buildroot...
	I0703 23:07:34.252959   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetMachineName
	I0703 23:07:34.253225   27242 buildroot.go:166] provisioning hostname "ha-856893-m03"
	I0703 23:07:34.253251   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetMachineName
	I0703 23:07:34.253430   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.256044   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.256422   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.256449   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.256567   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:34.256736   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.256887   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.257011   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:34.257189   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:07:34.257390   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0703 23:07:34.257403   27242 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-856893-m03 && echo "ha-856893-m03" | sudo tee /etc/hostname
	I0703 23:07:34.378754   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-856893-m03
	
	I0703 23:07:34.378782   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.381654   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.381966   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.382002   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.382235   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:34.382443   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.382616   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.382798   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:34.382982   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:07:34.383164   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0703 23:07:34.383188   27242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-856893-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-856893-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-856893-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 23:07:34.499458   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:07:34.499488   27242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0703 23:07:34.499506   27242 buildroot.go:174] setting up certificates
	I0703 23:07:34.499514   27242 provision.go:84] configureAuth start
	I0703 23:07:34.499522   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetMachineName
	I0703 23:07:34.499784   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetIP
	I0703 23:07:34.503044   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.503446   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.503473   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.503688   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.506053   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.506402   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.506429   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.506591   27242 provision.go:143] copyHostCerts
	I0703 23:07:34.506619   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:07:34.506654   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0703 23:07:34.506666   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:07:34.506747   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0703 23:07:34.506861   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:07:34.506886   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0703 23:07:34.506891   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:07:34.506928   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0703 23:07:34.506984   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:07:34.507007   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0703 23:07:34.507016   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:07:34.507046   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0703 23:07:34.507111   27242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.ha-856893-m03 san=[127.0.0.1 192.168.39.186 ha-856893-m03 localhost minikube]
	I0703 23:07:34.691119   27242 provision.go:177] copyRemoteCerts
	I0703 23:07:34.691175   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:07:34.691195   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.693763   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.694102   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.694129   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.694311   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:34.694502   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.694665   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:34.694864   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa Username:docker}
	I0703 23:07:34.778514   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0703 23:07:34.778586   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0703 23:07:34.805663   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0703 23:07:34.805731   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0703 23:07:34.834448   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0703 23:07:34.834507   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0703 23:07:34.863423   27242 provision.go:87] duration metric: took 363.896644ms to configureAuth
	I0703 23:07:34.863450   27242 buildroot.go:189] setting minikube options for container-runtime
	I0703 23:07:34.863660   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:07:34.863743   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.866154   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.866486   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.866518   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.866663   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:34.866918   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.867093   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.867227   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:34.867371   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:07:34.867582   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0703 23:07:34.867596   27242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0703 23:07:35.163731   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0703 23:07:35.163761   27242 main.go:141] libmachine: Checking connection to Docker...
	I0703 23:07:35.163770   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetURL
	I0703 23:07:35.165134   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Using libvirt version 6000000
	I0703 23:07:35.167475   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.167858   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.167903   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.168131   27242 main.go:141] libmachine: Docker is up and running!
	I0703 23:07:35.168152   27242 main.go:141] libmachine: Reticulating splines...
	I0703 23:07:35.168160   27242 client.go:171] duration metric: took 28.327898073s to LocalClient.Create
	I0703 23:07:35.168185   27242 start.go:167] duration metric: took 28.327960056s to libmachine.API.Create "ha-856893"
	I0703 23:07:35.168196   27242 start.go:293] postStartSetup for "ha-856893-m03" (driver="kvm2")
	I0703 23:07:35.168208   27242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 23:07:35.168229   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:35.168465   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 23:07:35.168488   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:35.170847   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.171220   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.171254   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.171456   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:35.171671   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:35.171851   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:35.172018   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa Username:docker}
	I0703 23:07:35.255274   27242 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 23:07:35.260351   27242 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 23:07:35.260377   27242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0703 23:07:35.260467   27242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0703 23:07:35.260568   27242 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0703 23:07:35.260583   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /etc/ssl/certs/165742.pem
	I0703 23:07:35.260687   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0703 23:07:35.272083   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:07:35.299979   27242 start.go:296] duration metric: took 131.767901ms for postStartSetup
	I0703 23:07:35.300032   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetConfigRaw
	I0703 23:07:35.300664   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetIP
	I0703 23:07:35.303344   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.303779   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.303810   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.304247   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:07:35.304465   27242 start.go:128] duration metric: took 28.482160498s to createHost
	I0703 23:07:35.304487   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:35.307047   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.307392   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.307420   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.307576   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:35.307798   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:35.308015   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:35.308182   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:35.308380   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:07:35.308593   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0703 23:07:35.308607   27242 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0703 23:07:35.420983   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720048055.401183800
	
	I0703 23:07:35.421004   27242 fix.go:216] guest clock: 1720048055.401183800
	I0703 23:07:35.421014   27242 fix.go:229] Guest: 2024-07-03 23:07:35.4011838 +0000 UTC Remote: 2024-07-03 23:07:35.304476938 +0000 UTC m=+166.034732868 (delta=96.706862ms)
	I0703 23:07:35.421033   27242 fix.go:200] guest clock delta is within tolerance: 96.706862ms
	I0703 23:07:35.421039   27242 start.go:83] releasing machines lock for "ha-856893-m03", held for 28.598837371s
	I0703 23:07:35.421065   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:35.421372   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetIP
	I0703 23:07:35.424018   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.424405   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.424434   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.426624   27242 out.go:177] * Found network options:
	I0703 23:07:35.427853   27242 out.go:177]   - NO_PROXY=192.168.39.172,192.168.39.157
	W0703 23:07:35.428985   27242 proxy.go:119] fail to check proxy env: Error ip not in block
	W0703 23:07:35.429002   27242 proxy.go:119] fail to check proxy env: Error ip not in block
	I0703 23:07:35.429017   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:35.429617   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:35.429822   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:35.429928   27242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 23:07:35.429966   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	W0703 23:07:35.429991   27242 proxy.go:119] fail to check proxy env: Error ip not in block
	W0703 23:07:35.430012   27242 proxy.go:119] fail to check proxy env: Error ip not in block
	I0703 23:07:35.430073   27242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0703 23:07:35.430097   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:35.433231   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.433256   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.433599   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.433639   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.433688   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.433738   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.433819   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:35.433836   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:35.434034   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:35.434104   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:35.434184   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:35.434316   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:35.434344   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa Username:docker}
	I0703 23:07:35.434511   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa Username:docker}
	I0703 23:07:35.677657   27242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0703 23:07:35.684280   27242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 23:07:35.684340   27242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 23:07:35.700677   27242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0703 23:07:35.700696   27242 start.go:494] detecting cgroup driver to use...
	I0703 23:07:35.700755   27242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 23:07:35.716908   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 23:07:35.731925   27242 docker.go:217] disabling cri-docker service (if available) ...
	I0703 23:07:35.731993   27242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0703 23:07:35.747595   27242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0703 23:07:35.763296   27242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0703 23:07:35.878408   27242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0703 23:07:36.053007   27242 docker.go:233] disabling docker service ...
	I0703 23:07:36.053096   27242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0703 23:07:36.069537   27242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0703 23:07:36.084154   27242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0703 23:07:36.219803   27242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0703 23:07:36.349909   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0703 23:07:36.365327   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 23:07:36.386397   27242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0703 23:07:36.386449   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.398525   27242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0703 23:07:36.398584   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.410492   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.422111   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.433451   27242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 23:07:36.445276   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.456898   27242 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.477619   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.489825   27242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 23:07:36.501128   27242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0703 23:07:36.501191   27242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0703 23:07:36.516569   27242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0703 23:07:36.527341   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:07:36.659461   27242 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0703 23:07:36.809855   27242 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0703 23:07:36.809927   27242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0703 23:07:36.815110   27242 start.go:562] Will wait 60s for crictl version
	I0703 23:07:36.815186   27242 ssh_runner.go:195] Run: which crictl
	I0703 23:07:36.819348   27242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 23:07:36.866612   27242 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0703 23:07:36.866700   27242 ssh_runner.go:195] Run: crio --version
	I0703 23:07:36.896618   27242 ssh_runner.go:195] Run: crio --version
	I0703 23:07:36.932621   27242 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0703 23:07:36.933935   27242 out.go:177]   - env NO_PROXY=192.168.39.172
	I0703 23:07:36.935273   27242 out.go:177]   - env NO_PROXY=192.168.39.172,192.168.39.157
	I0703 23:07:36.936545   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetIP
	I0703 23:07:36.939214   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:36.939560   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:36.939587   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:36.939811   27242 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0703 23:07:36.944619   27242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:07:36.957968   27242 mustload.go:65] Loading cluster: ha-856893
	I0703 23:07:36.958224   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:07:36.958474   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:07:36.958515   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:07:36.973765   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46031
	I0703 23:07:36.974194   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:07:36.974697   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:07:36.974717   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:07:36.975026   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:07:36.975263   27242 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:07:36.976873   27242 host.go:66] Checking if "ha-856893" exists ...
	I0703 23:07:36.977188   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:07:36.977223   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:07:36.992987   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
	I0703 23:07:36.993384   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:07:36.993860   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:07:36.993887   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:07:36.994194   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:07:36.994378   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:07:36.994557   27242 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893 for IP: 192.168.39.186
	I0703 23:07:36.994567   27242 certs.go:194] generating shared ca certs ...
	I0703 23:07:36.994580   27242 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:07:36.994707   27242 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0703 23:07:36.994743   27242 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0703 23:07:36.994752   27242 certs.go:256] generating profile certs ...
	I0703 23:07:36.994817   27242 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key
	I0703 23:07:36.994840   27242 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.65668228
	I0703 23:07:36.994854   27242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.65668228 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.172 192.168.39.157 192.168.39.186 192.168.39.254]
	I0703 23:07:37.337183   27242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.65668228 ...
	I0703 23:07:37.337219   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.65668228: {Name:mk67b34580ae56e313e039e356b49a596df2616e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:07:37.337409   27242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.65668228 ...
	I0703 23:07:37.337428   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.65668228: {Name:mk926f699ebfb8cd1cc65b70f9375a71b834773b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:07:37.337526   27242 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.65668228 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt
	I0703 23:07:37.337675   27242 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.65668228 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key
	I0703 23:07:37.337825   27242 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key
	I0703 23:07:37.337842   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0703 23:07:37.337858   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0703 23:07:37.337874   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0703 23:07:37.337893   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0703 23:07:37.337911   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0703 23:07:37.337929   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0703 23:07:37.337945   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0703 23:07:37.337962   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0703 23:07:37.338026   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0703 23:07:37.338066   27242 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0703 23:07:37.338079   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0703 23:07:37.338112   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0703 23:07:37.338144   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0703 23:07:37.338183   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0703 23:07:37.338236   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:07:37.338272   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem -> /usr/share/ca-certificates/16574.pem
	I0703 23:07:37.338293   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /usr/share/ca-certificates/165742.pem
	I0703 23:07:37.338311   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:07:37.338353   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:07:37.341309   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:07:37.341713   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:07:37.341753   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:07:37.341942   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:07:37.342152   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:07:37.342311   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:07:37.342478   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:07:37.416222   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0703 23:07:37.421398   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0703 23:07:37.433219   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0703 23:07:37.438229   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0703 23:07:37.450051   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0703 23:07:37.454475   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0703 23:07:37.465922   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0703 23:07:37.470453   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0703 23:07:37.482305   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0703 23:07:37.486680   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0703 23:07:37.498268   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0703 23:07:37.503288   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0703 23:07:37.515695   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 23:07:37.543420   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 23:07:37.571775   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 23:07:37.601487   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0703 23:07:37.630721   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0703 23:07:37.665301   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0703 23:07:37.692166   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 23:07:37.719787   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0703 23:07:37.751460   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0703 23:07:37.778803   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0703 23:07:37.805997   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 23:07:37.832086   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0703 23:07:37.850763   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0703 23:07:37.869670   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0703 23:07:37.888584   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0703 23:07:37.906796   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0703 23:07:37.924790   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0703 23:07:37.943082   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0703 23:07:37.963450   27242 ssh_runner.go:195] Run: openssl version
	I0703 23:07:37.970013   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0703 23:07:37.981740   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0703 23:07:37.986778   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0703 23:07:37.986831   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0703 23:07:37.993242   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0703 23:07:38.004656   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 23:07:38.016695   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:07:38.021674   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:07:38.021728   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:07:38.027634   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0703 23:07:38.039118   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0703 23:07:38.050655   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0703 23:07:38.055464   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0703 23:07:38.055548   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0703 23:07:38.061625   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0703 23:07:38.073265   27242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 23:07:38.078693   27242 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0703 23:07:38.078753   27242 kubeadm.go:928] updating node {m03 192.168.39.186 8443 v1.30.2 crio true true} ...
	I0703 23:07:38.078862   27242 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-856893-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0703 23:07:38.078895   27242 kube-vip.go:115] generating kube-vip config ...
	I0703 23:07:38.078937   27242 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0703 23:07:38.096141   27242 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0703 23:07:38.096245   27242 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0703 23:07:38.096299   27242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0703 23:07:38.107262   27242 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0703 23:07:38.107316   27242 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0703 23:07:38.118852   27242 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0703 23:07:38.118915   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:07:38.118922   27242 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0703 23:07:38.118857   27242 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0703 23:07:38.118960   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0703 23:07:38.119033   27242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0703 23:07:38.118941   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0703 23:07:38.119135   27242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0703 23:07:38.137934   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0703 23:07:38.137967   27242 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0703 23:07:38.137996   27242 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0703 23:07:38.137999   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0703 23:07:38.138014   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0703 23:07:38.138057   27242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0703 23:07:38.149338   27242 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0703 23:07:38.149380   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
	I0703 23:07:39.190629   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0703 23:07:39.200854   27242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0703 23:07:39.219472   27242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 23:07:39.238369   27242 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0703 23:07:39.256931   27242 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0703 23:07:39.261281   27242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:07:39.275182   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:07:39.397746   27242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:07:39.415272   27242 host.go:66] Checking if "ha-856893" exists ...
	I0703 23:07:39.415637   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:07:39.415672   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:07:39.432698   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42803
	I0703 23:07:39.433090   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:07:39.433538   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:07:39.433562   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:07:39.433859   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:07:39.434046   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:07:39.434186   27242 start.go:316] joinCluster: &{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:07:39.434327   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0703 23:07:39.434341   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:07:39.437296   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:07:39.437726   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:07:39.437760   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:07:39.437962   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:07:39.438140   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:07:39.438348   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:07:39.438503   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:07:39.593405   27242 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:07:39.593461   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bwnzkl.tqjqj6bgpj1edijr --discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-856893-m03 --control-plane --apiserver-advertise-address=192.168.39.186 --apiserver-bind-port=8443"
	I0703 23:08:02.813599   27242 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bwnzkl.tqjqj6bgpj1edijr --discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-856893-m03 --control-plane --apiserver-advertise-address=192.168.39.186 --apiserver-bind-port=8443": (23.220101132s)
	I0703 23:08:02.813663   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0703 23:08:03.385422   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-856893-m03 minikube.k8s.io/updated_at=2024_07_03T23_08_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e minikube.k8s.io/name=ha-856893 minikube.k8s.io/primary=false
	I0703 23:08:03.515792   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-856893-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0703 23:08:03.619588   27242 start.go:318] duration metric: took 24.185396632s to joinCluster
	I0703 23:08:03.619710   27242 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:08:03.620031   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:08:03.621348   27242 out.go:177] * Verifying Kubernetes components...
	I0703 23:08:03.622685   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:08:03.881282   27242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:08:03.907961   27242 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:08:03.908243   27242 kapi.go:59] client config for ha-856893: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.crt", KeyFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key", CAFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfd900), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0703 23:08:03.908323   27242 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.172:8443
	I0703 23:08:03.908583   27242 node_ready.go:35] waiting up to 6m0s for node "ha-856893-m03" to be "Ready" ...
	I0703 23:08:03.908688   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:03.908697   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:03.908707   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:03.908713   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:03.912712   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:04.408879   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:04.408907   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:04.408919   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:04.408925   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:04.414154   27242 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0703 23:08:04.909645   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:04.909672   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:04.909683   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:04.909689   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:04.914163   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:05.409099   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:05.409119   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:05.409127   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:05.409131   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:05.413290   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:05.908819   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:05.908842   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:05.908849   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:05.908853   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:05.913655   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:05.914382   27242 node_ready.go:53] node "ha-856893-m03" has status "Ready":"False"
	I0703 23:08:06.409134   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:06.409160   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:06.409170   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:06.409175   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:06.412666   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:06.909606   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:06.909627   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:06.909637   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:06.909645   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:06.913376   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:07.409370   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:07.409394   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:07.409408   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:07.409414   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:07.416499   27242 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0703 23:08:07.909141   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:07.909171   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:07.909181   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:07.909186   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:07.914036   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:07.914974   27242 node_ready.go:53] node "ha-856893-m03" has status "Ready":"False"
	I0703 23:08:08.409386   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:08.409412   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:08.409423   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:08.409441   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:08.413022   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:08.909609   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:08.909634   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:08.909646   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:08.909651   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:08.913449   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:09.409635   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:09.409658   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:09.409669   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:09.409675   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:09.413889   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:09.909448   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:09.909468   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:09.909477   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:09.909482   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:09.913589   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:10.409105   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:10.409125   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.409134   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.409139   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.412940   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:10.413603   27242 node_ready.go:53] node "ha-856893-m03" has status "Ready":"False"
	I0703 23:08:10.909037   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:10.909064   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.909075   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.909081   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.916194   27242 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0703 23:08:10.916783   27242 node_ready.go:49] node "ha-856893-m03" has status "Ready":"True"
	I0703 23:08:10.916802   27242 node_ready.go:38] duration metric: took 7.008205065s for node "ha-856893-m03" to be "Ready" ...
	I0703 23:08:10.916818   27242 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 23:08:10.916888   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:08:10.916897   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.916904   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.916912   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.923686   27242 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0703 23:08:10.929901   27242 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n5tdf" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.930006   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-n5tdf
	I0703 23:08:10.930018   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.930028   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.930034   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.933138   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:10.933987   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:10.934003   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.934020   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.934026   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.937163   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:10.937765   27242 pod_ready.go:92] pod "coredns-7db6d8ff4d-n5tdf" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:10.937784   27242 pod_ready.go:81] duration metric: took 7.857453ms for pod "coredns-7db6d8ff4d-n5tdf" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.937795   27242 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pwqfl" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.937850   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-pwqfl
	I0703 23:08:10.937858   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.937865   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.937872   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.940806   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:10.941415   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:10.941431   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.941441   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.941446   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.944345   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:10.944919   27242 pod_ready.go:92] pod "coredns-7db6d8ff4d-pwqfl" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:10.944938   27242 pod_ready.go:81] duration metric: took 7.136212ms for pod "coredns-7db6d8ff4d-pwqfl" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.944947   27242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.944993   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893
	I0703 23:08:10.945001   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.945008   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.945011   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.947818   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:10.948517   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:10.948534   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.948544   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.948552   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.951211   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:10.951848   27242 pod_ready.go:92] pod "etcd-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:10.951863   27242 pod_ready.go:81] duration metric: took 6.910613ms for pod "etcd-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.951888   27242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.951954   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:08:10.951965   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.951974   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.951980   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.954591   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:10.955176   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:10.955193   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.955202   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.955208   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.957501   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:10.958008   27242 pod_ready.go:92] pod "etcd-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:10.958025   27242 pod_ready.go:81] duration metric: took 6.129203ms for pod "etcd-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.958033   27242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:11.109948   27242 request.go:629] Waited for 151.854764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:11.110037   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:11.110047   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:11.110057   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:11.110067   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:11.115838   27242 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0703 23:08:11.309816   27242 request.go:629] Waited for 193.188796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:11.309873   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:11.309878   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:11.309886   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:11.309892   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:11.313593   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:11.509365   27242 request.go:629] Waited for 50.202967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:11.509465   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:11.509477   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:11.509489   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:11.509500   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:11.514572   27242 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0703 23:08:11.709248   27242 request.go:629] Waited for 193.32848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:11.709299   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:11.709304   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:11.709325   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:11.709333   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:11.713036   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:11.959125   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:11.959147   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:11.959155   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:11.959160   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:11.963102   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:12.109001   27242 request.go:629] Waited for 144.798659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:12.109057   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:12.109062   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:12.109071   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:12.109077   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:12.112847   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:12.458780   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:12.458804   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:12.458816   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:12.458822   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:12.462522   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:12.509515   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:12.509539   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:12.509550   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:12.509556   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:12.513776   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:12.958862   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:12.958884   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:12.958892   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:12.958896   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:12.963076   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:12.964032   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:12.964055   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:12.964066   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:12.964072   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:12.967555   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:12.968207   27242 pod_ready.go:102] pod "etcd-ha-856893-m03" in "kube-system" namespace has status "Ready":"False"
	I0703 23:08:13.458279   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:13.458306   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:13.458322   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:13.458327   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:13.461824   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:13.462472   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:13.462489   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:13.462497   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:13.462506   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:13.465331   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:13.958289   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:13.958310   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:13.958318   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:13.958324   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:13.962681   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:13.963320   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:13.963333   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:13.963340   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:13.963344   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:13.966600   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:14.458259   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:14.458282   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:14.458290   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:14.458293   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:14.462012   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:14.462555   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:14.462570   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:14.462577   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:14.462581   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:14.465499   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:14.959177   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:14.959199   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:14.959207   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:14.959212   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:14.962396   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:14.963280   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:14.963296   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:14.963304   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:14.963309   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:14.966765   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:15.459098   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:15.459127   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:15.459137   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:15.459142   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:15.462880   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:15.463536   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:15.463554   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:15.463565   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:15.463573   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:15.466897   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:15.467438   27242 pod_ready.go:102] pod "etcd-ha-856893-m03" in "kube-system" namespace has status "Ready":"False"
	I0703 23:08:15.958824   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:15.958850   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:15.958862   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:15.958870   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:15.964122   27242 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0703 23:08:15.964870   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:15.964888   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:15.964896   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:15.964900   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:15.967828   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:16.459240   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:16.459265   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.459275   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.459283   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.462430   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:16.463285   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:16.463301   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.463308   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.463312   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.466431   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:16.467055   27242 pod_ready.go:92] pod "etcd-ha-856893-m03" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:16.467074   27242 pod_ready.go:81] duration metric: took 5.509032519s for pod "etcd-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.467090   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.467139   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893
	I0703 23:08:16.467147   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.467154   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.467159   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.470113   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:16.470753   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:16.470768   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.470775   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.470781   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.479436   27242 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0703 23:08:16.479957   27242 pod_ready.go:92] pod "kube-apiserver-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:16.479976   27242 pod_ready.go:81] duration metric: took 12.880584ms for pod "kube-apiserver-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.479986   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.480043   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:08:16.480051   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.480058   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.480068   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.483359   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:16.509453   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:16.509489   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.509499   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.509506   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.514051   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:16.514499   27242 pod_ready.go:92] pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:16.514518   27242 pod_ready.go:81] duration metric: took 34.526271ms for pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.514527   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.709759   27242 request.go:629] Waited for 195.170406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m03
	I0703 23:08:16.709834   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m03
	I0703 23:08:16.709841   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.709851   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.709858   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.714113   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:16.909343   27242 request.go:629] Waited for 194.383103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:16.909408   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:16.909416   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.909426   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.909432   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.912650   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:16.913346   27242 pod_ready.go:92] pod "kube-apiserver-ha-856893-m03" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:16.913369   27242 pod_ready.go:81] duration metric: took 398.834831ms for pod "kube-apiserver-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.913384   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:17.109258   27242 request.go:629] Waited for 195.812463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893
	I0703 23:08:17.109335   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893
	I0703 23:08:17.109344   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:17.109351   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:17.109360   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:17.113410   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:17.309479   27242 request.go:629] Waited for 195.262429ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:17.309542   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:17.309551   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:17.309559   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:17.309563   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:17.313791   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:17.314385   27242 pod_ready.go:92] pod "kube-controller-manager-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:17.314404   27242 pod_ready.go:81] duration metric: took 401.012331ms for pod "kube-controller-manager-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:17.314414   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:17.509531   27242 request.go:629] Waited for 195.056137ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:08:17.509605   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:08:17.509611   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:17.509620   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:17.509625   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:17.513357   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:17.709477   27242 request.go:629] Waited for 195.370636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:17.709535   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:17.709542   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:17.709553   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:17.709564   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:17.713345   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:17.713850   27242 pod_ready.go:92] pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:17.713874   27242 pod_ready.go:81] duration metric: took 399.45315ms for pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:17.713889   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:17.909947   27242 request.go:629] Waited for 195.968544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:17.910018   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:17.910023   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:17.910030   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:17.910037   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:17.913897   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:18.109846   27242 request.go:629] Waited for 195.376393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:18.109896   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:18.109901   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:18.109910   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:18.109916   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:18.113762   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:18.309532   27242 request.go:629] Waited for 95.294007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:18.309604   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:18.309616   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:18.309631   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:18.309641   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:18.313751   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:18.509885   27242 request.go:629] Waited for 195.399896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:18.509978   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:18.509991   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:18.510000   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:18.510009   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:18.514418   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:18.714234   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:18.714255   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:18.714263   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:18.714266   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:18.717923   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:18.909739   27242 request.go:629] Waited for 191.248143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:18.909790   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:18.909795   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:18.909801   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:18.909804   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:18.916518   27242 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0703 23:08:19.214106   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:19.214126   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:19.214134   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:19.214139   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:19.217700   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:19.309750   27242 request.go:629] Waited for 91.33378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:19.309811   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:19.309818   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:19.309827   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:19.309832   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:19.314568   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:19.714371   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:19.714395   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:19.714403   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:19.714407   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:19.717735   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:19.718452   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:19.718468   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:19.718475   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:19.718480   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:19.722349   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:19.722906   27242 pod_ready.go:92] pod "kube-controller-manager-ha-856893-m03" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:19.722923   27242 pod_ready.go:81] duration metric: took 2.009027669s for pod "kube-controller-manager-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:19.722933   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-52zqj" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:19.909367   27242 request.go:629] Waited for 186.370383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52zqj
	I0703 23:08:19.909459   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52zqj
	I0703 23:08:19.909471   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:19.909482   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:19.909487   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:19.913236   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:20.109762   27242 request.go:629] Waited for 195.344765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:20.109853   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:20.109861   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:20.109872   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:20.109883   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:20.114021   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:20.114608   27242 pod_ready.go:92] pod "kube-proxy-52zqj" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:20.114627   27242 pod_ready.go:81] duration metric: took 391.688117ms for pod "kube-proxy-52zqj" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:20.114636   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gkwrn" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:20.309372   27242 request.go:629] Waited for 194.665348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkwrn
	I0703 23:08:20.309436   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkwrn
	I0703 23:08:20.309446   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:20.309454   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:20.309462   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:20.313429   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:20.509612   27242 request.go:629] Waited for 195.389962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:20.509670   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:20.509676   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:20.509683   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:20.509687   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:20.513278   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:20.513970   27242 pod_ready.go:92] pod "kube-proxy-gkwrn" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:20.513988   27242 pod_ready.go:81] duration metric: took 399.344201ms for pod "kube-proxy-gkwrn" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:20.514002   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-stq26" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:20.710051   27242 request.go:629] Waited for 195.979482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-stq26
	I0703 23:08:20.710148   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-stq26
	I0703 23:08:20.710158   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:20.710166   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:20.710170   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:20.714583   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:20.909948   27242 request.go:629] Waited for 194.287257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:20.910006   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:20.910011   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:20.910018   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:20.910023   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:20.913833   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:20.914294   27242 pod_ready.go:92] pod "kube-proxy-stq26" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:20.914312   27242 pod_ready.go:81] duration metric: took 400.304119ms for pod "kube-proxy-stq26" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:20.914322   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:21.109389   27242 request.go:629] Waited for 194.990561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893
	I0703 23:08:21.109459   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893
	I0703 23:08:21.109469   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:21.109482   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:21.109488   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:21.114937   27242 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0703 23:08:21.309870   27242 request.go:629] Waited for 194.409083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:21.309938   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:21.309944   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:21.309951   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:21.309956   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:21.314789   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:21.315856   27242 pod_ready.go:92] pod "kube-scheduler-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:21.315905   27242 pod_ready.go:81] duration metric: took 401.575237ms for pod "kube-scheduler-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:21.315918   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:21.509959   27242 request.go:629] Waited for 193.98282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893-m02
	I0703 23:08:21.510017   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893-m02
	I0703 23:08:21.510023   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:21.510033   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:21.510039   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:21.513857   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:21.709794   27242 request.go:629] Waited for 195.374395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:21.709856   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:21.709863   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:21.709888   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:21.709893   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:21.713692   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:21.714469   27242 pod_ready.go:92] pod "kube-scheduler-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:21.714501   27242 pod_ready.go:81] duration metric: took 398.575885ms for pod "kube-scheduler-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:21.714514   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:21.909971   27242 request.go:629] Waited for 195.381878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893-m03
	I0703 23:08:21.910060   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893-m03
	I0703 23:08:21.910068   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:21.910078   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:21.910085   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:21.914034   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:22.109540   27242 request.go:629] Waited for 194.902506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:22.109621   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:22.109629   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:22.109638   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:22.109644   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:22.113703   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:22.114348   27242 pod_ready.go:92] pod "kube-scheduler-ha-856893-m03" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:22.114368   27242 pod_ready.go:81] duration metric: took 399.84796ms for pod "kube-scheduler-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:22.114380   27242 pod_ready.go:38] duration metric: took 11.197545891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 23:08:22.114405   27242 api_server.go:52] waiting for apiserver process to appear ...
	I0703 23:08:22.114465   27242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:08:22.132505   27242 api_server.go:72] duration metric: took 18.512751964s to wait for apiserver process to appear ...
	I0703 23:08:22.132533   27242 api_server.go:88] waiting for apiserver healthz status ...
	I0703 23:08:22.132561   27242 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0703 23:08:22.137340   27242 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I0703 23:08:22.137434   27242 round_trippers.go:463] GET https://192.168.39.172:8443/version
	I0703 23:08:22.137445   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:22.137453   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:22.137457   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:22.138593   27242 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0703 23:08:22.138733   27242 api_server.go:141] control plane version: v1.30.2
	I0703 23:08:22.138758   27242 api_server.go:131] duration metric: took 6.217378ms to wait for apiserver health ...
	I0703 23:08:22.138774   27242 system_pods.go:43] waiting for kube-system pods to appear ...
	I0703 23:08:22.309132   27242 request.go:629] Waited for 170.284558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:08:22.309188   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:08:22.309193   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:22.309200   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:22.309204   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:22.317229   27242 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0703 23:08:22.325849   27242 system_pods.go:59] 24 kube-system pods found
	I0703 23:08:22.325890   27242 system_pods.go:61] "coredns-7db6d8ff4d-n5tdf" [8efbbc3c-e2d5-4f13-8672-cf7524f72e2d] Running
	I0703 23:08:22.325895   27242 system_pods.go:61] "coredns-7db6d8ff4d-pwqfl" [b4d22edf-e718-4755-b211-c8279481005e] Running
	I0703 23:08:22.325899   27242 system_pods.go:61] "etcd-ha-856893" [6b7ae8d7-b953-4a2a-8745-d672d5ef800d] Running
	I0703 23:08:22.325902   27242 system_pods.go:61] "etcd-ha-856893-m02" [c84132ba-b9c8-491c-a50d-ebe97039cb4a] Running
	I0703 23:08:22.325906   27242 system_pods.go:61] "etcd-ha-856893-m03" [5fb85989-093c-4239-a17e-761ac8c2f88c] Running
	I0703 23:08:22.325909   27242 system_pods.go:61] "kindnet-h7ntk" [18e6d992-2713-4399-a160-5f9196981f26] Running
	I0703 23:08:22.325912   27242 system_pods.go:61] "kindnet-rwqsq" [438780e1-32f3-4b9f-a85e-b6a9eeac268f] Running
	I0703 23:08:22.325914   27242 system_pods.go:61] "kindnet-vtd2b" [08f88183-a2c6-48b4-a14e-1c70ed08407a] Running
	I0703 23:08:22.325917   27242 system_pods.go:61] "kube-apiserver-ha-856893" [1818612d-9082-49ba-863e-25d530ad2893] Running
	I0703 23:08:22.325920   27242 system_pods.go:61] "kube-apiserver-ha-856893-m02" [09914e18-0b79-4722-9b14-a4b70d5ad800] Running
	I0703 23:08:22.325924   27242 system_pods.go:61] "kube-apiserver-ha-856893-m03" [d5ffdc07-8246-4c1b-848b-d103b69c96af] Running
	I0703 23:08:22.325927   27242 system_pods.go:61] "kube-controller-manager-ha-856893" [4906346a-6b42-4c32-9ea0-8cd61b06580b] Running
	I0703 23:08:22.325930   27242 system_pods.go:61] "kube-controller-manager-ha-856893-m02" [e90cd411-c185-4652-8e21-6fd48fb2b5d1] Running
	I0703 23:08:22.325933   27242 system_pods.go:61] "kube-controller-manager-ha-856893-m03" [71730b30-6db1-4376-931d-adb83ec87278] Running
	I0703 23:08:22.325936   27242 system_pods.go:61] "kube-proxy-52zqj" [7cbc16d2-e9f6-487f-a974-0fa21e4163b5] Running
	I0703 23:08:22.325940   27242 system_pods.go:61] "kube-proxy-gkwrn" [5fefb775-6224-47be-ad73-443c957c8f69] Running
	I0703 23:08:22.325943   27242 system_pods.go:61] "kube-proxy-stq26" [55db1583-2020-4a52-ab80-2f92ab63463b] Running
	I0703 23:08:22.325946   27242 system_pods.go:61] "kube-scheduler-ha-856893" [a3f43eb4-e248-41a0-b86c-0564becadc2b] Running
	I0703 23:08:22.325949   27242 system_pods.go:61] "kube-scheduler-ha-856893-m02" [02f862e9-2b93-4f74-847f-32d753dd9456] Running
	I0703 23:08:22.325952   27242 system_pods.go:61] "kube-scheduler-ha-856893-m03" [5ebea99b-ad4c-414f-a5a2-6501823bfc22] Running
	I0703 23:08:22.325954   27242 system_pods.go:61] "kube-vip-ha-856893" [0c4a20fd-99f2-4d6a-a332-2a79e4431b88] Running
	I0703 23:08:22.325958   27242 system_pods.go:61] "kube-vip-ha-856893-m02" [560fb438-21bf-45bb-9cf0-38dd21b61c80] Running
	I0703 23:08:22.325960   27242 system_pods.go:61] "kube-vip-ha-856893-m03" [a4a2c5c7-c2c9-4910-8716-9f22a9a50611] Running
	I0703 23:08:22.325963   27242 system_pods.go:61] "storage-provisioner" [91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8] Running
	I0703 23:08:22.325970   27242 system_pods.go:74] duration metric: took 187.186303ms to wait for pod list to return data ...
	I0703 23:08:22.325985   27242 default_sa.go:34] waiting for default service account to be created ...
	I0703 23:08:22.509121   27242 request.go:629] Waited for 183.060695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/default/serviceaccounts
	I0703 23:08:22.509193   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/default/serviceaccounts
	I0703 23:08:22.509200   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:22.509210   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:22.509218   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:22.512726   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:22.512854   27242 default_sa.go:45] found service account: "default"
	I0703 23:08:22.512879   27242 default_sa.go:55] duration metric: took 186.885116ms for default service account to be created ...
	I0703 23:08:22.512891   27242 system_pods.go:116] waiting for k8s-apps to be running ...
	I0703 23:08:22.709312   27242 request.go:629] Waited for 196.355099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:08:22.709392   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:08:22.709401   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:22.709415   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:22.709425   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:22.717218   27242 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0703 23:08:22.725427   27242 system_pods.go:86] 24 kube-system pods found
	I0703 23:08:22.725459   27242 system_pods.go:89] "coredns-7db6d8ff4d-n5tdf" [8efbbc3c-e2d5-4f13-8672-cf7524f72e2d] Running
	I0703 23:08:22.725465   27242 system_pods.go:89] "coredns-7db6d8ff4d-pwqfl" [b4d22edf-e718-4755-b211-c8279481005e] Running
	I0703 23:08:22.725470   27242 system_pods.go:89] "etcd-ha-856893" [6b7ae8d7-b953-4a2a-8745-d672d5ef800d] Running
	I0703 23:08:22.725474   27242 system_pods.go:89] "etcd-ha-856893-m02" [c84132ba-b9c8-491c-a50d-ebe97039cb4a] Running
	I0703 23:08:22.725478   27242 system_pods.go:89] "etcd-ha-856893-m03" [5fb85989-093c-4239-a17e-761ac8c2f88c] Running
	I0703 23:08:22.725481   27242 system_pods.go:89] "kindnet-h7ntk" [18e6d992-2713-4399-a160-5f9196981f26] Running
	I0703 23:08:22.725485   27242 system_pods.go:89] "kindnet-rwqsq" [438780e1-32f3-4b9f-a85e-b6a9eeac268f] Running
	I0703 23:08:22.725489   27242 system_pods.go:89] "kindnet-vtd2b" [08f88183-a2c6-48b4-a14e-1c70ed08407a] Running
	I0703 23:08:22.725494   27242 system_pods.go:89] "kube-apiserver-ha-856893" [1818612d-9082-49ba-863e-25d530ad2893] Running
	I0703 23:08:22.725498   27242 system_pods.go:89] "kube-apiserver-ha-856893-m02" [09914e18-0b79-4722-9b14-a4b70d5ad800] Running
	I0703 23:08:22.725502   27242 system_pods.go:89] "kube-apiserver-ha-856893-m03" [d5ffdc07-8246-4c1b-848b-d103b69c96af] Running
	I0703 23:08:22.725506   27242 system_pods.go:89] "kube-controller-manager-ha-856893" [4906346a-6b42-4c32-9ea0-8cd61b06580b] Running
	I0703 23:08:22.725510   27242 system_pods.go:89] "kube-controller-manager-ha-856893-m02" [e90cd411-c185-4652-8e21-6fd48fb2b5d1] Running
	I0703 23:08:22.725515   27242 system_pods.go:89] "kube-controller-manager-ha-856893-m03" [71730b30-6db1-4376-931d-adb83ec87278] Running
	I0703 23:08:22.725519   27242 system_pods.go:89] "kube-proxy-52zqj" [7cbc16d2-e9f6-487f-a974-0fa21e4163b5] Running
	I0703 23:08:22.725523   27242 system_pods.go:89] "kube-proxy-gkwrn" [5fefb775-6224-47be-ad73-443c957c8f69] Running
	I0703 23:08:22.725526   27242 system_pods.go:89] "kube-proxy-stq26" [55db1583-2020-4a52-ab80-2f92ab63463b] Running
	I0703 23:08:22.725530   27242 system_pods.go:89] "kube-scheduler-ha-856893" [a3f43eb4-e248-41a0-b86c-0564becadc2b] Running
	I0703 23:08:22.725535   27242 system_pods.go:89] "kube-scheduler-ha-856893-m02" [02f862e9-2b93-4f74-847f-32d753dd9456] Running
	I0703 23:08:22.725539   27242 system_pods.go:89] "kube-scheduler-ha-856893-m03" [5ebea99b-ad4c-414f-a5a2-6501823bfc22] Running
	I0703 23:08:22.725546   27242 system_pods.go:89] "kube-vip-ha-856893" [0c4a20fd-99f2-4d6a-a332-2a79e4431b88] Running
	I0703 23:08:22.725549   27242 system_pods.go:89] "kube-vip-ha-856893-m02" [560fb438-21bf-45bb-9cf0-38dd21b61c80] Running
	I0703 23:08:22.725552   27242 system_pods.go:89] "kube-vip-ha-856893-m03" [a4a2c5c7-c2c9-4910-8716-9f22a9a50611] Running
	I0703 23:08:22.725556   27242 system_pods.go:89] "storage-provisioner" [91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8] Running
	I0703 23:08:22.725561   27242 system_pods.go:126] duration metric: took 212.662262ms to wait for k8s-apps to be running ...
	I0703 23:08:22.725571   27242 system_svc.go:44] waiting for kubelet service to be running ....
	I0703 23:08:22.725617   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:08:22.742416   27242 system_svc.go:56] duration metric: took 16.833939ms WaitForService to wait for kubelet
	I0703 23:08:22.742456   27242 kubeadm.go:576] duration metric: took 19.122705878s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 23:08:22.742497   27242 node_conditions.go:102] verifying NodePressure condition ...
	I0703 23:08:22.909819   27242 request.go:629] Waited for 167.220159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes
	I0703 23:08:22.909873   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes
	I0703 23:08:22.909878   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:22.909886   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:22.909890   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:22.914023   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:22.915479   27242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 23:08:22.915513   27242 node_conditions.go:123] node cpu capacity is 2
	I0703 23:08:22.915537   27242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 23:08:22.915544   27242 node_conditions.go:123] node cpu capacity is 2
	I0703 23:08:22.915548   27242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 23:08:22.915554   27242 node_conditions.go:123] node cpu capacity is 2
	I0703 23:08:22.915559   27242 node_conditions.go:105] duration metric: took 173.056283ms to run NodePressure ...
	I0703 23:08:22.915576   27242 start.go:240] waiting for startup goroutines ...
	I0703 23:08:22.915610   27242 start.go:254] writing updated cluster config ...
	I0703 23:08:22.916020   27242 ssh_runner.go:195] Run: rm -f paused
	I0703 23:08:22.974944   27242 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0703 23:08:22.976700   27242 out.go:177] * Done! kubectl is now configured to use "ha-856893" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.489324905Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720048321489302324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e511ca02-208b-4a81-9f44-3de4c62c7e9c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.489983390Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e8dccca2-e56e-4e94-a2ee-b5a483e9a525 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.490037477Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e8dccca2-e56e-4e94-a2ee-b5a483e9a525 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.490304356Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5f2f09a864e4e5d46640be63d6a9d8f2d281cd3cce2631529f1fa6c5c19ead,PodSandboxId:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720048107151194001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernetes.container.hash: c94081f5,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54,PodSandboxId:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973272089478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691,PodSandboxId:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973246300187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e953066d642d95ea246b7978e66ebdb8247c588a239295bc0583d9be214d29,PodSandboxId:b1df838b768efce71daa9de41505a294ee70a6093019557a59aaa55a14c3fc0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1720047973201617835,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599,PodSandboxId:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720047942
480279746,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5bd1ae2892ab764deb7e34a3a3c3674097e40cba7bf1814bcd3086fc221f71,PodSandboxId:fcb5b2ab8ad58b672e638c6b9da63d751cbdfe3fd0e6bb37e3a23aea1d820f5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720047941392249439,Labels:map[string]stri
ng{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c81f0becbc3b6b33b1077d73eb4737ab22fbf870d53cf57b8cbfb88c2b7389e,PodSandboxId:ade6e7c92cc8277b503dd87c43275d73529d0ee66941939117baf2e511d016ec,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720047925155312936,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb5af725f761355c024282f684e2eaaf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:227a9a4176778afb3428cff8333cb0265f741d600f6fab7c86b069a27619893e,PodSandboxId:78f6147e8fcf30507346cd87408d724ee33acc5f36d9058c3c2df378780de214,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720047921657803736,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0,PodSandboxId:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720047921587666055,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,PodSandboxId:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720047921542484183,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c379ddaf9a499d07429805170dc12cb5a1dc67dcb956fe90f1fa68d12530112,PodSandboxId:3f446507b3eb8a27dcf6453ca5dc495e1676669ca6fc52a5fec1d424bab1cb68,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720047921541248993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e8dccca2-e56e-4e94-a2ee-b5a483e9a525 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.531291767Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=001213ee-8630-4f6d-bce1-d42d86ad7494 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.531372556Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=001213ee-8630-4f6d-bce1-d42d86ad7494 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.532374004Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f8920619-b24a-483c-a3d5-635331ce374e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.532977050Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720048321532951084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8920619-b24a-483c-a3d5-635331ce374e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.533485071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7fb7f1e-716a-47b1-ad6d-30ecb1bfc772 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.533546167Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7fb7f1e-716a-47b1-ad6d-30ecb1bfc772 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.533854463Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5f2f09a864e4e5d46640be63d6a9d8f2d281cd3cce2631529f1fa6c5c19ead,PodSandboxId:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720048107151194001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernetes.container.hash: c94081f5,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54,PodSandboxId:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973272089478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691,PodSandboxId:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973246300187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e953066d642d95ea246b7978e66ebdb8247c588a239295bc0583d9be214d29,PodSandboxId:b1df838b768efce71daa9de41505a294ee70a6093019557a59aaa55a14c3fc0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1720047973201617835,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599,PodSandboxId:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720047942
480279746,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5bd1ae2892ab764deb7e34a3a3c3674097e40cba7bf1814bcd3086fc221f71,PodSandboxId:fcb5b2ab8ad58b672e638c6b9da63d751cbdfe3fd0e6bb37e3a23aea1d820f5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720047941392249439,Labels:map[string]stri
ng{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c81f0becbc3b6b33b1077d73eb4737ab22fbf870d53cf57b8cbfb88c2b7389e,PodSandboxId:ade6e7c92cc8277b503dd87c43275d73529d0ee66941939117baf2e511d016ec,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720047925155312936,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb5af725f761355c024282f684e2eaaf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:227a9a4176778afb3428cff8333cb0265f741d600f6fab7c86b069a27619893e,PodSandboxId:78f6147e8fcf30507346cd87408d724ee33acc5f36d9058c3c2df378780de214,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720047921657803736,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0,PodSandboxId:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720047921587666055,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,PodSandboxId:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720047921542484183,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c379ddaf9a499d07429805170dc12cb5a1dc67dcb956fe90f1fa68d12530112,PodSandboxId:3f446507b3eb8a27dcf6453ca5dc495e1676669ca6fc52a5fec1d424bab1cb68,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720047921541248993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7fb7f1e-716a-47b1-ad6d-30ecb1bfc772 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.572952031Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81c4fb24-8d3c-4026-a349-b8a65ed3db92 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.573026851Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81c4fb24-8d3c-4026-a349-b8a65ed3db92 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.574207270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9906a3e-0dbc-4af6-9bf8-789b2eaf16fc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.574674826Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720048321574650484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9906a3e-0dbc-4af6-9bf8-789b2eaf16fc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.575167952Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=210b44ca-cf74-4730-9804-f187ba7e7c3a name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.575220747Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=210b44ca-cf74-4730-9804-f187ba7e7c3a name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.575448600Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5f2f09a864e4e5d46640be63d6a9d8f2d281cd3cce2631529f1fa6c5c19ead,PodSandboxId:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720048107151194001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernetes.container.hash: c94081f5,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54,PodSandboxId:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973272089478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691,PodSandboxId:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973246300187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e953066d642d95ea246b7978e66ebdb8247c588a239295bc0583d9be214d29,PodSandboxId:b1df838b768efce71daa9de41505a294ee70a6093019557a59aaa55a14c3fc0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1720047973201617835,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599,PodSandboxId:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720047942
480279746,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5bd1ae2892ab764deb7e34a3a3c3674097e40cba7bf1814bcd3086fc221f71,PodSandboxId:fcb5b2ab8ad58b672e638c6b9da63d751cbdfe3fd0e6bb37e3a23aea1d820f5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720047941392249439,Labels:map[string]stri
ng{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c81f0becbc3b6b33b1077d73eb4737ab22fbf870d53cf57b8cbfb88c2b7389e,PodSandboxId:ade6e7c92cc8277b503dd87c43275d73529d0ee66941939117baf2e511d016ec,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720047925155312936,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb5af725f761355c024282f684e2eaaf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:227a9a4176778afb3428cff8333cb0265f741d600f6fab7c86b069a27619893e,PodSandboxId:78f6147e8fcf30507346cd87408d724ee33acc5f36d9058c3c2df378780de214,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720047921657803736,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0,PodSandboxId:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720047921587666055,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,PodSandboxId:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720047921542484183,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c379ddaf9a499d07429805170dc12cb5a1dc67dcb956fe90f1fa68d12530112,PodSandboxId:3f446507b3eb8a27dcf6453ca5dc495e1676669ca6fc52a5fec1d424bab1cb68,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720047921541248993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=210b44ca-cf74-4730-9804-f187ba7e7c3a name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.616207206Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c8097e2-e8f0-499d-8e6b-0c7891799dbd name=/runtime.v1.RuntimeService/Version
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.616296469Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c8097e2-e8f0-499d-8e6b-0c7891799dbd name=/runtime.v1.RuntimeService/Version
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.617613425Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bf3d9e77-6958-4df3-a4ad-c054f202dfa6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.618197489Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720048321618174534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf3d9e77-6958-4df3-a4ad-c054f202dfa6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.619012798Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7dc41dc-8eaf-496d-911a-96486cf958bc name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.619083671Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7dc41dc-8eaf-496d-911a-96486cf958bc name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:12:01 ha-856893 crio[680]: time="2024-07-03 23:12:01.619360826Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5f2f09a864e4e5d46640be63d6a9d8f2d281cd3cce2631529f1fa6c5c19ead,PodSandboxId:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720048107151194001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernetes.container.hash: c94081f5,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54,PodSandboxId:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973272089478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691,PodSandboxId:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973246300187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e953066d642d95ea246b7978e66ebdb8247c588a239295bc0583d9be214d29,PodSandboxId:b1df838b768efce71daa9de41505a294ee70a6093019557a59aaa55a14c3fc0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1720047973201617835,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599,PodSandboxId:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720047942
480279746,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5bd1ae2892ab764deb7e34a3a3c3674097e40cba7bf1814bcd3086fc221f71,PodSandboxId:fcb5b2ab8ad58b672e638c6b9da63d751cbdfe3fd0e6bb37e3a23aea1d820f5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720047941392249439,Labels:map[string]stri
ng{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c81f0becbc3b6b33b1077d73eb4737ab22fbf870d53cf57b8cbfb88c2b7389e,PodSandboxId:ade6e7c92cc8277b503dd87c43275d73529d0ee66941939117baf2e511d016ec,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720047925155312936,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb5af725f761355c024282f684e2eaaf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:227a9a4176778afb3428cff8333cb0265f741d600f6fab7c86b069a27619893e,PodSandboxId:78f6147e8fcf30507346cd87408d724ee33acc5f36d9058c3c2df378780de214,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720047921657803736,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0,PodSandboxId:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720047921587666055,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,PodSandboxId:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720047921542484183,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c379ddaf9a499d07429805170dc12cb5a1dc67dcb956fe90f1fa68d12530112,PodSandboxId:3f446507b3eb8a27dcf6453ca5dc495e1676669ca6fc52a5fec1d424bab1cb68,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720047921541248993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7dc41dc-8eaf-496d-911a-96486cf958bc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2d5f2f09a864e       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2add57c6feb6d       busybox-fc5497c4f-hh5rx
	4b327b3ea68a5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   52adb03e9908b       coredns-7db6d8ff4d-n5tdf
	ebac8426f222e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   75824b8079291       coredns-7db6d8ff4d-pwqfl
	e5e953066d642       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   b1df838b768ef       storage-provisioner
	aea86e5699e84       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      6 minutes ago       Running             kube-proxy                0                   17315e93de095       kube-proxy-52zqj
	7a5bd1ae2892a       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      6 minutes ago       Running             kindnet-cni               0                   fcb5b2ab8ad58       kindnet-h7ntk
	4c81f0becbc3b       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   ade6e7c92cc82       kube-vip-ha-856893
	227a9a4176778       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      6 minutes ago       Running             kube-controller-manager   0                   78f6147e8fcf3       kube-controller-manager-ha-856893
	8ed8443e8784d       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      6 minutes ago       Running             kube-scheduler            0                   a50d015125505       kube-scheduler-ha-856893
	194253df10dfc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   bbcc0c1ac6390       etcd-ha-856893
	4c379ddaf9a49       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      6 minutes ago       Running             kube-apiserver            0                   3f446507b3eb8       kube-apiserver-ha-856893
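For reference, the ListContainers entries in the CRI-O debug log above are ordinary CRI polling, and the container status table is the same data rendered per container. The following is a minimal sketch, not part of the minikube test harness, of issuing the same RuntimeService.ListContainers RPC in Go. It assumes the k8s.io/cri-api and google.golang.org/grpc modules, a grpc-go version that accepts a unix:// dial target, and direct access to /var/run/crio/crio.sock (for example from inside the node via minikube ssh).

// listcontainers.go - illustrative sketch only; mirrors the
// RuntimeService.ListContainers calls seen in the CRI-O debug log above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O listens on a local unix socket; no TLS is involved.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	client := runtimev1.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// An empty request (no filter) returns the full container list, which
	// matches the "No filters were applied" debug message logged by CRI-O.
	resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		// Print the truncated ID, name, and state, similar to the table above.
		fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}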
	
	
	==> coredns [4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54] <==
	[INFO] 10.244.0.4:50532 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072272s
	[INFO] 10.244.0.4:38183 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100508s
	[INFO] 10.244.0.4:40014 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049781s
	[INFO] 10.244.1.2:43357 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134408s
	[INFO] 10.244.1.2:33336 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002000185s
	[INFO] 10.244.1.2:43589 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174137s
	[INFO] 10.244.1.2:49376 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106729s
	[INFO] 10.244.1.2:51691 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000271033s
	[INFO] 10.244.2.2:40310 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117383s
	[INFO] 10.244.2.2:38408 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011442s
	[INFO] 10.244.2.2:53461 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080741s
	[INFO] 10.244.0.4:60751 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020875s
	[INFO] 10.244.0.4:42746 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083559s
	[INFO] 10.244.1.2:46618 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00026488s
	[INFO] 10.244.1.2:46816 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095128s
	[INFO] 10.244.2.2:35755 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141347s
	[INFO] 10.244.2.2:37226 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000441904s
	[INFO] 10.244.2.2:56990 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000123934s
	[INFO] 10.244.0.4:33260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228783s
	[INFO] 10.244.0.4:40825 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089557s
	[INFO] 10.244.0.4:36029 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000284159s
	[INFO] 10.244.0.4:38025 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069908s
	[INFO] 10.244.1.2:33505 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000516657s
	[INFO] 10.244.1.2:51760 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106766s
	[INFO] 10.244.1.2:48924 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111713s
	
	
	==> coredns [ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41451 - 39576 "HINFO IN 3941637866052819197.8807026029404487185. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013851694s
	[INFO] 10.244.2.2:52714 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.014862182s
	[INFO] 10.244.0.4:48924 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001898144s
	[INFO] 10.244.1.2:38357 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235864s
	[INFO] 10.244.1.2:52654 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000207162s
	[INFO] 10.244.2.2:38149 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003994489s
	[INFO] 10.244.2.2:37323 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162805s
	[INFO] 10.244.2.2:37370 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000170597s
	[INFO] 10.244.0.4:39154 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140397s
	[INFO] 10.244.0.4:39807 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002148429s
	[INFO] 10.244.0.4:52421 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189952s
	[INFO] 10.244.0.4:32927 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001716905s
	[INFO] 10.244.0.4:37077 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064503s
	[INFO] 10.244.1.2:53622 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138056s
	[INFO] 10.244.1.2:56863 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001413025s
	[INFO] 10.244.1.2:33669 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000289179s
	[INFO] 10.244.2.2:46390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141967s
	[INFO] 10.244.0.4:47937 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126136s
	[INFO] 10.244.0.4:40258 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058689s
	[INFO] 10.244.1.2:34579 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112137s
	[INFO] 10.244.1.2:43318 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087441s
	[INFO] 10.244.2.2:44839 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000154015s
	[INFO] 10.244.1.2:49628 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158345s
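The CoreDNS query logs above follow a fixed layout: client address, query ID, query type and name, protocol and size flags, response code, response flags, and duration. Below is a small sketch that tallies response codes per query type from lines in this format read on stdin; the regular expression and field handling are illustrative assumptions, not part of CoreDNS or the test suite.

// corednslog.go - illustrative sketch; summarizes query-log lines like the
// ones shown above (e.g. how many AAAA lookups ended in NXDOMAIN).
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches lines such as:
// [INFO] 10.244.0.4:50532 - 4 "AAAA IN kubernetes.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072272s
var queryLine = regexp.MustCompile(`^\s*\[INFO\] (\S+) - \d+ "(\S+) IN (\S+) .*" (\S+) `)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		m := queryLine.FindStringSubmatch(sc.Text())
		if m == nil {
			continue // skip version banners and non-query lines
		}
		qtype, rcode := m[2], m[4]
		counts[qtype+"/"+rcode]++
	}
	for k, v := range counts {
		fmt.Printf("%-20s %d\n", k, v)
	}
}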
	
	
	==> describe nodes <==
	Name:               ha-856893
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-856893
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=ha-856893
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_03T23_05_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:05:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-856893
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:11:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:08:31 +0000   Wed, 03 Jul 2024 23:05:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:08:31 +0000   Wed, 03 Jul 2024 23:05:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:08:31 +0000   Wed, 03 Jul 2024 23:05:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:08:31 +0000   Wed, 03 Jul 2024 23:06:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    ha-856893
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a26831b612bd459ca285f71afd0636da
	  System UUID:                a26831b6-12bd-459c-a285-f71afd0636da
	  Boot ID:                    60d1e076-9358-4d45-bf73-662df78ab1a7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace    Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                               ------------  ----------  ---------------  -------------  ---
	  default      busybox-fc5497c4f-hh5rx            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system  coredns-7db6d8ff4d-n5tdf           100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m21s
	  kube-system  coredns-7db6d8ff4d-pwqfl           100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m20s
	  kube-system  etcd-ha-856893                     100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m34s
	  kube-system  kindnet-h7ntk                      100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m21s
	  kube-system  kube-apiserver-ha-856893           250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system  kube-controller-manager-ha-856893  200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system  kube-proxy-52zqj                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system  kube-scheduler-ha-856893           100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system  kube-vip-ha-856893                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system  storage-provisioner                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m19s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m41s (x7 over 6m41s)  kubelet          Node ha-856893 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m41s (x8 over 6m41s)  kubelet          Node ha-856893 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m41s (x8 over 6m41s)  kubelet          Node ha-856893 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m34s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m34s                  kubelet          Node ha-856893 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s                  kubelet          Node ha-856893 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s                  kubelet          Node ha-856893 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m21s                  node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	  Normal  NodeReady                5m49s                  kubelet          Node ha-856893 status is now: NodeReady
	  Normal  RegisteredNode           4m59s                  node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	  Normal  RegisteredNode           3m43s                  node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	
	
	Name:               ha-856893-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-856893-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=ha-856893
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_03T23_06_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:06:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-856893-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:09:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 03 Jul 2024 23:08:46 +0000   Wed, 03 Jul 2024 23:10:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 03 Jul 2024 23:08:46 +0000   Wed, 03 Jul 2024 23:10:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 03 Jul 2024 23:08:46 +0000   Wed, 03 Jul 2024 23:10:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 03 Jul 2024 23:08:46 +0000   Wed, 03 Jul 2024 23:10:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.157
	  Hostname:    ha-856893-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 109978f2ea4c4f42a5d187826750c850
	  System UUID:                109978f2-ea4c-4f42-a5d1-87826750c850
	  Boot ID:                    994539c8-7107-4cbf-a682-2c196e1b4de5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-n7rvj                  0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m38s
	  kube-system                 etcd-ha-856893-m02                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         5m15s
	  kube-system                 kindnet-rwqsq                            100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (2%!)(MISSING)        50Mi (2%!)(MISSING)      5m17s
	  kube-system                 kube-apiserver-ha-856893-m02             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         5m16s
	  kube-system                 kube-controller-manager-ha-856893-m02    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         5m16s
	  kube-system                 kube-proxy-gkwrn                         0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         5m17s
	  kube-system                 kube-scheduler-ha-856893-m02             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         5m15s
	  kube-system                 kube-vip-ha-856893-m02                   0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         5m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m17s (x8 over 5m17s)  kubelet          Node ha-856893-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m17s (x8 over 5m17s)  kubelet          Node ha-856893-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m17s (x7 over 5m17s)  kubelet          Node ha-856893-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  RegisteredNode           4m59s                  node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  RegisteredNode           3m43s                  node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  NodeNotReady             113s                   node-controller  Node ha-856893-m02 status is now: NodeNotReady
	
	
	Name:               ha-856893-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-856893-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=ha-856893
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_03T23_08_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:07:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-856893-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:11:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:08:30 +0000   Wed, 03 Jul 2024 23:07:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:08:30 +0000   Wed, 03 Jul 2024 23:07:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:08:30 +0000   Wed, 03 Jul 2024 23:07:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:08:30 +0000   Wed, 03 Jul 2024 23:08:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    ha-856893-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1e4eaaaf3da41a390e7e93c4c9b6dd0
	  System UUID:                a1e4eaaa-f3da-41a3-90e7-e93c4c9b6dd0
	  Boot ID:                    714f8b3c-0219-40be-b96e-5e103d064c96
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bt646                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-856893-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m1s
	  kube-system                 kindnet-vtd2b                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m3s
	  kube-system                 kube-apiserver-ha-856893-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-controller-manager-ha-856893-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-proxy-stq26                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-ha-856893-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-vip-ha-856893-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m57s                kube-proxy       
	  Normal  NodeAllocatableEnforced  4m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-856893-m03 event: Registered Node ha-856893-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m2s (x8 over 4m3s)  kubelet          Node ha-856893-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x8 over 4m3s)  kubelet          Node ha-856893-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x7 over 4m3s)  kubelet          Node ha-856893-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m                   node-controller  Node ha-856893-m03 event: Registered Node ha-856893-m03 in Controller
	  Normal  RegisteredNode           3m44s                node-controller  Node ha-856893-m03 event: Registered Node ha-856893-m03 in Controller
	
	
	Name:               ha-856893-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-856893-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=ha-856893
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_03T23_09_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:09:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-856893-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:11:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:09:35 +0000   Wed, 03 Jul 2024 23:09:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:09:35 +0000   Wed, 03 Jul 2024 23:09:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:09:35 +0000   Wed, 03 Jul 2024 23:09:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:09:35 +0000   Wed, 03 Jul 2024 23:09:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    ha-856893-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3705f72ac66415f90e310971654b6b5
	  System UUID:                f3705f72-ac66-415f-90e3-10971654b6b5
	  Boot ID:                    b99153db-d083-4d53-8f7d-792d32c1186e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5kksq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m58s
	  kube-system                 kube-proxy-brfsv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m58s (x2 over 2m58s)  kubelet          Node ha-856893-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m58s (x2 over 2m58s)  kubelet          Node ha-856893-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m58s (x2 over 2m58s)  kubelet          Node ha-856893-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m57s                  node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal  RegisteredNode           2m55s                  node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal  RegisteredNode           2m54s                  node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal  NodeReady                2m48s                  kubelet          Node ha-856893-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul 3 23:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050985] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040387] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.593398] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.343269] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Jul 3 23:05] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.908066] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.058276] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065122] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.220079] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.126395] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.300940] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.506884] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.061467] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.368826] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +4.919640] kauditd_printk_skb: 102 callbacks suppressed
	[  +2.254448] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +6.249182] kauditd_printk_skb: 23 callbacks suppressed
	[Jul 3 23:06] kauditd_printk_skb: 38 callbacks suppressed
	[  +9.915119] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb] <==
	{"level":"warn","ts":"2024-07-03T23:12:01.895601Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:01.899449Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:01.90312Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:01.910599Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:01.916034Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:01.923719Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:01.934844Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:01.940386Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:01.943838Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:01.95253Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:01.960374Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:01.967928Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:01.973272Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:01.978616Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:01.999032Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:02.004849Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:02.013033Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:02.014493Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:02.03763Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:02.045307Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:02.058043Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:02.077463Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:02.087595Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:02.095384Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:02.103872Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:12:02 up 7 min,  0 users,  load average: 0.05, 0.17, 0.09
	Linux ha-856893 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7a5bd1ae2892ab764deb7e34a3a3c3674097e40cba7bf1814bcd3086fc221f71] <==
	I0703 23:11:22.544858       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	I0703 23:11:32.556309       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0703 23:11:32.556392       1 main.go:227] handling current node
	I0703 23:11:32.556416       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0703 23:11:32.556433       1 main.go:250] Node ha-856893-m02 has CIDR [10.244.1.0/24] 
	I0703 23:11:32.556566       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0703 23:11:32.556590       1 main.go:250] Node ha-856893-m03 has CIDR [10.244.2.0/24] 
	I0703 23:11:32.556656       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I0703 23:11:32.556674       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	I0703 23:11:42.565479       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0703 23:11:42.565528       1 main.go:227] handling current node
	I0703 23:11:42.565539       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0703 23:11:42.565544       1 main.go:250] Node ha-856893-m02 has CIDR [10.244.1.0/24] 
	I0703 23:11:42.565649       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0703 23:11:42.565675       1 main.go:250] Node ha-856893-m03 has CIDR [10.244.2.0/24] 
	I0703 23:11:42.565718       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I0703 23:11:42.565783       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	I0703 23:11:52.579240       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0703 23:11:52.579293       1 main.go:227] handling current node
	I0703 23:11:52.579307       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0703 23:11:52.579313       1 main.go:250] Node ha-856893-m02 has CIDR [10.244.1.0/24] 
	I0703 23:11:52.579448       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0703 23:11:52.579453       1 main.go:250] Node ha-856893-m03 has CIDR [10.244.2.0/24] 
	I0703 23:11:52.579500       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I0703 23:11:52.579559       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4c379ddaf9a499d07429805170dc12cb5a1dc67dcb956fe90f1fa68d12530112] <==
	I0703 23:05:27.803513       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0703 23:05:27.827801       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0703 23:05:27.842963       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0703 23:05:40.487913       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0703 23:05:40.891672       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0703 23:06:47.177553       1 trace.go:236] Trace[1646404756]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d9eabe84-be40-4221-b01e-53771880f05a,client:192.168.39.157,api-group:,api-version:v1,name:kube-apiserver-ha-856893-m02,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02/status,user-agent:kubelet/v1.30.2 (linux/amd64) kubernetes/3968350,verb:PATCH (03-Jul-2024 23:06:46.675) (total time: 501ms):
	Trace[1646404756]: ["GuaranteedUpdate etcd3" audit-id:d9eabe84-be40-4221-b01e-53771880f05a,key:/pods/kube-system/kube-apiserver-ha-856893-m02,type:*core.Pod,resource:pods 501ms (23:06:46.675)
	Trace[1646404756]:  ---"Txn call completed" 498ms (23:06:47.176)]
	Trace[1646404756]: ---"Object stored in database" 499ms (23:06:47.176)
	Trace[1646404756]: [501.97274ms] [501.97274ms] END
	E0703 23:08:29.714542       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55146: use of closed network connection
	E0703 23:08:29.907245       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55158: use of closed network connection
	E0703 23:08:30.109154       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55168: use of closed network connection
	E0703 23:08:30.308595       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55182: use of closed network connection
	E0703 23:08:30.506637       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55202: use of closed network connection
	E0703 23:08:30.710449       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55214: use of closed network connection
	E0703 23:08:30.897088       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55238: use of closed network connection
	E0703 23:08:31.115623       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55252: use of closed network connection
	E0703 23:08:31.340432       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55272: use of closed network connection
	E0703 23:08:31.646395       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55278: use of closed network connection
	E0703 23:08:31.818268       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43524: use of closed network connection
	E0703 23:08:32.008938       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43550: use of closed network connection
	E0703 23:08:32.189914       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43564: use of closed network connection
	E0703 23:08:32.384321       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43578: use of closed network connection
	E0703 23:08:32.569307       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43590: use of closed network connection
	
	
	==> kube-controller-manager [227a9a4176778afb3428cff8333cb0265f741d600f6fab7c86b069a27619893e] <==
	I0703 23:07:59.982610       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-856893-m03" podCIDRs=["10.244.2.0/24"]
	I0703 23:08:00.071837       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-856893-m03"
	I0703 23:08:23.943216       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.815373ms"
	I0703 23:08:23.984267       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.217424ms"
	I0703 23:08:24.186908       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="202.578607ms"
	I0703 23:08:24.331407       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="144.441291ms"
	I0703 23:08:24.387233       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.761398ms"
	I0703 23:08:24.387349       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.305µs"
	I0703 23:08:24.611572       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.619µs"
	I0703 23:08:27.553339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.523436ms"
	I0703 23:08:27.553458       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.361µs"
	I0703 23:08:28.204821       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.616391ms"
	I0703 23:08:28.204953       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.102µs"
	I0703 23:08:28.262137       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.537027ms"
	I0703 23:08:28.262534       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="188.007µs"
	I0703 23:08:29.243362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.267489ms"
	I0703 23:08:29.245302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.378µs"
	E0703 23:09:04.446073       1 certificate_controller.go:146] Sync csr-nzk25 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-nzk25": the object has been modified; please apply your changes to the latest version and try again
	I0703 23:09:04.725986       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-856893-m04\" does not exist"
	I0703 23:09:04.781392       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-856893-m04" podCIDRs=["10.244.3.0/24"]
	I0703 23:09:05.083864       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-856893-m04"
	I0703 23:09:14.677798       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-856893-m04"
	I0703 23:10:08.604690       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-856893-m04"
	I0703 23:10:08.783473       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.605177ms"
	I0703 23:10:08.783650       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.542µs"
	
	
	==> kube-proxy [aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599] <==
	I0703 23:05:42.648241       1 server_linux.go:69] "Using iptables proxy"
	I0703 23:05:42.660274       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.172"]
	I0703 23:05:42.701292       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0703 23:05:42.701358       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0703 23:05:42.701376       1 server_linux.go:165] "Using iptables Proxier"
	I0703 23:05:42.704275       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0703 23:05:42.704524       1 server.go:872] "Version info" version="v1.30.2"
	I0703 23:05:42.704553       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:05:42.708143       1 config.go:192] "Starting service config controller"
	I0703 23:05:42.708177       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0703 23:05:42.708224       1 config.go:101] "Starting endpoint slice config controller"
	I0703 23:05:42.708246       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0703 23:05:42.708724       1 config.go:319] "Starting node config controller"
	I0703 23:05:42.708810       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0703 23:05:42.808474       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0703 23:05:42.809810       1 shared_informer.go:320] Caches are synced for node config
	I0703 23:05:42.809889       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0] <==
	W0703 23:05:24.434535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0703 23:05:24.434550       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0703 23:05:25.261863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0703 23:05:25.261999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0703 23:05:25.269112       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0703 23:05:25.269265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0703 23:05:25.278628       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0703 23:05:25.279108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0703 23:05:25.396201       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0703 23:05:25.396448       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0703 23:05:25.396683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0703 23:05:25.396721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0703 23:05:25.414377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0703 23:05:25.414670       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0703 23:05:25.429406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0703 23:05:25.429583       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0703 23:05:25.523495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0703 23:05:25.523643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0703 23:05:25.721665       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0703 23:05:25.721726       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0703 23:05:27.616231       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0703 23:08:23.941598       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-bt646\": pod busybox-fc5497c4f-bt646 is already assigned to node \"ha-856893-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-bt646" node="ha-856893-m03"
	E0703 23:08:23.941843       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4ffbc91d-86d2-4096-8592-d570ee95c514(default/busybox-fc5497c4f-bt646) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-bt646"
	E0703 23:08:23.941901       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-bt646\": pod busybox-fc5497c4f-bt646 is already assigned to node \"ha-856893-m03\"" pod="default/busybox-fc5497c4f-bt646"
	I0703 23:08:23.941955       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-bt646" node="ha-856893-m03"
	
	
	==> kubelet <==
	Jul 03 23:07:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:07:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:07:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 23:08:23 ha-856893 kubelet[1363]: I0703 23:08:23.915134    1363 topology_manager.go:215] "Topology Admit Handler" podUID="1e907d89-dcf0-4e2d-bf2d-812d38932e86" podNamespace="default" podName="busybox-fc5497c4f-hh5rx"
	Jul 03 23:08:23 ha-856893 kubelet[1363]: I0703 23:08:23.944135    1363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b7w4\" (UniqueName: \"kubernetes.io/projected/1e907d89-dcf0-4e2d-bf2d-812d38932e86-kube-api-access-5b7w4\") pod \"busybox-fc5497c4f-hh5rx\" (UID: \"1e907d89-dcf0-4e2d-bf2d-812d38932e86\") " pod="default/busybox-fc5497c4f-hh5rx"
	Jul 03 23:08:27 ha-856893 kubelet[1363]: E0703 23:08:27.752219    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:08:27 ha-856893 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:08:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:08:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:08:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 23:09:27 ha-856893 kubelet[1363]: E0703 23:09:27.751305    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:09:27 ha-856893 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:09:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:09:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:09:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 23:10:27 ha-856893 kubelet[1363]: E0703 23:10:27.755235    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:10:27 ha-856893 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:10:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:10:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:10:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 23:11:27 ha-856893 kubelet[1363]: E0703 23:11:27.756589    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:11:27 ha-856893 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:11:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:11:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:11:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-856893 -n ha-856893
helpers_test.go:261: (dbg) Run:  kubectl --context ha-856893 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr: (3.809251992s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-856893 -n ha-856893
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-856893 logs -n 25: (1.497004441s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m03:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893:/home/docker/cp-test_ha-856893-m03_ha-856893.txt                      |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893 sudo cat                                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m03_ha-856893.txt                                |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m03:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m02:/home/docker/cp-test_ha-856893-m03_ha-856893-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893-m02 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m03_ha-856893-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m03:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04:/home/docker/cp-test_ha-856893-m03_ha-856893-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893-m04 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m03_ha-856893-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-856893 cp testdata/cp-test.txt                                               | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile600531838/001/cp-test_ha-856893-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893:/home/docker/cp-test_ha-856893-m04_ha-856893.txt                      |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893 sudo cat                                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m04_ha-856893.txt                                |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m02:/home/docker/cp-test_ha-856893-m04_ha-856893-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893-m02 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m04_ha-856893-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03:/home/docker/cp-test_ha-856893-m04_ha-856893-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893-m03 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m04_ha-856893-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-856893 node stop m02 -v=7                                                    | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-856893 node start m02 -v=7                                                   | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:12 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 23:04:49
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 23:04:49.303938   27242 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:04:49.304205   27242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:04:49.304217   27242 out.go:304] Setting ErrFile to fd 2...
	I0703 23:04:49.304221   27242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:04:49.304418   27242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 23:04:49.304993   27242 out.go:298] Setting JSON to false
	I0703 23:04:49.305930   27242 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2829,"bootTime":1720045060,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:04:49.305987   27242 start.go:139] virtualization: kvm guest
	I0703 23:04:49.308231   27242 out.go:177] * [ha-856893] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 23:04:49.309607   27242 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 23:04:49.309635   27242 notify.go:220] Checking for updates...
	I0703 23:04:49.312119   27242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:04:49.313313   27242 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:04:49.314518   27242 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:04:49.315705   27242 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 23:04:49.316858   27242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 23:04:49.318260   27242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:04:49.353555   27242 out.go:177] * Using the kvm2 driver based on user configuration
	I0703 23:04:49.354873   27242 start.go:297] selected driver: kvm2
	I0703 23:04:49.354888   27242 start.go:901] validating driver "kvm2" against <nil>
	I0703 23:04:49.354902   27242 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 23:04:49.355866   27242 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:04:49.355965   27242 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 23:04:49.371321   27242 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 23:04:49.371369   27242 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0703 23:04:49.371558   27242 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 23:04:49.371586   27242 cni.go:84] Creating CNI manager for ""
	I0703 23:04:49.371590   27242 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0703 23:04:49.371596   27242 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0703 23:04:49.371647   27242 start.go:340] cluster config:
	{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:04:49.371752   27242 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:04:49.373469   27242 out.go:177] * Starting "ha-856893" primary control-plane node in "ha-856893" cluster
	I0703 23:04:49.374783   27242 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:04:49.374822   27242 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0703 23:04:49.374831   27242 cache.go:56] Caching tarball of preloaded images
	I0703 23:04:49.374914   27242 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:04:49.374925   27242 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 23:04:49.375209   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:04:49.375227   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json: {Name:mkf45f45e81b9e1937bda66f4e2b577ad75b58d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:04:49.375355   27242 start.go:360] acquireMachinesLock for ha-856893: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 23:04:49.375381   27242 start.go:364] duration metric: took 13.613µs to acquireMachinesLock for "ha-856893"
	I0703 23:04:49.375397   27242 start.go:93] Provisioning new machine with config: &{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:04:49.375447   27242 start.go:125] createHost starting for "" (driver="kvm2")
	I0703 23:04:49.377146   27242 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0703 23:04:49.377284   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:04:49.377347   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:04:49.391658   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44801
	I0703 23:04:49.392204   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:04:49.392806   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:04:49.392829   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:04:49.393132   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:04:49.393315   27242 main.go:141] libmachine: (ha-856893) Calling .GetMachineName
	I0703 23:04:49.393456   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:04:49.393665   27242 start.go:159] libmachine.API.Create for "ha-856893" (driver="kvm2")
	I0703 23:04:49.393703   27242 client.go:168] LocalClient.Create starting
	I0703 23:04:49.393738   27242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem
	I0703 23:04:49.393776   27242 main.go:141] libmachine: Decoding PEM data...
	I0703 23:04:49.393790   27242 main.go:141] libmachine: Parsing certificate...
	I0703 23:04:49.393832   27242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem
	I0703 23:04:49.393849   27242 main.go:141] libmachine: Decoding PEM data...
	I0703 23:04:49.393861   27242 main.go:141] libmachine: Parsing certificate...
	I0703 23:04:49.393879   27242 main.go:141] libmachine: Running pre-create checks...
	I0703 23:04:49.393887   27242 main.go:141] libmachine: (ha-856893) Calling .PreCreateCheck
	I0703 23:04:49.394261   27242 main.go:141] libmachine: (ha-856893) Calling .GetConfigRaw
	I0703 23:04:49.394643   27242 main.go:141] libmachine: Creating machine...
	I0703 23:04:49.394655   27242 main.go:141] libmachine: (ha-856893) Calling .Create
	I0703 23:04:49.394757   27242 main.go:141] libmachine: (ha-856893) Creating KVM machine...
	I0703 23:04:49.395897   27242 main.go:141] libmachine: (ha-856893) DBG | found existing default KVM network
	I0703 23:04:49.396588   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:49.396439   27265 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0703 23:04:49.396611   27242 main.go:141] libmachine: (ha-856893) DBG | created network xml: 
	I0703 23:04:49.396624   27242 main.go:141] libmachine: (ha-856893) DBG | <network>
	I0703 23:04:49.396638   27242 main.go:141] libmachine: (ha-856893) DBG |   <name>mk-ha-856893</name>
	I0703 23:04:49.396648   27242 main.go:141] libmachine: (ha-856893) DBG |   <dns enable='no'/>
	I0703 23:04:49.396658   27242 main.go:141] libmachine: (ha-856893) DBG |   
	I0703 23:04:49.396672   27242 main.go:141] libmachine: (ha-856893) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0703 23:04:49.396682   27242 main.go:141] libmachine: (ha-856893) DBG |     <dhcp>
	I0703 23:04:49.396695   27242 main.go:141] libmachine: (ha-856893) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0703 23:04:49.396705   27242 main.go:141] libmachine: (ha-856893) DBG |     </dhcp>
	I0703 23:04:49.396713   27242 main.go:141] libmachine: (ha-856893) DBG |   </ip>
	I0703 23:04:49.396722   27242 main.go:141] libmachine: (ha-856893) DBG |   
	I0703 23:04:49.396747   27242 main.go:141] libmachine: (ha-856893) DBG | </network>
	I0703 23:04:49.396767   27242 main.go:141] libmachine: (ha-856893) DBG | 
	I0703 23:04:49.401937   27242 main.go:141] libmachine: (ha-856893) DBG | trying to create private KVM network mk-ha-856893 192.168.39.0/24...
	I0703 23:04:49.466045   27242 main.go:141] libmachine: (ha-856893) DBG | private KVM network mk-ha-856893 192.168.39.0/24 created
	I0703 23:04:49.466078   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:49.465979   27265 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:04:49.466090   27242 main.go:141] libmachine: (ha-856893) Setting up store path in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893 ...
	I0703 23:04:49.466112   27242 main.go:141] libmachine: (ha-856893) Building disk image from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0703 23:04:49.466139   27242 main.go:141] libmachine: (ha-856893) Downloading /home/jenkins/minikube-integration/18998-9396/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso...
	I0703 23:04:49.697240   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:49.697136   27265 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa...
	I0703 23:04:49.882712   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:49.882599   27265 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/ha-856893.rawdisk...
	I0703 23:04:49.882738   27242 main.go:141] libmachine: (ha-856893) DBG | Writing magic tar header
	I0703 23:04:49.882748   27242 main.go:141] libmachine: (ha-856893) DBG | Writing SSH key tar header
	I0703 23:04:49.882772   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:49.882735   27265 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893 ...
	I0703 23:04:49.882887   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893
	I0703 23:04:49.882920   27242 main.go:141] libmachine: (ha-856893) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893 (perms=drwx------)
	I0703 23:04:49.882933   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines
	I0703 23:04:49.882948   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:04:49.882958   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396
	I0703 23:04:49.882966   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0703 23:04:49.882975   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home/jenkins
	I0703 23:04:49.882984   27242 main.go:141] libmachine: (ha-856893) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines (perms=drwxr-xr-x)
	I0703 23:04:49.882994   27242 main.go:141] libmachine: (ha-856893) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube (perms=drwxr-xr-x)
	I0703 23:04:49.882999   27242 main.go:141] libmachine: (ha-856893) DBG | Checking permissions on dir: /home
	I0703 23:04:49.883009   27242 main.go:141] libmachine: (ha-856893) DBG | Skipping /home - not owner
	I0703 23:04:49.883025   27242 main.go:141] libmachine: (ha-856893) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396 (perms=drwxrwxr-x)
	I0703 23:04:49.883039   27242 main.go:141] libmachine: (ha-856893) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0703 23:04:49.883051   27242 main.go:141] libmachine: (ha-856893) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0703 23:04:49.883062   27242 main.go:141] libmachine: (ha-856893) Creating domain...
	I0703 23:04:49.884190   27242 main.go:141] libmachine: (ha-856893) define libvirt domain using xml: 
	I0703 23:04:49.884219   27242 main.go:141] libmachine: (ha-856893) <domain type='kvm'>
	I0703 23:04:49.884229   27242 main.go:141] libmachine: (ha-856893)   <name>ha-856893</name>
	I0703 23:04:49.884242   27242 main.go:141] libmachine: (ha-856893)   <memory unit='MiB'>2200</memory>
	I0703 23:04:49.884251   27242 main.go:141] libmachine: (ha-856893)   <vcpu>2</vcpu>
	I0703 23:04:49.884257   27242 main.go:141] libmachine: (ha-856893)   <features>
	I0703 23:04:49.884266   27242 main.go:141] libmachine: (ha-856893)     <acpi/>
	I0703 23:04:49.884273   27242 main.go:141] libmachine: (ha-856893)     <apic/>
	I0703 23:04:49.884284   27242 main.go:141] libmachine: (ha-856893)     <pae/>
	I0703 23:04:49.884302   27242 main.go:141] libmachine: (ha-856893)     
	I0703 23:04:49.884313   27242 main.go:141] libmachine: (ha-856893)   </features>
	I0703 23:04:49.884325   27242 main.go:141] libmachine: (ha-856893)   <cpu mode='host-passthrough'>
	I0703 23:04:49.884337   27242 main.go:141] libmachine: (ha-856893)   
	I0703 23:04:49.884343   27242 main.go:141] libmachine: (ha-856893)   </cpu>
	I0703 23:04:49.884354   27242 main.go:141] libmachine: (ha-856893)   <os>
	I0703 23:04:49.884364   27242 main.go:141] libmachine: (ha-856893)     <type>hvm</type>
	I0703 23:04:49.884374   27242 main.go:141] libmachine: (ha-856893)     <boot dev='cdrom'/>
	I0703 23:04:49.884383   27242 main.go:141] libmachine: (ha-856893)     <boot dev='hd'/>
	I0703 23:04:49.884394   27242 main.go:141] libmachine: (ha-856893)     <bootmenu enable='no'/>
	I0703 23:04:49.884406   27242 main.go:141] libmachine: (ha-856893)   </os>
	I0703 23:04:49.884433   27242 main.go:141] libmachine: (ha-856893)   <devices>
	I0703 23:04:49.884459   27242 main.go:141] libmachine: (ha-856893)     <disk type='file' device='cdrom'>
	I0703 23:04:49.884478   27242 main.go:141] libmachine: (ha-856893)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/boot2docker.iso'/>
	I0703 23:04:49.884490   27242 main.go:141] libmachine: (ha-856893)       <target dev='hdc' bus='scsi'/>
	I0703 23:04:49.884520   27242 main.go:141] libmachine: (ha-856893)       <readonly/>
	I0703 23:04:49.884539   27242 main.go:141] libmachine: (ha-856893)     </disk>
	I0703 23:04:49.884550   27242 main.go:141] libmachine: (ha-856893)     <disk type='file' device='disk'>
	I0703 23:04:49.884564   27242 main.go:141] libmachine: (ha-856893)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0703 23:04:49.884581   27242 main.go:141] libmachine: (ha-856893)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/ha-856893.rawdisk'/>
	I0703 23:04:49.884592   27242 main.go:141] libmachine: (ha-856893)       <target dev='hda' bus='virtio'/>
	I0703 23:04:49.884605   27242 main.go:141] libmachine: (ha-856893)     </disk>
	I0703 23:04:49.884623   27242 main.go:141] libmachine: (ha-856893)     <interface type='network'>
	I0703 23:04:49.884635   27242 main.go:141] libmachine: (ha-856893)       <source network='mk-ha-856893'/>
	I0703 23:04:49.884644   27242 main.go:141] libmachine: (ha-856893)       <model type='virtio'/>
	I0703 23:04:49.884657   27242 main.go:141] libmachine: (ha-856893)     </interface>
	I0703 23:04:49.884668   27242 main.go:141] libmachine: (ha-856893)     <interface type='network'>
	I0703 23:04:49.884679   27242 main.go:141] libmachine: (ha-856893)       <source network='default'/>
	I0703 23:04:49.884694   27242 main.go:141] libmachine: (ha-856893)       <model type='virtio'/>
	I0703 23:04:49.884705   27242 main.go:141] libmachine: (ha-856893)     </interface>
	I0703 23:04:49.884715   27242 main.go:141] libmachine: (ha-856893)     <serial type='pty'>
	I0703 23:04:49.884736   27242 main.go:141] libmachine: (ha-856893)       <target port='0'/>
	I0703 23:04:49.884745   27242 main.go:141] libmachine: (ha-856893)     </serial>
	I0703 23:04:49.884761   27242 main.go:141] libmachine: (ha-856893)     <console type='pty'>
	I0703 23:04:49.884777   27242 main.go:141] libmachine: (ha-856893)       <target type='serial' port='0'/>
	I0703 23:04:49.884789   27242 main.go:141] libmachine: (ha-856893)     </console>
	I0703 23:04:49.884799   27242 main.go:141] libmachine: (ha-856893)     <rng model='virtio'>
	I0703 23:04:49.884810   27242 main.go:141] libmachine: (ha-856893)       <backend model='random'>/dev/random</backend>
	I0703 23:04:49.884819   27242 main.go:141] libmachine: (ha-856893)     </rng>
	I0703 23:04:49.884831   27242 main.go:141] libmachine: (ha-856893)     
	I0703 23:04:49.884838   27242 main.go:141] libmachine: (ha-856893)     
	I0703 23:04:49.884855   27242 main.go:141] libmachine: (ha-856893)   </devices>
	I0703 23:04:49.884874   27242 main.go:141] libmachine: (ha-856893) </domain>
	I0703 23:04:49.884887   27242 main.go:141] libmachine: (ha-856893) 
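
Editor's note: the domain XML printed above is what the kvm2 driver feeds to libvirt before the VM exists. A minimal sketch of turning such an XML document into a running domain with the libvirt Go bindings is below; the import path, placeholder XML string, and error handling are assumptions for illustration, not the driver's actual code.

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
)

func main() {
	// Connect to the same URI the log shows (qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Placeholder: in practice this would be the full <domain type='kvm'>...</domain>
	// document printed in the log above.
	domainXML := "<domain type='kvm'>...</domain>"

	// Define the domain from XML, then start it (the "Creating domain..." step).
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
}
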
	I0703 23:04:49.889408   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:7f:ab:67 in network default
	I0703 23:04:49.890000   27242 main.go:141] libmachine: (ha-856893) Ensuring networks are active...
	I0703 23:04:49.890020   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:49.890827   27242 main.go:141] libmachine: (ha-856893) Ensuring network default is active
	I0703 23:04:49.891173   27242 main.go:141] libmachine: (ha-856893) Ensuring network mk-ha-856893 is active
	I0703 23:04:49.891707   27242 main.go:141] libmachine: (ha-856893) Getting domain xml...
	I0703 23:04:49.892417   27242 main.go:141] libmachine: (ha-856893) Creating domain...
	I0703 23:04:51.076607   27242 main.go:141] libmachine: (ha-856893) Waiting to get IP...
	I0703 23:04:51.077509   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:51.077950   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:51.078001   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:51.077954   27265 retry.go:31] will retry after 279.728515ms: waiting for machine to come up
	I0703 23:04:51.359420   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:51.359916   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:51.359951   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:51.359884   27265 retry.go:31] will retry after 247.648785ms: waiting for machine to come up
	I0703 23:04:51.609238   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:51.609581   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:51.609605   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:51.609536   27265 retry.go:31] will retry after 462.632413ms: waiting for machine to come up
	I0703 23:04:52.074013   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:52.074458   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:52.074495   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:52.074436   27265 retry.go:31] will retry after 535.361005ms: waiting for machine to come up
	I0703 23:04:52.611006   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:52.611471   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:52.611499   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:52.611417   27265 retry.go:31] will retry after 566.856393ms: waiting for machine to come up
	I0703 23:04:53.180116   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:53.180549   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:53.180572   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:53.180514   27265 retry.go:31] will retry after 893.437933ms: waiting for machine to come up
	I0703 23:04:54.075051   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:54.075493   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:54.075541   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:54.075436   27265 retry.go:31] will retry after 1.153111216s: waiting for machine to come up
	I0703 23:04:55.229683   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:55.230080   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:55.230099   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:55.230058   27265 retry.go:31] will retry after 1.209590198s: waiting for machine to come up
	I0703 23:04:56.441430   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:56.441787   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:56.441815   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:56.441765   27265 retry.go:31] will retry after 1.140725525s: waiting for machine to come up
	I0703 23:04:57.583965   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:57.584360   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:57.584387   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:57.584309   27265 retry.go:31] will retry after 2.005681822s: waiting for machine to come up
	I0703 23:04:59.591365   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:04:59.591779   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:04:59.591807   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:04:59.591747   27265 retry.go:31] will retry after 2.709221348s: waiting for machine to come up
	I0703 23:05:02.304438   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:02.304759   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:05:02.304799   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:05:02.304723   27265 retry.go:31] will retry after 3.359635089s: waiting for machine to come up
	I0703 23:05:05.666017   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:05.666403   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find current IP address of domain ha-856893 in network mk-ha-856893
	I0703 23:05:05.666432   27242 main.go:141] libmachine: (ha-856893) DBG | I0703 23:05:05.666364   27265 retry.go:31] will retry after 3.83770662s: waiting for machine to come up
	I0703 23:05:09.505078   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.505551   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has current primary IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.505566   27242 main.go:141] libmachine: (ha-856893) Found IP for machine: 192.168.39.172
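
Editor's note: the "will retry after ..." lines above show the wait-for-IP pattern: poll the DHCP leases and sleep for an increasing, jittered delay until the domain reports an address. A self-contained sketch of that kind of loop follows; waitForIP and lookupIP are hypothetical names, and lookupIP stands in for the real lease query.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases.
func lookupIP() (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP retries lookupIP with growing, jittered delays until it succeeds
// or the overall timeout expires, mirroring the retry lines in the log above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Sleep for the base delay plus random jitter, then roughly double it.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out after %s waiting for an IP", timeout)
}

func main() {
	// Short timeout so the sketch finishes quickly when run as-is.
	ip, err := waitForIP(3 * time.Second)
	fmt.Println(ip, err)
}
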
	I0703 23:05:09.505579   27242 main.go:141] libmachine: (ha-856893) Reserving static IP address...
	I0703 23:05:09.505883   27242 main.go:141] libmachine: (ha-856893) DBG | unable to find host DHCP lease matching {name: "ha-856893", mac: "52:54:00:f8:43:23", ip: "192.168.39.172"} in network mk-ha-856893
	I0703 23:05:09.585944   27242 main.go:141] libmachine: (ha-856893) DBG | Getting to WaitForSSH function...
	I0703 23:05:09.585974   27242 main.go:141] libmachine: (ha-856893) Reserved static IP address: 192.168.39.172
	I0703 23:05:09.585992   27242 main.go:141] libmachine: (ha-856893) Waiting for SSH to be available...
	I0703 23:05:09.588555   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.589004   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:09.589032   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.589229   27242 main.go:141] libmachine: (ha-856893) DBG | Using SSH client type: external
	I0703 23:05:09.589251   27242 main.go:141] libmachine: (ha-856893) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa (-rw-------)
	I0703 23:05:09.589277   27242 main.go:141] libmachine: (ha-856893) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 23:05:09.589292   27242 main.go:141] libmachine: (ha-856893) DBG | About to run SSH command:
	I0703 23:05:09.589321   27242 main.go:141] libmachine: (ha-856893) DBG | exit 0
	I0703 23:05:09.716024   27242 main.go:141] libmachine: (ha-856893) DBG | SSH cmd err, output: <nil>: 
	I0703 23:05:09.716309   27242 main.go:141] libmachine: (ha-856893) KVM machine creation complete!
	I0703 23:05:09.716633   27242 main.go:141] libmachine: (ha-856893) Calling .GetConfigRaw
	I0703 23:05:09.717150   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:09.717368   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:09.717544   27242 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0703 23:05:09.717558   27242 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:05:09.718761   27242 main.go:141] libmachine: Detecting operating system of created instance...
	I0703 23:05:09.718778   27242 main.go:141] libmachine: Waiting for SSH to be available...
	I0703 23:05:09.718786   27242 main.go:141] libmachine: Getting to WaitForSSH function...
	I0703 23:05:09.718793   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:09.720891   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.721227   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:09.721246   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.721398   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:09.721581   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:09.721736   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:09.721884   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:09.722050   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:05:09.722255   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:05:09.722270   27242 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0703 23:05:09.827380   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
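
Editor's note: the probe above (running "exit 0" over SSH) is how the driver decides the guest is reachable. A stand-alone sketch of the same probe with the golang.org/x/crypto/ssh package follows; the host, user, and key path are taken from the log, while the program structure itself is only illustrative.

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address as reported in the log above.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.39.172:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// The same liveness check the provisioner runs: a no-op command that must exit 0.
	if err := session.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}
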
	I0703 23:05:09.827404   27242 main.go:141] libmachine: Detecting the provisioner...
	I0703 23:05:09.827412   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:09.830421   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.830736   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:09.830762   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.830957   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:09.831181   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:09.831359   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:09.831522   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:09.831674   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:05:09.831845   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:05:09.831858   27242 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0703 23:05:09.940700   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0703 23:05:09.940805   27242 main.go:141] libmachine: found compatible host: buildroot
	I0703 23:05:09.940820   27242 main.go:141] libmachine: Provisioning with buildroot...
	I0703 23:05:09.940836   27242 main.go:141] libmachine: (ha-856893) Calling .GetMachineName
	I0703 23:05:09.941067   27242 buildroot.go:166] provisioning hostname "ha-856893"
	I0703 23:05:09.941088   27242 main.go:141] libmachine: (ha-856893) Calling .GetMachineName
	I0703 23:05:09.941282   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:09.943686   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.944069   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:09.944095   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:09.944257   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:09.944455   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:09.944603   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:09.944740   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:09.944877   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:05:09.945060   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:05:09.945071   27242 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-856893 && echo "ha-856893" | sudo tee /etc/hostname
	I0703 23:05:10.067286   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-856893
	
	I0703 23:05:10.067311   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:10.069961   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.070287   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.070308   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.070498   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:10.070682   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:10.070896   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:10.071050   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:10.071212   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:05:10.071414   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:05:10.071431   27242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-856893' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-856893/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-856893' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 23:05:10.189893   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:05:10.189928   27242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0703 23:05:10.189959   27242 buildroot.go:174] setting up certificates
	I0703 23:05:10.189968   27242 provision.go:84] configureAuth start
	I0703 23:05:10.189976   27242 main.go:141] libmachine: (ha-856893) Calling .GetMachineName
	I0703 23:05:10.190275   27242 main.go:141] libmachine: (ha-856893) Calling .GetIP
	I0703 23:05:10.193226   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.193602   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.193625   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.193795   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:10.195779   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.196097   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.196119   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.196195   27242 provision.go:143] copyHostCerts
	I0703 23:05:10.196234   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:05:10.196277   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0703 23:05:10.196304   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:05:10.196383   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0703 23:05:10.196499   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:05:10.196528   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0703 23:05:10.196537   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:05:10.196576   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0703 23:05:10.196682   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:05:10.196702   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0703 23:05:10.196708   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:05:10.196732   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0703 23:05:10.196780   27242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.ha-856893 san=[127.0.0.1 192.168.39.172 ha-856893 localhost minikube]
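
Editor's note: the line above shows minikube generating a server certificate whose SANs cover the VM's IP and hostnames. The sketch below builds a certificate with the same kind of SAN list using only the Go standard library; it self-signs for brevity, whereas the real flow signs with the minikube CA key referenced in the log.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-856893"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.172")},
		DNSNames:    []string{"ha-856893", "localhost", "minikube"},
	}
	// Self-signed: template doubles as issuer; the real code would pass the CA cert and key here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
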
	I0703 23:05:10.449385   27242 provision.go:177] copyRemoteCerts
	I0703 23:05:10.449453   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:05:10.449480   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:10.452086   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.452311   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.452338   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.452543   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:10.452743   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:10.452885   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:10.452991   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:05:10.538502   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0703 23:05:10.538569   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0703 23:05:10.565459   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0703 23:05:10.565517   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0703 23:05:10.591713   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0703 23:05:10.591782   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0703 23:05:10.620534   27242 provision.go:87] duration metric: took 430.554362ms to configureAuth
	I0703 23:05:10.620571   27242 buildroot.go:189] setting minikube options for container-runtime
	I0703 23:05:10.620750   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:05:10.620845   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:10.623353   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.623771   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.623799   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.623935   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:10.624152   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:10.624325   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:10.624439   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:10.624606   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:05:10.624765   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:05:10.624779   27242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0703 23:05:10.904599   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0703 23:05:10.904631   27242 main.go:141] libmachine: Checking connection to Docker...
	I0703 23:05:10.904641   27242 main.go:141] libmachine: (ha-856893) Calling .GetURL
	I0703 23:05:10.905870   27242 main.go:141] libmachine: (ha-856893) DBG | Using libvirt version 6000000
	I0703 23:05:10.907791   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.908127   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.908151   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.908372   27242 main.go:141] libmachine: Docker is up and running!
	I0703 23:05:10.908390   27242 main.go:141] libmachine: Reticulating splines...
	I0703 23:05:10.908398   27242 client.go:171] duration metric: took 21.514686715s to LocalClient.Create
	I0703 23:05:10.908429   27242 start.go:167] duration metric: took 21.514763646s to libmachine.API.Create "ha-856893"
	I0703 23:05:10.908441   27242 start.go:293] postStartSetup for "ha-856893" (driver="kvm2")
	I0703 23:05:10.908451   27242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 23:05:10.908484   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:10.908725   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 23:05:10.908748   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:10.910851   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.911184   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:10.911225   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:10.911349   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:10.911538   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:10.911687   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:10.911796   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:05:10.994829   27242 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 23:05:10.999699   27242 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 23:05:10.999723   27242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0703 23:05:10.999787   27242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0703 23:05:10.999867   27242 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0703 23:05:10.999903   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /etc/ssl/certs/165742.pem
	I0703 23:05:11.000007   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0703 23:05:11.010870   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:05:11.041611   27242 start.go:296] duration metric: took 133.157203ms for postStartSetup
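Editor's note: during postStartSetup, filesync.go scans .minikube/files and maps each file it finds onto the same path in the guest (here etc/ssl/certs/165742.pem becomes /etc/ssl/certs/165742.pem). A rough equivalent of that scan is sketched below; the directory name is a placeholder for MINIKUBE_HOME/.minikube/files.

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
        "strings"
    )

    // scanAssets walks localRoot and returns a map of local file -> guest destination,
    // mirroring the relative path under localRoot onto the guest filesystem root.
    func scanAssets(localRoot string) (map[string]string, error) {
        assets := map[string]string{}
        err := filepath.WalkDir(localRoot, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel := strings.TrimPrefix(path, localRoot)
            assets[path] = "/" + strings.TrimPrefix(filepath.ToSlash(rel), "/")
            return nil
        })
        return assets, err
    }

    func main() {
        assets, err := scanAssets("/home/jenkins/.minikube/files") // placeholder root
        if err != nil {
            panic(err)
        }
        for src, dst := range assets {
            fmt.Printf("%s -> %s\n", src, dst)
        }
    }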
	I0703 23:05:11.041689   27242 main.go:141] libmachine: (ha-856893) Calling .GetConfigRaw
	I0703 23:05:11.042230   27242 main.go:141] libmachine: (ha-856893) Calling .GetIP
	I0703 23:05:11.045028   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.045417   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:11.045449   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.045801   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:05:11.046044   27242 start.go:128] duration metric: took 21.670585889s to createHost
	I0703 23:05:11.046071   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:11.048601   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.048906   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:11.048929   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.049092   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:11.049289   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:11.049445   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:11.049641   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:11.049848   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:05:11.050029   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:05:11.050041   27242 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0703 23:05:11.156804   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720047911.130080211
	
	I0703 23:05:11.156825   27242 fix.go:216] guest clock: 1720047911.130080211
	I0703 23:05:11.156833   27242 fix.go:229] Guest: 2024-07-03 23:05:11.130080211 +0000 UTC Remote: 2024-07-03 23:05:11.046058241 +0000 UTC m=+21.776314180 (delta=84.02197ms)
	I0703 23:05:11.156877   27242 fix.go:200] guest clock delta is within tolerance: 84.02197ms
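Editor's note: fix.go reads the guest clock with `date +%s.%N` over SSH and compares it against the host clock, adjusting it only when the skew exceeds a tolerance; here the 84ms delta is within tolerance. A small sketch of that comparison follows; the tolerance value is assumed for illustration, not taken from minikube.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestClockDelta parses the output of `date +%s.%N` and returns guest-minus-host skew.
    func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return 0, err
            }
        }
        return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
        // Values captured in the log above.
        delta, err := guestClockDelta("1720047911.130080211", time.Unix(1720047911, 46058241))
        if err != nil {
            panic(err)
        }
        const tolerance = 2 * time.Second // assumed tolerance, for illustration only
        fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta > -tolerance && delta < tolerance)
    }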
	I0703 23:05:11.156884   27242 start.go:83] releasing machines lock for "ha-856893", held for 21.781493772s
	I0703 23:05:11.156910   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:11.157171   27242 main.go:141] libmachine: (ha-856893) Calling .GetIP
	I0703 23:05:11.159661   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.159989   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:11.160008   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.160187   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:11.160682   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:11.160849   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:11.160925   27242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 23:05:11.160975   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:11.161091   27242 ssh_runner.go:195] Run: cat /version.json
	I0703 23:05:11.161115   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:11.163570   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.163644   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.163933   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:11.163969   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:11.163996   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.164083   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:11.164233   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:11.164361   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:11.164513   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:11.165190   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:11.165203   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:11.165445   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:11.165456   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:05:11.165594   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:05:11.264903   27242 ssh_runner.go:195] Run: systemctl --version
	I0703 23:05:11.271362   27242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0703 23:05:11.431766   27242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0703 23:05:11.437888   27242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 23:05:11.437960   27242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 23:05:11.456204   27242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0703 23:05:11.456228   27242 start.go:494] detecting cgroup driver to use...
	I0703 23:05:11.456282   27242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 23:05:11.478288   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 23:05:11.496504   27242 docker.go:217] disabling cri-docker service (if available) ...
	I0703 23:05:11.496546   27242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0703 23:05:11.513312   27242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0703 23:05:11.529272   27242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0703 23:05:11.651791   27242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0703 23:05:11.833740   27242 docker.go:233] disabling docker service ...
	I0703 23:05:11.833798   27242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0703 23:05:11.850082   27242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0703 23:05:11.864945   27242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0703 23:05:11.993322   27242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0703 23:05:12.121368   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0703 23:05:12.136604   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 23:05:12.156727   27242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0703 23:05:12.156790   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.168812   27242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0703 23:05:12.168881   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.181117   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.193084   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.204859   27242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 23:05:12.217389   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.229489   27242 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.248248   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:05:12.260054   27242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 23:05:12.270988   27242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0703 23:05:12.271050   27242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0703 23:05:12.285900   27242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0703 23:05:12.296588   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:05:12.421931   27242 ssh_runner.go:195] Run: sudo systemctl restart crio
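Editor's note: the sequence of `sed -i` calls above rewrites /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager before the daemon restart. The same two rewrites expressed in Go, operating on an in-memory copy of the file for illustration:

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewriteCrioConf mirrors the two sed edits from the log:
    // pin the pause image and force the cgroupfs cgroup manager.
    func rewriteCrioConf(conf string) string {
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        return conf
    }

    func main() {
        in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.8\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
        fmt.Print(rewriteCrioConf(in))
    }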
	I0703 23:05:12.567694   27242 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0703 23:05:12.567771   27242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0703 23:05:12.573160   27242 start.go:562] Will wait 60s for crictl version
	I0703 23:05:12.573227   27242 ssh_runner.go:195] Run: which crictl
	I0703 23:05:12.577204   27242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 23:05:12.618785   27242 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0703 23:05:12.618858   27242 ssh_runner.go:195] Run: crio --version
	I0703 23:05:12.648983   27242 ssh_runner.go:195] Run: crio --version
	I0703 23:05:12.680410   27242 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0703 23:05:12.681677   27242 main.go:141] libmachine: (ha-856893) Calling .GetIP
	I0703 23:05:12.684268   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:12.684586   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:12.684615   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:12.684826   27242 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0703 23:05:12.689291   27242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:05:12.702754   27242 kubeadm.go:877] updating cluster {Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0703 23:05:12.702853   27242 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:05:12.702897   27242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:05:12.737089   27242 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0703 23:05:12.737156   27242 ssh_runner.go:195] Run: which lz4
	I0703 23:05:12.741174   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0703 23:05:12.741275   27242 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0703 23:05:12.745594   27242 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0703 23:05:12.745632   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0703 23:05:14.273244   27242 crio.go:462] duration metric: took 1.531990406s to copy over tarball
	I0703 23:05:14.273329   27242 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0703 23:05:16.532872   27242 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.259515995s)
	I0703 23:05:16.532901   27242 crio.go:469] duration metric: took 2.259629155s to extract the tarball
	I0703 23:05:16.532912   27242 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0703 23:05:16.571634   27242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:05:16.617842   27242 crio.go:514] all images are preloaded for cri-o runtime.
	I0703 23:05:16.617868   27242 cache_images.go:84] Images are preloaded, skipping loading
	I0703 23:05:16.617876   27242 kubeadm.go:928] updating node { 192.168.39.172 8443 v1.30.2 crio true true} ...
	I0703 23:05:16.617964   27242 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-856893 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0703 23:05:16.618023   27242 ssh_runner.go:195] Run: crio config
	I0703 23:05:16.664162   27242 cni.go:84] Creating CNI manager for ""
	I0703 23:05:16.664181   27242 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0703 23:05:16.664189   27242 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0703 23:05:16.664210   27242 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-856893 NodeName:ha-856893 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0703 23:05:16.664387   27242 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-856893"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
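Editor's note: the kubeadm document above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file) is rendered from the options struct logged at kubeadm.go:181. A minimal sketch of rendering such a document with text/template; this covers only a fragment of the real template, and the struct fields are chosen for illustration.

    package main

    import (
        "os"
        "text/template"
    )

    // initCfg holds just the fields needed for this fragment; minikube's real
    // template consumes the much larger options struct shown in the log.
    type initCfg struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        CRISocket        string
    }

    const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      taints: []
    `

    func main() {
        tmpl := template.Must(template.New("kubeadm").Parse(fragment))
        cfg := initCfg{
            AdvertiseAddress: "192.168.39.172",
            BindPort:         8443,
            NodeName:         "ha-856893",
            CRISocket:        "unix:///var/run/crio/crio.sock",
        }
        if err := tmpl.Execute(os.Stdout, cfg); err != nil {
            panic(err)
        }
    }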
	
	I0703 23:05:16.664413   27242 kube-vip.go:115] generating kube-vip config ...
	I0703 23:05:16.664474   27242 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0703 23:05:16.682379   27242 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0703 23:05:16.682508   27242 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
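Editor's note: kube-vip.go only adds the lb_enable/lb_port environment variables to the manifest above after confirming the IPVS kernel modules can be loaded (the `modprobe --all ip_vs ...` run logged before it). A sketch of that gate, using os/exec locally where minikube runs the probe over SSH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ipvsAvailable reports whether the IPVS modules needed for kube-vip's
    // control-plane load balancing can be loaded on this host.
    func ipvsAvailable() bool {
        cmd := exec.Command("sudo", "sh", "-c",
            "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack")
        return cmd.Run() == nil
    }

    func main() {
        env := map[string]string{
            "vip_arp": "true",
            "port":    "8443",
            "address": "192.168.39.254",
        }
        if ipvsAvailable() {
            // Matches the "auto-enabling control-plane load-balancing" branch in the log.
            env["lb_enable"] = "true"
            env["lb_port"] = "8443"
        }
        fmt.Println(env)
    }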
	I0703 23:05:16.682575   27242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0703 23:05:16.693673   27242 binaries.go:44] Found k8s binaries, skipping transfer
	I0703 23:05:16.693753   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0703 23:05:16.704380   27242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0703 23:05:16.722634   27242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 23:05:16.740879   27242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0703 23:05:16.759081   27242 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0703 23:05:16.777539   27242 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0703 23:05:16.781905   27242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:05:16.795594   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:05:16.932173   27242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:05:16.960438   27242 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893 for IP: 192.168.39.172
	I0703 23:05:16.960457   27242 certs.go:194] generating shared ca certs ...
	I0703 23:05:16.960471   27242 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:16.960625   27242 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0703 23:05:16.960687   27242 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0703 23:05:16.960701   27242 certs.go:256] generating profile certs ...
	I0703 23:05:16.960769   27242 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key
	I0703 23:05:16.960789   27242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.crt with IP's: []
	I0703 23:05:17.180299   27242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.crt ...
	I0703 23:05:17.180327   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.crt: {Name:mked142f33e96cc69e07cbef413ceae8eaadb6ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:17.180495   27242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key ...
	I0703 23:05:17.180505   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key: {Name:mkda59ba7700af447f9573712b80d771070e40e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:17.180580   27242 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.7d0f4c89
	I0703 23:05:17.180594   27242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.7d0f4c89 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.172 192.168.39.254]
	I0703 23:05:17.268855   27242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.7d0f4c89 ...
	I0703 23:05:17.268884   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.7d0f4c89: {Name:mk564c544d24be22e8d81f70b99af5878e84b732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:17.269036   27242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.7d0f4c89 ...
	I0703 23:05:17.269054   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.7d0f4c89: {Name:mk2b21d824f1f5ef781a1bb28b7c84b56246aa84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:17.269126   27242 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.7d0f4c89 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt
	I0703 23:05:17.269222   27242 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.7d0f4c89 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key
	I0703 23:05:17.269280   27242 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key
	I0703 23:05:17.269296   27242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt with IP's: []
	I0703 23:05:17.337820   27242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt ...
	I0703 23:05:17.337850   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt: {Name:mk56d081fd7b738fa50b488ebdec0c915931f1d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:17.338007   27242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key ...
	I0703 23:05:17.338017   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key: {Name:mk1bfcc2bc169c4499f89205b355a5beb44be061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
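Editor's note: certs.go above generates the profile's client, apiserver and aggregator certificates by signing fresh keys with the shared minikube CA. A condensed sketch of signing one such certificate with crypto/x509; the subject, SANs and validity periods are placeholders, and a throwaway CA stands in for the one under .minikube/ca.{crt,key}.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA for illustration only.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf certificate signed by the CA; the SANs mirror the IPs in the log.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            IPAddresses:  []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("192.168.39.172"), net.ParseIP("192.168.39.254")},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        }
        leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }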
	I0703 23:05:17.338083   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0703 23:05:17.338101   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0703 23:05:17.338111   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0703 23:05:17.338124   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0703 23:05:17.338136   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0703 23:05:17.338155   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0703 23:05:17.338167   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0703 23:05:17.338184   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0703 23:05:17.338228   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0703 23:05:17.338258   27242 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0703 23:05:17.338267   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0703 23:05:17.338290   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0703 23:05:17.338309   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0703 23:05:17.338334   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0703 23:05:17.338368   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:05:17.338396   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /usr/share/ca-certificates/165742.pem
	I0703 23:05:17.338409   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:05:17.338422   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem -> /usr/share/ca-certificates/16574.pem
	I0703 23:05:17.338943   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 23:05:17.367294   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 23:05:17.394625   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 23:05:17.421449   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0703 23:05:17.448364   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0703 23:05:17.478967   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0703 23:05:17.507381   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 23:05:17.535692   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0703 23:05:17.564746   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0703 23:05:17.592808   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 23:05:17.620310   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0703 23:05:17.648069   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0703 23:05:17.666458   27242 ssh_runner.go:195] Run: openssl version
	I0703 23:05:17.673016   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0703 23:05:17.685065   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0703 23:05:17.690329   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0703 23:05:17.690403   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0703 23:05:17.696993   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0703 23:05:17.709145   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 23:05:17.721321   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:05:17.726475   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:05:17.726555   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:05:17.732930   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0703 23:05:17.744956   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0703 23:05:17.759349   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0703 23:05:17.769931   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0703 23:05:17.769997   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0703 23:05:17.777908   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
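Editor's note: the openssl calls above compute each certificate's subject hash and create `<hash>.0` symlinks under /etc/ssl/certs so the system trust store picks the files up. A sketch of the same two steps with os/exec and os.Symlink; the certificate path in main is one of the files from the log and is only an example.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCert computes the openssl subject hash of certPath and creates
    // /etc/ssl/certs/<hash>.0 pointing at it, mirroring the log above.
    func linkCert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if _, err := os.Lstat(link); err == nil {
            return nil // already linked
        }
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }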
	I0703 23:05:17.793803   27242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 23:05:17.798683   27242 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0703 23:05:17.798746   27242 kubeadm.go:391] StartCluster: {Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:05:17.798856   27242 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0703 23:05:17.798950   27242 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0703 23:05:17.857895   27242 cri.go:89] found id: ""
	I0703 23:05:17.857958   27242 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0703 23:05:17.869751   27242 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0703 23:05:17.881191   27242 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0703 23:05:17.892752   27242 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0703 23:05:17.892774   27242 kubeadm.go:156] found existing configuration files:
	
	I0703 23:05:17.892815   27242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0703 23:05:17.904127   27242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0703 23:05:17.904196   27242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0703 23:05:17.916159   27242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0703 23:05:17.927292   27242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0703 23:05:17.927363   27242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0703 23:05:17.938640   27242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0703 23:05:17.949163   27242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0703 23:05:17.949218   27242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0703 23:05:17.960636   27242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0703 23:05:17.971220   27242 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0703 23:05:17.971276   27242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0703 23:05:17.982313   27242 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0703 23:05:18.243554   27242 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0703 23:05:28.408397   27242 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0703 23:05:28.408485   27242 kubeadm.go:309] [preflight] Running pre-flight checks
	I0703 23:05:28.408605   27242 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0703 23:05:28.408745   27242 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0703 23:05:28.408866   27242 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0703 23:05:28.408942   27242 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0703 23:05:28.410573   27242 out.go:204]   - Generating certificates and keys ...
	I0703 23:05:28.410647   27242 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0703 23:05:28.410731   27242 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0703 23:05:28.410801   27242 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0703 23:05:28.410850   27242 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0703 23:05:28.410900   27242 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0703 23:05:28.410954   27242 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0703 23:05:28.411002   27242 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0703 23:05:28.411118   27242 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-856893 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I0703 23:05:28.411163   27242 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0703 23:05:28.411315   27242 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-856893 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I0703 23:05:28.411421   27242 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0703 23:05:28.411509   27242 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0703 23:05:28.411572   27242 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0703 23:05:28.411648   27242 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0703 23:05:28.411722   27242 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0703 23:05:28.411796   27242 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0703 23:05:28.411892   27242 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0703 23:05:28.411981   27242 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0703 23:05:28.412064   27242 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0703 23:05:28.412191   27242 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0703 23:05:28.412266   27242 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0703 23:05:28.413911   27242 out.go:204]   - Booting up control plane ...
	I0703 23:05:28.414019   27242 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0703 23:05:28.414100   27242 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0703 23:05:28.414173   27242 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0703 23:05:28.414325   27242 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0703 23:05:28.414456   27242 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0703 23:05:28.414501   27242 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0703 23:05:28.414606   27242 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0703 23:05:28.414662   27242 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0703 23:05:28.414710   27242 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.527133ms
	I0703 23:05:28.414781   27242 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0703 23:05:28.414827   27242 kubeadm.go:309] [api-check] The API server is healthy after 6.123038103s
	I0703 23:05:28.414915   27242 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0703 23:05:28.415058   27242 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0703 23:05:28.415150   27242 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0703 23:05:28.415339   27242 kubeadm.go:309] [mark-control-plane] Marking the node ha-856893 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0703 23:05:28.415422   27242 kubeadm.go:309] [bootstrap-token] Using token: 12qvkr.qb869phsnq1wz0rf
	I0703 23:05:28.416767   27242 out.go:204]   - Configuring RBAC rules ...
	I0703 23:05:28.416884   27242 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0703 23:05:28.416965   27242 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0703 23:05:28.417123   27242 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0703 23:05:28.417274   27242 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0703 23:05:28.417401   27242 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0703 23:05:28.417511   27242 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0703 23:05:28.417640   27242 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0703 23:05:28.417710   27242 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0703 23:05:28.417779   27242 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0703 23:05:28.417788   27242 kubeadm.go:309] 
	I0703 23:05:28.417861   27242 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0703 23:05:28.417870   27242 kubeadm.go:309] 
	I0703 23:05:28.417956   27242 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0703 23:05:28.417970   27242 kubeadm.go:309] 
	I0703 23:05:28.418024   27242 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0703 23:05:28.418077   27242 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0703 23:05:28.418120   27242 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0703 23:05:28.418126   27242 kubeadm.go:309] 
	I0703 23:05:28.418170   27242 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0703 23:05:28.418175   27242 kubeadm.go:309] 
	I0703 23:05:28.418218   27242 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0703 23:05:28.418224   27242 kubeadm.go:309] 
	I0703 23:05:28.418276   27242 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0703 23:05:28.418364   27242 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0703 23:05:28.418464   27242 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0703 23:05:28.418474   27242 kubeadm.go:309] 
	I0703 23:05:28.418584   27242 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0703 23:05:28.418691   27242 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0703 23:05:28.418700   27242 kubeadm.go:309] 
	I0703 23:05:28.418808   27242 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 12qvkr.qb869phsnq1wz0rf \
	I0703 23:05:28.418931   27242 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 \
	I0703 23:05:28.418963   27242 kubeadm.go:309] 	--control-plane 
	I0703 23:05:28.418970   27242 kubeadm.go:309] 
	I0703 23:05:28.419071   27242 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0703 23:05:28.419080   27242 kubeadm.go:309] 
	I0703 23:05:28.419141   27242 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 12qvkr.qb869phsnq1wz0rf \
	I0703 23:05:28.419289   27242 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 
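
Note: the block above is the standard kubeadm init summary that minikube captures line by line; the bootstrap token it prints (12qvkr.qb869phsnq1wz0rf) is short-lived. As an illustration only (not part of the test run), a fresh worker join command can later be regenerated on the primary node; the sketch below shells out to kubeadm from Go and assumes kubeadm is on PATH on that node. Joining an additional control plane additionally needs a certificate key from "kubeadm init phase upload-certs --upload-certs".

// Illustrative only (not part of the test run): regenerate a worker join
// command on the primary control-plane node once the bootstrap token above
// has expired. Assumes kubeadm is on PATH and the sketch runs on that node.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command").CombinedOutput()
	if err != nil {
		fmt.Println("kubeadm failed:", err, string(out))
		return
	}
	// Prints: kubeadm join <endpoint> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
	fmt.Print(string(out))
}
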
	I0703 23:05:28.419304   27242 cni.go:84] Creating CNI manager for ""
	I0703 23:05:28.419312   27242 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0703 23:05:28.420892   27242 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0703 23:05:28.422220   27242 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0703 23:05:28.428330   27242 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0703 23:05:28.428351   27242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0703 23:05:28.449233   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0703 23:05:28.863177   27242 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0703 23:05:28.863315   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:28.863314   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-856893 minikube.k8s.io/updated_at=2024_07_03T23_05_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e minikube.k8s.io/name=ha-856893 minikube.k8s.io/primary=true
	I0703 23:05:28.927963   27242 ops.go:34] apiserver oom_adj: -16
	I0703 23:05:29.030917   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:29.531769   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:30.031402   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:30.531013   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:31.031167   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:31.531765   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:32.031213   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:32.531657   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:33.031757   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:33.531759   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:34.031901   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:34.531406   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:35.032024   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:35.531604   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:36.031112   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:36.531193   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:37.031109   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:37.531156   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:38.031136   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:38.531321   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:39.031594   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:39.531996   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:40.031087   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0703 23:05:40.157208   27242 kubeadm.go:1107] duration metric: took 11.293952239s to wait for elevateKubeSystemPrivileges
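
Note: the run of "kubectl get sa default" calls above is minikube polling roughly every 500ms until the "default" ServiceAccount exists (the elevateKubeSystemPrivileges step timed at 11.29s). A minimal sketch of that polling pattern follows; the 2-minute timeout is illustrative, the kubeconfig path is the one used in the log.

// Minimal sketch of the poll loop visible above: retry "kubectl get sa default"
// every 500ms until the default ServiceAccount exists or a timeout elapses.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig", "get", "sa", "default")
		if cmd.Run() == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default ServiceAccount")
}
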
	W0703 23:05:40.157241   27242 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0703 23:05:40.157249   27242 kubeadm.go:393] duration metric: took 22.358506374s to StartCluster
	I0703 23:05:40.157267   27242 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:40.157330   27242 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:05:40.157993   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:05:40.158199   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0703 23:05:40.158198   27242 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:05:40.158313   27242 start.go:240] waiting for startup goroutines ...
	I0703 23:05:40.158221   27242 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0703 23:05:40.158334   27242 addons.go:69] Setting storage-provisioner=true in profile "ha-856893"
	I0703 23:05:40.158356   27242 addons.go:234] Setting addon storage-provisioner=true in "ha-856893"
	I0703 23:05:40.158384   27242 host.go:66] Checking if "ha-856893" exists ...
	I0703 23:05:40.158405   27242 addons.go:69] Setting default-storageclass=true in profile "ha-856893"
	I0703 23:05:40.158434   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:05:40.158449   27242 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-856893"
	I0703 23:05:40.158795   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:05:40.158820   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:05:40.158913   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:05:40.158949   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:05:40.173903   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
	I0703 23:05:40.174071   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43005
	I0703 23:05:40.174340   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:05:40.174543   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:05:40.174803   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:05:40.174833   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:05:40.175065   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:05:40.175086   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:05:40.175156   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:05:40.175396   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:05:40.175549   27242 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:05:40.175675   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:05:40.175698   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:05:40.177715   27242 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:05:40.177916   27242 kapi.go:59] client config for ha-856893: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.crt", KeyFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key", CAFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfd900), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
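
Note: the rest.Config dump above shows the client pointed at the HA virtual IP (https://192.168.39.254:8443) and authenticated with the profile's client certificate. A minimal client-go sketch of loading the same kubeconfig and listing nodes follows; only the kubeconfig path is taken from the log, the rest is illustrative.

// Minimal client-go sketch: load the kubeconfig written by minikube (path from
// the log) and list nodes through the HA endpoint it points at.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18998-9396/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes in ha-856893:", len(nodes.Items))
}
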
	I0703 23:05:40.178324   27242 cert_rotation.go:137] Starting client certificate rotation controller
	I0703 23:05:40.178475   27242 addons.go:234] Setting addon default-storageclass=true in "ha-856893"
	I0703 23:05:40.178516   27242 host.go:66] Checking if "ha-856893" exists ...
	I0703 23:05:40.178892   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:05:40.178922   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:05:40.191846   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38455
	I0703 23:05:40.192316   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:05:40.192861   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:05:40.192886   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:05:40.193260   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:05:40.193465   27242 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:05:40.194323   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I0703 23:05:40.194798   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:05:40.195263   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:05:40.195279   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:05:40.195308   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:40.195583   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:05:40.196026   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:05:40.196053   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:05:40.197291   27242 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0703 23:05:40.198820   27242 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0703 23:05:40.198841   27242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0703 23:05:40.198867   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:40.202098   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:40.202535   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:40.202559   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:40.202726   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:40.202940   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:40.203083   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:40.203211   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:05:40.211653   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40577
	I0703 23:05:40.212071   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:05:40.212561   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:05:40.212584   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:05:40.212866   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:05:40.213033   27242 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:05:40.214663   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:05:40.214886   27242 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0703 23:05:40.214899   27242 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0703 23:05:40.214912   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:05:40.217534   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:40.217883   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:05:40.217908   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:05:40.218063   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:05:40.218258   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:05:40.218411   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:05:40.218546   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:05:40.267153   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0703 23:05:40.358079   27242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0703 23:05:40.358732   27242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0703 23:05:40.781574   27242 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
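
Note: the sed pipeline above rewrites CoreDNS's ConfigMap in place, inserting a "log" directive before "errors" and a "hosts" stanza before the "forward . /etc/resolv.conf" plugin so that host.minikube.internal resolves to the host-side gateway 192.168.39.1. Reconstructed from the sed expression (not copied from the live ConfigMap), the patched Corefile server block gains roughly:

        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
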
	I0703 23:05:41.167935   27242 main.go:141] libmachine: Making call to close driver server
	I0703 23:05:41.167961   27242 main.go:141] libmachine: (ha-856893) Calling .Close
	I0703 23:05:41.168003   27242 main.go:141] libmachine: Making call to close driver server
	I0703 23:05:41.168024   27242 main.go:141] libmachine: (ha-856893) Calling .Close
	I0703 23:05:41.168442   27242 main.go:141] libmachine: (ha-856893) DBG | Closing plugin on server side
	I0703 23:05:41.168453   27242 main.go:141] libmachine: Successfully made call to close driver server
	I0703 23:05:41.168444   27242 main.go:141] libmachine: (ha-856893) DBG | Closing plugin on server side
	I0703 23:05:41.168463   27242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 23:05:41.168467   27242 main.go:141] libmachine: Successfully made call to close driver server
	I0703 23:05:41.168491   27242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 23:05:41.168500   27242 main.go:141] libmachine: Making call to close driver server
	I0703 23:05:41.168507   27242 main.go:141] libmachine: (ha-856893) Calling .Close
	I0703 23:05:41.168472   27242 main.go:141] libmachine: Making call to close driver server
	I0703 23:05:41.168551   27242 main.go:141] libmachine: (ha-856893) Calling .Close
	I0703 23:05:41.168750   27242 main.go:141] libmachine: (ha-856893) DBG | Closing plugin on server side
	I0703 23:05:41.168769   27242 main.go:141] libmachine: Successfully made call to close driver server
	I0703 23:05:41.168779   27242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 23:05:41.168794   27242 main.go:141] libmachine: Successfully made call to close driver server
	I0703 23:05:41.168802   27242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 23:05:41.168915   27242 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0703 23:05:41.168924   27242 round_trippers.go:469] Request Headers:
	I0703 23:05:41.168933   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:05:41.168937   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:05:41.179174   27242 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0703 23:05:41.179856   27242 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0703 23:05:41.179872   27242 round_trippers.go:469] Request Headers:
	I0703 23:05:41.179901   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:05:41.179907   27242 round_trippers.go:473]     Content-Type: application/json
	I0703 23:05:41.179911   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:05:41.184900   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:05:41.185231   27242 main.go:141] libmachine: Making call to close driver server
	I0703 23:05:41.185253   27242 main.go:141] libmachine: (ha-856893) Calling .Close
	I0703 23:05:41.185557   27242 main.go:141] libmachine: Successfully made call to close driver server
	I0703 23:05:41.185577   27242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 23:05:41.185585   27242 main.go:141] libmachine: (ha-856893) DBG | Closing plugin on server side
	I0703 23:05:41.187828   27242 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0703 23:05:41.188847   27242 addons.go:510] duration metric: took 1.03063116s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0703 23:05:41.188886   27242 start.go:245] waiting for cluster config update ...
	I0703 23:05:41.188901   27242 start.go:254] writing updated cluster config ...
	I0703 23:05:41.190310   27242 out.go:177] 
	I0703 23:05:41.191599   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:05:41.191664   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:05:41.193011   27242 out.go:177] * Starting "ha-856893-m02" control-plane node in "ha-856893" cluster
	I0703 23:05:41.194050   27242 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:05:41.194075   27242 cache.go:56] Caching tarball of preloaded images
	I0703 23:05:41.194179   27242 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:05:41.194194   27242 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 23:05:41.194269   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:05:41.194484   27242 start.go:360] acquireMachinesLock for ha-856893-m02: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 23:05:41.194535   27242 start.go:364] duration metric: took 29.239µs to acquireMachinesLock for "ha-856893-m02"
	I0703 23:05:41.194552   27242 start.go:93] Provisioning new machine with config: &{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:05:41.194614   27242 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0703 23:05:41.195906   27242 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0703 23:05:41.195988   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:05:41.196019   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:05:41.210406   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0703 23:05:41.210841   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:05:41.211288   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:05:41.211309   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:05:41.211576   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:05:41.211756   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetMachineName
	I0703 23:05:41.211861   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:05:41.212057   27242 start.go:159] libmachine.API.Create for "ha-856893" (driver="kvm2")
	I0703 23:05:41.212087   27242 client.go:168] LocalClient.Create starting
	I0703 23:05:41.212116   27242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem
	I0703 23:05:41.212148   27242 main.go:141] libmachine: Decoding PEM data...
	I0703 23:05:41.212165   27242 main.go:141] libmachine: Parsing certificate...
	I0703 23:05:41.212230   27242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem
	I0703 23:05:41.212264   27242 main.go:141] libmachine: Decoding PEM data...
	I0703 23:05:41.212288   27242 main.go:141] libmachine: Parsing certificate...
	I0703 23:05:41.212315   27242 main.go:141] libmachine: Running pre-create checks...
	I0703 23:05:41.212327   27242 main.go:141] libmachine: (ha-856893-m02) Calling .PreCreateCheck
	I0703 23:05:41.212497   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetConfigRaw
	I0703 23:05:41.212940   27242 main.go:141] libmachine: Creating machine...
	I0703 23:05:41.212958   27242 main.go:141] libmachine: (ha-856893-m02) Calling .Create
	I0703 23:05:41.213096   27242 main.go:141] libmachine: (ha-856893-m02) Creating KVM machine...
	I0703 23:05:41.214567   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found existing default KVM network
	I0703 23:05:41.214736   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found existing private KVM network mk-ha-856893
	I0703 23:05:41.214862   27242 main.go:141] libmachine: (ha-856893-m02) Setting up store path in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02 ...
	I0703 23:05:41.214887   27242 main.go:141] libmachine: (ha-856893-m02) Building disk image from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0703 23:05:41.214947   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:41.214842   27608 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:05:41.215063   27242 main.go:141] libmachine: (ha-856893-m02) Downloading /home/jenkins/minikube-integration/18998-9396/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso...
	I0703 23:05:41.436860   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:41.436749   27608 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa...
	I0703 23:05:41.523744   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:41.523612   27608 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/ha-856893-m02.rawdisk...
	I0703 23:05:41.523793   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Writing magic tar header
	I0703 23:05:41.523828   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Writing SSH key tar header
	I0703 23:05:41.523850   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:41.523749   27608 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02 ...
	I0703 23:05:41.523869   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02
	I0703 23:05:41.523955   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines
	I0703 23:05:41.523978   27242 main.go:141] libmachine: (ha-856893-m02) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02 (perms=drwx------)
	I0703 23:05:41.523990   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:05:41.524009   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396
	I0703 23:05:41.524021   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0703 23:05:41.524031   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home/jenkins
	I0703 23:05:41.524041   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Checking permissions on dir: /home
	I0703 23:05:41.524065   27242 main.go:141] libmachine: (ha-856893-m02) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines (perms=drwxr-xr-x)
	I0703 23:05:41.524084   27242 main.go:141] libmachine: (ha-856893-m02) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube (perms=drwxr-xr-x)
	I0703 23:05:41.524093   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Skipping /home - not owner
	I0703 23:05:41.524132   27242 main.go:141] libmachine: (ha-856893-m02) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396 (perms=drwxrwxr-x)
	I0703 23:05:41.524151   27242 main.go:141] libmachine: (ha-856893-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0703 23:05:41.524184   27242 main.go:141] libmachine: (ha-856893-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0703 23:05:41.524203   27242 main.go:141] libmachine: (ha-856893-m02) Creating domain...
	I0703 23:05:41.525176   27242 main.go:141] libmachine: (ha-856893-m02) define libvirt domain using xml: 
	I0703 23:05:41.525194   27242 main.go:141] libmachine: (ha-856893-m02) <domain type='kvm'>
	I0703 23:05:41.525204   27242 main.go:141] libmachine: (ha-856893-m02)   <name>ha-856893-m02</name>
	I0703 23:05:41.525211   27242 main.go:141] libmachine: (ha-856893-m02)   <memory unit='MiB'>2200</memory>
	I0703 23:05:41.525218   27242 main.go:141] libmachine: (ha-856893-m02)   <vcpu>2</vcpu>
	I0703 23:05:41.525225   27242 main.go:141] libmachine: (ha-856893-m02)   <features>
	I0703 23:05:41.525234   27242 main.go:141] libmachine: (ha-856893-m02)     <acpi/>
	I0703 23:05:41.525250   27242 main.go:141] libmachine: (ha-856893-m02)     <apic/>
	I0703 23:05:41.525262   27242 main.go:141] libmachine: (ha-856893-m02)     <pae/>
	I0703 23:05:41.525274   27242 main.go:141] libmachine: (ha-856893-m02)     
	I0703 23:05:41.525286   27242 main.go:141] libmachine: (ha-856893-m02)   </features>
	I0703 23:05:41.525297   27242 main.go:141] libmachine: (ha-856893-m02)   <cpu mode='host-passthrough'>
	I0703 23:05:41.525308   27242 main.go:141] libmachine: (ha-856893-m02)   
	I0703 23:05:41.525316   27242 main.go:141] libmachine: (ha-856893-m02)   </cpu>
	I0703 23:05:41.525325   27242 main.go:141] libmachine: (ha-856893-m02)   <os>
	I0703 23:05:41.525336   27242 main.go:141] libmachine: (ha-856893-m02)     <type>hvm</type>
	I0703 23:05:41.525356   27242 main.go:141] libmachine: (ha-856893-m02)     <boot dev='cdrom'/>
	I0703 23:05:41.525376   27242 main.go:141] libmachine: (ha-856893-m02)     <boot dev='hd'/>
	I0703 23:05:41.525387   27242 main.go:141] libmachine: (ha-856893-m02)     <bootmenu enable='no'/>
	I0703 23:05:41.525398   27242 main.go:141] libmachine: (ha-856893-m02)   </os>
	I0703 23:05:41.525409   27242 main.go:141] libmachine: (ha-856893-m02)   <devices>
	I0703 23:05:41.525425   27242 main.go:141] libmachine: (ha-856893-m02)     <disk type='file' device='cdrom'>
	I0703 23:05:41.525442   27242 main.go:141] libmachine: (ha-856893-m02)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/boot2docker.iso'/>
	I0703 23:05:41.525453   27242 main.go:141] libmachine: (ha-856893-m02)       <target dev='hdc' bus='scsi'/>
	I0703 23:05:41.525461   27242 main.go:141] libmachine: (ha-856893-m02)       <readonly/>
	I0703 23:05:41.525468   27242 main.go:141] libmachine: (ha-856893-m02)     </disk>
	I0703 23:05:41.525474   27242 main.go:141] libmachine: (ha-856893-m02)     <disk type='file' device='disk'>
	I0703 23:05:41.525481   27242 main.go:141] libmachine: (ha-856893-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0703 23:05:41.525510   27242 main.go:141] libmachine: (ha-856893-m02)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/ha-856893-m02.rawdisk'/>
	I0703 23:05:41.525531   27242 main.go:141] libmachine: (ha-856893-m02)       <target dev='hda' bus='virtio'/>
	I0703 23:05:41.525547   27242 main.go:141] libmachine: (ha-856893-m02)     </disk>
	I0703 23:05:41.525564   27242 main.go:141] libmachine: (ha-856893-m02)     <interface type='network'>
	I0703 23:05:41.525578   27242 main.go:141] libmachine: (ha-856893-m02)       <source network='mk-ha-856893'/>
	I0703 23:05:41.525589   27242 main.go:141] libmachine: (ha-856893-m02)       <model type='virtio'/>
	I0703 23:05:41.525602   27242 main.go:141] libmachine: (ha-856893-m02)     </interface>
	I0703 23:05:41.525613   27242 main.go:141] libmachine: (ha-856893-m02)     <interface type='network'>
	I0703 23:05:41.525639   27242 main.go:141] libmachine: (ha-856893-m02)       <source network='default'/>
	I0703 23:05:41.525649   27242 main.go:141] libmachine: (ha-856893-m02)       <model type='virtio'/>
	I0703 23:05:41.525661   27242 main.go:141] libmachine: (ha-856893-m02)     </interface>
	I0703 23:05:41.525671   27242 main.go:141] libmachine: (ha-856893-m02)     <serial type='pty'>
	I0703 23:05:41.525684   27242 main.go:141] libmachine: (ha-856893-m02)       <target port='0'/>
	I0703 23:05:41.525699   27242 main.go:141] libmachine: (ha-856893-m02)     </serial>
	I0703 23:05:41.525711   27242 main.go:141] libmachine: (ha-856893-m02)     <console type='pty'>
	I0703 23:05:41.525723   27242 main.go:141] libmachine: (ha-856893-m02)       <target type='serial' port='0'/>
	I0703 23:05:41.525733   27242 main.go:141] libmachine: (ha-856893-m02)     </console>
	I0703 23:05:41.525743   27242 main.go:141] libmachine: (ha-856893-m02)     <rng model='virtio'>
	I0703 23:05:41.525757   27242 main.go:141] libmachine: (ha-856893-m02)       <backend model='random'>/dev/random</backend>
	I0703 23:05:41.525778   27242 main.go:141] libmachine: (ha-856893-m02)     </rng>
	I0703 23:05:41.525789   27242 main.go:141] libmachine: (ha-856893-m02)     
	I0703 23:05:41.525797   27242 main.go:141] libmachine: (ha-856893-m02)     
	I0703 23:05:41.525806   27242 main.go:141] libmachine: (ha-856893-m02)   </devices>
	I0703 23:05:41.525815   27242 main.go:141] libmachine: (ha-856893-m02) </domain>
	I0703 23:05:41.525826   27242 main.go:141] libmachine: (ha-856893-m02) 
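
Note: the XML above defines a 2-vCPU, 2200 MiB guest that boots the boot2docker ISO as a CD-ROM, uses the raw disk created earlier, and attaches two virtio NICs (the private mk-ha-856893 network plus the default NAT network), a serial console, and a virtio RNG. A sketch, not minikube's own code, of defining and starting such a guest with the libvirt Go bindings; the file name holding the XML is hypothetical, the connection URI matches KVMQemuURI from the profile.

// Sketch: define and boot a libvirt domain from XML like the one printed above.
package main

import (
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("ha-856893-m02.xml") // hypothetical file holding the XML
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()
	// Create() boots the defined domain; the log then waits for a DHCP lease.
	if err := dom.Create(); err != nil {
		panic(err)
	}
}
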
	I0703 23:05:41.532564   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:87:47:a5 in network default
	I0703 23:05:41.533109   27242 main.go:141] libmachine: (ha-856893-m02) Ensuring networks are active...
	I0703 23:05:41.533130   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:41.533788   27242 main.go:141] libmachine: (ha-856893-m02) Ensuring network default is active
	I0703 23:05:41.534054   27242 main.go:141] libmachine: (ha-856893-m02) Ensuring network mk-ha-856893 is active
	I0703 23:05:41.534401   27242 main.go:141] libmachine: (ha-856893-m02) Getting domain xml...
	I0703 23:05:41.535101   27242 main.go:141] libmachine: (ha-856893-m02) Creating domain...
	I0703 23:05:42.768845   27242 main.go:141] libmachine: (ha-856893-m02) Waiting to get IP...
	I0703 23:05:42.769571   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:42.769959   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:42.770003   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:42.769952   27608 retry.go:31] will retry after 219.708119ms: waiting for machine to come up
	I0703 23:05:42.991437   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:42.991986   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:42.992017   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:42.991932   27608 retry.go:31] will retry after 272.434306ms: waiting for machine to come up
	I0703 23:05:43.266445   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:43.266888   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:43.266916   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:43.266846   27608 retry.go:31] will retry after 435.377928ms: waiting for machine to come up
	I0703 23:05:43.703359   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:43.703810   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:43.703838   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:43.703758   27608 retry.go:31] will retry after 451.040954ms: waiting for machine to come up
	I0703 23:05:44.156129   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:44.156655   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:44.156683   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:44.156609   27608 retry.go:31] will retry after 760.280274ms: waiting for machine to come up
	I0703 23:05:44.918103   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:44.918554   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:44.918579   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:44.918505   27608 retry.go:31] will retry after 698.518733ms: waiting for machine to come up
	I0703 23:05:45.618162   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:45.618587   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:45.618614   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:45.618539   27608 retry.go:31] will retry after 993.528309ms: waiting for machine to come up
	I0703 23:05:46.614158   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:46.614719   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:46.614745   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:46.614678   27608 retry.go:31] will retry after 1.327932051s: waiting for machine to come up
	I0703 23:05:47.944596   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:47.945018   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:47.945045   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:47.944978   27608 retry.go:31] will retry after 1.683564403s: waiting for machine to come up
	I0703 23:05:49.630786   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:49.631090   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:49.631116   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:49.631040   27608 retry.go:31] will retry after 1.84507818s: waiting for machine to come up
	I0703 23:05:51.477398   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:51.477872   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:51.477893   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:51.477839   27608 retry.go:31] will retry after 1.786726505s: waiting for machine to come up
	I0703 23:05:53.266749   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:53.267104   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:53.267133   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:53.267086   27608 retry.go:31] will retry after 3.479688612s: waiting for machine to come up
	I0703 23:05:56.748688   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:05:56.749070   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:05:56.749097   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:05:56.749047   27608 retry.go:31] will retry after 3.495058467s: waiting for machine to come up
	I0703 23:06:00.248588   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:00.249038   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find current IP address of domain ha-856893-m02 in network mk-ha-856893
	I0703 23:06:00.249062   27242 main.go:141] libmachine: (ha-856893-m02) DBG | I0703 23:06:00.248993   27608 retry.go:31] will retry after 4.710071103s: waiting for machine to come up
	I0703 23:06:04.963165   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:04.963558   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has current primary IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:04.963579   27242 main.go:141] libmachine: (ha-856893-m02) Found IP for machine: 192.168.39.157
	I0703 23:06:04.963599   27242 main.go:141] libmachine: (ha-856893-m02) Reserving static IP address...
	I0703 23:06:04.963959   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find host DHCP lease matching {name: "ha-856893-m02", mac: "52:54:00:88:5c:3d", ip: "192.168.39.157"} in network mk-ha-856893
	I0703 23:06:05.043210   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Getting to WaitForSSH function...
	I0703 23:06:05.043242   27242 main.go:141] libmachine: (ha-856893-m02) Reserved static IP address: 192.168.39.157
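
Note: the "Waiting to get IP" retries above end once a DHCP lease for the guest's MAC appears on the mk-ha-856893 network; the lease fields printed in the DBG lines (Iface, ExpiryTime, Mac, IPaddr, Hostname, Clientid) are what libvirt reports for that network. A sketch, assuming the libvirt Go bindings, of reading those leases directly; the MAC and network name are taken from the log.

// Sketch: list DHCP leases on the mk-ha-856893 network and match the guest MAC.
package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	net, err := conn.LookupNetworkByName("mk-ha-856893")
	if err != nil {
		panic(err)
	}
	defer net.Free()
	leases, err := net.GetDHCPLeases()
	if err != nil {
		panic(err)
	}
	for _, l := range leases {
		if l.Mac == "52:54:00:88:5c:3d" { // MAC of ha-856893-m02 from the log
			fmt.Println("ha-856893-m02 has IP", l.IPaddr)
		}
	}
}
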
	I0703 23:06:05.043256   27242 main.go:141] libmachine: (ha-856893-m02) Waiting for SSH to be available...
	I0703 23:06:05.045810   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:05.046139   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893
	I0703 23:06:05.046163   27242 main.go:141] libmachine: (ha-856893-m02) DBG | unable to find defined IP address of network mk-ha-856893 interface with MAC address 52:54:00:88:5c:3d
	I0703 23:06:05.046324   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Using SSH client type: external
	I0703 23:06:05.046345   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa (-rw-------)
	I0703 23:06:05.046421   27242 main.go:141] libmachine: (ha-856893-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 23:06:05.046443   27242 main.go:141] libmachine: (ha-856893-m02) DBG | About to run SSH command:
	I0703 23:06:05.046462   27242 main.go:141] libmachine: (ha-856893-m02) DBG | exit 0
	I0703 23:06:05.050096   27242 main.go:141] libmachine: (ha-856893-m02) DBG | SSH cmd err, output: exit status 255: 
	I0703 23:06:05.050114   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0703 23:06:05.050124   27242 main.go:141] libmachine: (ha-856893-m02) DBG | command : exit 0
	I0703 23:06:05.050131   27242 main.go:141] libmachine: (ha-856893-m02) DBG | err     : exit status 255
	I0703 23:06:05.050140   27242 main.go:141] libmachine: (ha-856893-m02) DBG | output  : 
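
Note: the exit status 255 above is the expected failure while sshd inside the new guest is still booting; minikube simply retries the probe a few seconds later (successfully at 23:06:08). A minimal sketch of the same "exit 0" reachability check; the key path and user@host come from the logged ssh command, while the retry count and 3-second delay are illustrative.

// Sketch: probe SSH by running a no-op remote command until it exits 0.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", "/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa",
		"docker@192.168.39.157",
		"exit", "0",
	}
	for attempt := 1; attempt <= 10; attempt++ {
		if exec.Command("ssh", args...).Run() == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
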
	I0703 23:06:08.051925   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Getting to WaitForSSH function...
	I0703 23:06:08.055727   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.056153   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.056179   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.056333   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Using SSH client type: external
	I0703 23:06:08.056344   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa (-rw-------)
	I0703 23:06:08.056368   27242 main.go:141] libmachine: (ha-856893-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.157 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 23:06:08.056380   27242 main.go:141] libmachine: (ha-856893-m02) DBG | About to run SSH command:
	I0703 23:06:08.056395   27242 main.go:141] libmachine: (ha-856893-m02) DBG | exit 0
	I0703 23:06:08.180086   27242 main.go:141] libmachine: (ha-856893-m02) DBG | SSH cmd err, output: <nil>: 
	I0703 23:06:08.180375   27242 main.go:141] libmachine: (ha-856893-m02) KVM machine creation complete!
	I0703 23:06:08.180680   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetConfigRaw
	I0703 23:06:08.181273   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:08.181472   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:08.181738   27242 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0703 23:06:08.181772   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetState
	I0703 23:06:08.183073   27242 main.go:141] libmachine: Detecting operating system of created instance...
	I0703 23:06:08.183084   27242 main.go:141] libmachine: Waiting for SSH to be available...
	I0703 23:06:08.183090   27242 main.go:141] libmachine: Getting to WaitForSSH function...
	I0703 23:06:08.183097   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.185510   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.185869   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.185885   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.186103   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:08.186258   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.186404   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.186562   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:08.186737   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:06:08.186953   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0703 23:06:08.186971   27242 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0703 23:06:08.287312   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:06:08.287335   27242 main.go:141] libmachine: Detecting the provisioner...
	I0703 23:06:08.287345   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.289859   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.290230   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.290255   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.290391   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:08.290601   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.290826   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.290992   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:08.291192   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:06:08.291400   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0703 23:06:08.291413   27242 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0703 23:06:08.397296   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0703 23:06:08.397352   27242 main.go:141] libmachine: found compatible host: buildroot
	I0703 23:06:08.397358   27242 main.go:141] libmachine: Provisioning with buildroot...
	I0703 23:06:08.397365   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetMachineName
	I0703 23:06:08.397596   27242 buildroot.go:166] provisioning hostname "ha-856893-m02"
	I0703 23:06:08.397609   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetMachineName
	I0703 23:06:08.397805   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.400446   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.400800   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.400824   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.401028   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:08.401213   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.401394   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.401516   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:08.401657   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:06:08.401840   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0703 23:06:08.401855   27242 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-856893-m02 && echo "ha-856893-m02" | sudo tee /etc/hostname
	I0703 23:06:08.520319   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-856893-m02
	
	I0703 23:06:08.520345   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.522961   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.523341   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.523368   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.523587   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:08.523781   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.523977   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.524116   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:08.524312   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:06:08.524466   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0703 23:06:08.524481   27242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-856893-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-856893-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-856893-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 23:06:08.633867   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:06:08.633900   27242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0703 23:06:08.633921   27242 buildroot.go:174] setting up certificates
	I0703 23:06:08.633932   27242 provision.go:84] configureAuth start
	I0703 23:06:08.633945   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetMachineName
	I0703 23:06:08.634242   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetIP
	I0703 23:06:08.637222   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.637606   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.637629   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.637798   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.640510   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.640861   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.640885   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.641040   27242 provision.go:143] copyHostCerts
	I0703 23:06:08.641075   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:06:08.641110   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0703 23:06:08.641119   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:06:08.641188   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0703 23:06:08.641264   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:06:08.641289   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0703 23:06:08.641295   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:06:08.641319   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0703 23:06:08.641363   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:06:08.641379   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0703 23:06:08.641385   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:06:08.641406   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0703 23:06:08.641461   27242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.ha-856893-m02 san=[127.0.0.1 192.168.39.157 ha-856893-m02 localhost minikube]
	I0703 23:06:08.796742   27242 provision.go:177] copyRemoteCerts
	I0703 23:06:08.796795   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:06:08.796849   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.799514   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.799786   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.799814   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.800039   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:08.800233   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.800418   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:08.800539   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa Username:docker}
	I0703 23:06:08.882648   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0703 23:06:08.882725   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0703 23:06:08.909249   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0703 23:06:08.909332   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0703 23:06:08.935044   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0703 23:06:08.935123   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0703 23:06:08.961479   27242 provision.go:87] duration metric: took 327.532705ms to configureAuth
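The configureAuth step above generates a server certificate for ha-856893-m02 and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A minimal sketch for spot-checking the result from a shell on that node (an editor's aside, not part of the test flow; paths are the ones shown in the log):

    # list the TLS material that copyRemoteCerts placed under /etc/docker
    sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
    # confirm the server cert carries the SANs logged above (192.168.39.157, ha-856893-m02, localhost)
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'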
	I0703 23:06:08.961528   27242 buildroot.go:189] setting minikube options for container-runtime
	I0703 23:06:08.961731   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:06:08.961796   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:08.964260   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.964562   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:08.964599   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:08.964761   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:08.964962   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.965132   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:08.965255   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:08.965414   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:06:08.965748   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0703 23:06:08.965776   27242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0703 23:06:09.252115   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0703 23:06:09.252149   27242 main.go:141] libmachine: Checking connection to Docker...
	I0703 23:06:09.252160   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetURL
	I0703 23:06:09.253575   27242 main.go:141] libmachine: (ha-856893-m02) DBG | Using libvirt version 6000000
	I0703 23:06:09.255956   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.256313   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.256339   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.256506   27242 main.go:141] libmachine: Docker is up and running!
	I0703 23:06:09.256517   27242 main.go:141] libmachine: Reticulating splines...
	I0703 23:06:09.256522   27242 client.go:171] duration metric: took 28.044426812s to LocalClient.Create
	I0703 23:06:09.256545   27242 start.go:167] duration metric: took 28.044488456s to libmachine.API.Create "ha-856893"
	I0703 23:06:09.256558   27242 start.go:293] postStartSetup for "ha-856893-m02" (driver="kvm2")
	I0703 23:06:09.256571   27242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 23:06:09.256597   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:09.256867   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 23:06:09.256898   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:09.258897   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.259196   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.259239   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.259356   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:09.259535   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:09.259720   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:09.259905   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa Username:docker}
	I0703 23:06:09.343496   27242 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 23:06:09.347947   27242 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 23:06:09.347969   27242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0703 23:06:09.348034   27242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0703 23:06:09.348116   27242 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0703 23:06:09.348127   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /etc/ssl/certs/165742.pem
	I0703 23:06:09.348228   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0703 23:06:09.358974   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:06:09.386575   27242 start.go:296] duration metric: took 129.995195ms for postStartSetup
	I0703 23:06:09.386638   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetConfigRaw
	I0703 23:06:09.387232   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetIP
	I0703 23:06:09.389784   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.390091   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.390121   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.390365   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:06:09.390569   27242 start.go:128] duration metric: took 28.195940074s to createHost
	I0703 23:06:09.390602   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:09.392949   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.393304   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.393332   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.393472   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:09.393668   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:09.393812   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:09.393960   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:09.394148   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:06:09.394332   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0703 23:06:09.394343   27242 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0703 23:06:09.496753   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720047969.477411010
	
	I0703 23:06:09.496773   27242 fix.go:216] guest clock: 1720047969.477411010
	I0703 23:06:09.496780   27242 fix.go:229] Guest: 2024-07-03 23:06:09.47741101 +0000 UTC Remote: 2024-07-03 23:06:09.39059124 +0000 UTC m=+80.120847171 (delta=86.81977ms)
	I0703 23:06:09.496794   27242 fix.go:200] guest clock delta is within tolerance: 86.81977ms
	I0703 23:06:09.496803   27242 start.go:83] releasing machines lock for "ha-856893-m02", held for 28.302255725s
	I0703 23:06:09.496818   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:09.497106   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetIP
	I0703 23:06:09.499993   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.500377   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.500405   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.502889   27242 out.go:177] * Found network options:
	I0703 23:06:09.504348   27242 out.go:177]   - NO_PROXY=192.168.39.172
	W0703 23:06:09.505618   27242 proxy.go:119] fail to check proxy env: Error ip not in block
	I0703 23:06:09.505646   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:09.506197   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:09.506364   27242 main.go:141] libmachine: (ha-856893-m02) Calling .DriverName
	I0703 23:06:09.506442   27242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 23:06:09.506485   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	W0703 23:06:09.506549   27242 proxy.go:119] fail to check proxy env: Error ip not in block
	I0703 23:06:09.506631   27242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0703 23:06:09.506648   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHHostname
	I0703 23:06:09.509646   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.509683   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.510044   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.510071   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.510094   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:09.510105   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:09.510284   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:09.510625   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHPort
	I0703 23:06:09.510701   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:09.510771   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHKeyPath
	I0703 23:06:09.510887   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:09.510891   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetSSHUsername
	I0703 23:06:09.511011   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa Username:docker}
	I0703 23:06:09.511022   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m02/id_rsa Username:docker}
	I0703 23:06:09.748974   27242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0703 23:06:09.754928   27242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 23:06:09.754991   27242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 23:06:09.773195   27242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
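The step above disables conflicting CNI configs by renaming them rather than deleting them, so they stay recoverable. A hedged, cleaned-up equivalent of the find/mv pattern the provisioner runs (same paths and the .mk_disabled suffix taken from the log):

    # rename bridge/podman CNI configs so CRI-O ignores them, keeping the originals around
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;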
	I0703 23:06:09.773218   27242 start.go:494] detecting cgroup driver to use...
	I0703 23:06:09.773284   27242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 23:06:09.791699   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 23:06:09.808279   27242 docker.go:217] disabling cri-docker service (if available) ...
	I0703 23:06:09.808345   27242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0703 23:06:09.824370   27242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0703 23:06:09.839742   27242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0703 23:06:09.976077   27242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0703 23:06:10.157590   27242 docker.go:233] disabling docker service ...
	I0703 23:06:10.157655   27242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0703 23:06:10.173171   27242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0703 23:06:10.187323   27242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0703 23:06:10.317842   27242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0703 23:06:10.448801   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0703 23:06:10.464012   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 23:06:10.484552   27242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0703 23:06:10.484626   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.495842   27242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0703 23:06:10.495962   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.507047   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.518157   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.529601   27242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 23:06:10.541072   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.552143   27242 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:06:10.570995   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
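The sed edits above all converge on one drop-in, /etc/crio/crio.conf.d/02-crio.conf. Inspecting that file on the guest should show the keys the commands just rewrote (expected values reconstructed from the log, not a dump of the actual file):

    # inspect the keys the provisioner rewrote
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected (approximately):
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",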
	I0703 23:06:10.582051   27242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 23:06:10.592526   27242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0703 23:06:10.592586   27242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0703 23:06:10.607423   27242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0703 23:06:10.617890   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:06:10.738828   27242 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0703 23:06:10.888735   27242 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0703 23:06:10.888797   27242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0703 23:06:10.894395   27242 start.go:562] Will wait 60s for crictl version
	I0703 23:06:10.894461   27242 ssh_runner.go:195] Run: which crictl
	I0703 23:06:10.898671   27242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 23:06:10.940941   27242 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
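The RuntimeName/RuntimeVersion block above is the output of `sudo /usr/bin/crictl version`; the same check can be repeated by hand on the node to confirm CRI-O is serving the CRI socket configured in /etc/crictl.yaml earlier:

    sudo crictl version          # talks to unix:///var/run/crio/crio.sock
    sudo systemctl is-active crio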
	I0703 23:06:10.941015   27242 ssh_runner.go:195] Run: crio --version
	I0703 23:06:10.971313   27242 ssh_runner.go:195] Run: crio --version
	I0703 23:06:11.002905   27242 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0703 23:06:11.004738   27242 out.go:177]   - env NO_PROXY=192.168.39.172
	I0703 23:06:11.006065   27242 main.go:141] libmachine: (ha-856893-m02) Calling .GetIP
	I0703 23:06:11.008543   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:11.008879   27242 main.go:141] libmachine: (ha-856893-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:5c:3d", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:55 +0000 UTC Type:0 Mac:52:54:00:88:5c:3d Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-856893-m02 Clientid:01:52:54:00:88:5c:3d}
	I0703 23:06:11.008909   27242 main.go:141] libmachine: (ha-856893-m02) DBG | domain ha-856893-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:88:5c:3d in network mk-ha-856893
	I0703 23:06:11.009050   27242 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0703 23:06:11.013641   27242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
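The one-liner above is minikube's usual idiom for idempotently pinning an /etc/hosts entry: drop any existing line for the name, append a fresh one, and copy the result back. A quick check on the guest (hedged; just a grep):

    grep 'host.minikube.internal' /etc/hosts   # expect: 192.168.39.1  host.minikube.internal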
	I0703 23:06:11.027727   27242 mustload.go:65] Loading cluster: ha-856893
	I0703 23:06:11.027975   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:06:11.028270   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:06:11.028323   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:06:11.044531   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44795
	I0703 23:06:11.045043   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:06:11.045558   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:06:11.045579   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:06:11.045862   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:06:11.046039   27242 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:06:11.047494   27242 host.go:66] Checking if "ha-856893" exists ...
	I0703 23:06:11.047885   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:06:11.047930   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:06:11.062704   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37955
	I0703 23:06:11.063093   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:06:11.063555   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:06:11.063572   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:06:11.063895   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:06:11.064071   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:06:11.064261   27242 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893 for IP: 192.168.39.157
	I0703 23:06:11.064278   27242 certs.go:194] generating shared ca certs ...
	I0703 23:06:11.064297   27242 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:06:11.064442   27242 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0703 23:06:11.064488   27242 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0703 23:06:11.064502   27242 certs.go:256] generating profile certs ...
	I0703 23:06:11.064611   27242 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key
	I0703 23:06:11.064645   27242 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.a492e42b
	I0703 23:06:11.064664   27242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.a492e42b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.172 192.168.39.157 192.168.39.254]
	I0703 23:06:11.125542   27242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.a492e42b ...
	I0703 23:06:11.125570   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.a492e42b: {Name:mk6b6ba77f2115f78526ecec09853230dd3e53c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:06:11.125732   27242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.a492e42b ...
	I0703 23:06:11.125745   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.a492e42b: {Name:mkf063a91f34b3b9346f6b304c5ea881bd2f5324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:06:11.125812   27242 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.a492e42b -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt
	I0703 23:06:11.125946   27242 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.a492e42b -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key
	I0703 23:06:11.126068   27242 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key
	I0703 23:06:11.126083   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0703 23:06:11.126094   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0703 23:06:11.126107   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0703 23:06:11.126119   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0703 23:06:11.126131   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0703 23:06:11.126143   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0703 23:06:11.126156   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0703 23:06:11.126174   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0703 23:06:11.126219   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0703 23:06:11.126254   27242 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0703 23:06:11.126262   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0703 23:06:11.126284   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0703 23:06:11.126304   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0703 23:06:11.126325   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0703 23:06:11.126365   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:06:11.126389   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /usr/share/ca-certificates/165742.pem
	I0703 23:06:11.126403   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:06:11.126414   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem -> /usr/share/ca-certificates/16574.pem
	I0703 23:06:11.126446   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:06:11.129130   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:06:11.129526   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:06:11.129547   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:06:11.129763   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:06:11.129991   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:06:11.130155   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:06:11.130308   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:06:11.208220   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0703 23:06:11.214445   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0703 23:06:11.227338   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0703 23:06:11.232205   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0703 23:06:11.244770   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0703 23:06:11.249486   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0703 23:06:11.263595   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0703 23:06:11.268404   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0703 23:06:11.280311   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0703 23:06:11.284783   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0703 23:06:11.296982   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0703 23:06:11.301718   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0703 23:06:11.316760   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 23:06:11.344751   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 23:06:11.372405   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 23:06:11.399264   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0703 23:06:11.425913   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0703 23:06:11.453127   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0703 23:06:11.480939   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 23:06:11.507887   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0703 23:06:11.536077   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0703 23:06:11.562896   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 23:06:11.589792   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0703 23:06:11.619857   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0703 23:06:11.638186   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0703 23:06:11.658574   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0703 23:06:11.681046   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0703 23:06:11.699440   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0703 23:06:11.717487   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0703 23:06:11.735967   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
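At this point the cluster PKI for the new control plane has been pushed under /var/lib/minikube/certs. A hedged spot-check that the shared apiserver certificate covers the VIP and both node IPs (values taken from the SAN list logged when the cert was generated):

    # SANs should include 192.168.39.254 (kube-vip VIP), 192.168.39.172 and 192.168.39.157
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'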
	I0703 23:06:11.756625   27242 ssh_runner.go:195] Run: openssl version
	I0703 23:06:11.763174   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0703 23:06:11.777088   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0703 23:06:11.782196   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0703 23:06:11.782262   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0703 23:06:11.789061   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0703 23:06:11.802412   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 23:06:11.815542   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:06:11.820664   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:06:11.820720   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:06:11.827137   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0703 23:06:11.839737   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0703 23:06:11.852655   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0703 23:06:11.857826   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0703 23:06:11.857882   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0703 23:06:11.863859   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
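The `test -L ... || ln -fs ...` commands above follow OpenSSL's c_rehash convention: each CA under /etc/ssl/certs is reachable through a symlink named after its subject hash with a .0 suffix, which is exactly what the preceding `openssl x509 -hash` call printed. The same check, done manually:

    # compute the subject hash and confirm the matching <hash>.0 symlink exists
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"   # should point at minikubeCA.pem (b5213941.0 in this run)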
	I0703 23:06:11.875860   27242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 23:06:11.880842   27242 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0703 23:06:11.880910   27242 kubeadm.go:928] updating node {m02 192.168.39.157 8443 v1.30.2 crio true true} ...
	I0703 23:06:11.880993   27242 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-856893-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0703 23:06:11.881017   27242 kube-vip.go:115] generating kube-vip config ...
	I0703 23:06:11.881059   27242 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0703 23:06:11.901217   27242 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0703 23:06:11.901292   27242 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
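The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down, so kubelet runs it as a static pod and kube-vip announces the 192.168.39.254 control-plane VIP on eth0. A hedged way to confirm the VIP is live once kubelet is up (container name assumed from the manifest):

    # the VIP should appear as a secondary address on eth0 of the elected leader
    ip addr show eth0 | grep 192.168.39.254
    # the static pod should be running under CRI-O
    sudo crictl ps --name kube-vip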
	I0703 23:06:11.901361   27242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0703 23:06:11.912603   27242 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0703 23:06:11.912662   27242 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0703 23:06:11.923700   27242 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0703 23:06:11.923725   27242 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0703 23:06:11.923738   27242 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0703 23:06:11.923750   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0703 23:06:11.923823   27242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0703 23:06:11.930352   27242 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0703 23:06:11.930395   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0703 23:06:18.577968   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0703 23:06:18.578050   27242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0703 23:06:18.584084   27242 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0703 23:06:18.584127   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0703 23:06:24.489268   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:06:24.506069   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0703 23:06:24.506160   27242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0703 23:06:24.510885   27242 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0703 23:06:24.510927   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
	I0703 23:06:24.948564   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0703 23:06:24.961462   27242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0703 23:06:24.980150   27242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 23:06:24.998455   27242 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0703 23:06:25.016528   27242 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0703 23:06:25.020797   27242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:06:25.034283   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:06:25.172768   27242 ssh_runner.go:195] Run: sudo systemctl start kubelet
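With the v1.30.2 binaries transferred and the unit files written, kubelet is reloaded and started. A minimal sanity check on the node, using the same paths as above:

    systemctl is-active kubelet
    /var/lib/minikube/binaries/v1.30.2/kubelet --version     # expect: Kubernetes v1.30.2
    /var/lib/minikube/binaries/v1.30.2/kubeadm version -o short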
	I0703 23:06:25.191293   27242 host.go:66] Checking if "ha-856893" exists ...
	I0703 23:06:25.191893   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:06:25.191940   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:06:25.207801   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38569
	I0703 23:06:25.208291   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:06:25.208871   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:06:25.208895   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:06:25.209219   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:06:25.209391   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:06:25.209509   27242 start.go:316] joinCluster: &{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:06:25.209636   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0703 23:06:25.209656   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:06:25.213110   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:06:25.213539   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:06:25.213572   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:06:25.213846   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:06:25.214062   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:06:25.214220   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:06:25.214382   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:06:25.391200   27242 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:06:25.391247   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bfeyib.89k5hf5p18zb6r7t --discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-856893-m02 --control-plane --apiserver-advertise-address=192.168.39.157 --apiserver-bind-port=8443"
	I0703 23:06:47.544091   27242 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bfeyib.89k5hf5p18zb6r7t --discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-856893-m02 --control-plane --apiserver-advertise-address=192.168.39.157 --apiserver-bind-port=8443": (22.152804646s)
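The two kubeadm invocations above are the core of the join flow: on the existing control plane, `kubeadm token create --print-join-command --ttl=0` emits a ready-made join command, and on the new node that command is replayed with `--control-plane` and an advertise address so m02 joins as a second control-plane member. A minimal local sketch of that flow, assuming kubeadm is on PATH and sufficient privileges (minikube runs the same commands remotely through its ssh_runner):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Step 1 (on an existing control-plane node): generate a join command with a
	// non-expiring token, mirroring "kubeadm token create --print-join-command --ttl=0".
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		log.Fatalf("token create: %v", err)
	}
	joinCmd := strings.TrimSpace(string(out))

	// Step 2 (on the node being added): the printed command is run there, extended
	// with --control-plane and an advertise address, as the log shows for m02.
	// <node-ip> is a placeholder, not a value from this report.
	fmt.Println("run on the new node:", joinCmd, "--control-plane --apiserver-advertise-address=<node-ip>")
}
```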
	I0703 23:06:47.544127   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0703 23:06:48.068945   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-856893-m02 minikube.k8s.io/updated_at=2024_07_03T23_06_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e minikube.k8s.io/name=ha-856893 minikube.k8s.io/primary=false
	I0703 23:06:48.232893   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-856893-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0703 23:06:48.350705   27242 start.go:318] duration metric: took 23.141192018s to joinCluster
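After the join, minikube shells out to kubectl to label the new node and remove the control-plane NoSchedule taint, as shown above. For illustration only, a hedged client-go equivalent of the labeling step, using a JSON merge patch instead of kubectl; the clientset is assumed to be built for this cluster and the label keys are the ones minikube applies:

```go
package hawait // hypothetical helper package for these sketches

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// labelNodeSecondary marks a node as a non-primary minikube node by patching
// its labels, roughly what the kubectl label command above achieves.
func labelNodeSecondary(ctx context.Context, clientset kubernetes.Interface, node string) error {
	patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/name":"ha-856893","minikube.k8s.io/primary":"false"}}}`)
	_, err := clientset.CoreV1().Nodes().Patch(ctx, node, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
```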
	I0703 23:06:48.350794   27242 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:06:48.351091   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:06:48.352341   27242 out.go:177] * Verifying Kubernetes components...
	I0703 23:06:48.353641   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:06:48.588280   27242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:06:48.608838   27242 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:06:48.609120   27242 kapi.go:59] client config for ha-856893: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.crt", KeyFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key", CAFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfd900), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0703 23:06:48.609198   27242 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.172:8443
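The three lines above show the client setup the rest of this verification uses: the kubeconfig written by minikube is loaded, and the stale VIP host (192.168.39.254) is overridden with the reachable control-plane endpoint (192.168.39.172:8443). A minimal client-go sketch of the same idea, using the kubeconfig path and host taken from this log:

```go
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig minikube wrote for this profile.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18998-9396/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of "Overriding stale ClientConfig host ... with https://192.168.39.172:8443".
	config.Host = "https://192.168.39.172:8443"

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("client ready:", clientset != nil)
}
```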
	I0703 23:06:48.609481   27242 node_ready.go:35] waiting up to 6m0s for node "ha-856893-m02" to be "Ready" ...
	I0703 23:06:48.609599   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:48.609611   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:48.609620   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:48.609626   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:48.622593   27242 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0703 23:06:49.109815   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:49.109841   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:49.109851   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:49.109860   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:49.119178   27242 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0703 23:06:49.609829   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:49.609864   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:49.609873   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:49.609877   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:49.613800   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:50.110707   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:50.110728   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:50.110736   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:50.110740   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:50.114001   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:50.609830   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:50.609883   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:50.609896   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:50.609903   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:50.613093   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:50.613625   27242 node_ready.go:53] node "ha-856893-m02" has status "Ready":"False"
	I0703 23:06:51.109898   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:51.109927   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:51.109937   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:51.109943   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:51.113216   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:51.609829   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:51.609854   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:51.609862   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:51.609867   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:51.613350   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:52.110567   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:52.110587   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:52.110594   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:52.110598   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:52.114275   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:52.610448   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:52.610473   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:52.610484   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:52.610490   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:52.613455   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:52.614165   27242 node_ready.go:53] node "ha-856893-m02" has status "Ready":"False"
	I0703 23:06:53.110342   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:53.110372   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:53.110384   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:53.110390   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:53.113932   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:53.610596   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:53.610615   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:53.610624   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:53.610628   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:53.613938   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:54.110534   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:54.110616   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:54.110634   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:54.110642   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:54.114018   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:54.610334   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:54.610351   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:54.610358   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:54.610362   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:54.613905   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:54.614483   27242 node_ready.go:53] node "ha-856893-m02" has status "Ready":"False"
	I0703 23:06:55.109792   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:55.109813   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.109821   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.109824   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.113250   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:55.609747   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:55.609767   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.609777   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.609783   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.612716   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:55.613412   27242 node_ready.go:49] node "ha-856893-m02" has status "Ready":"True"
	I0703 23:06:55.613435   27242 node_ready.go:38] duration metric: took 7.003919204s for node "ha-856893-m02" to be "Ready" ...
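The GET loop above is a plain readiness poll: fetch the node object roughly twice a second until its NodeReady condition reports True (which here took about 7 seconds). A rough client-go equivalent of that loop, assuming a clientset built as in the previous sketch:

```go
package hawait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReady polls a node until its Ready condition is True or the
// timeout expires, mirroring the node_ready wait in the log.
func waitForNodeReady(ctx context.Context, clientset kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		node, err := clientset.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				// The 'has status "Ready":"True"' check seen above.
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("node %q not Ready after %s", name, timeout)
		}
		time.Sleep(500 * time.Millisecond) // the log polls roughly every half second
	}
}
```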
	I0703 23:06:55.613447   27242 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 23:06:55.613534   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:06:55.613547   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.613557   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.613562   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.618175   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:06:55.623904   27242 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n5tdf" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.623988   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-n5tdf
	I0703 23:06:55.623996   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.624003   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.624009   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.627442   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:55.628363   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:06:55.628382   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.628394   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.628402   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.631180   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:55.631700   27242 pod_ready.go:92] pod "coredns-7db6d8ff4d-n5tdf" in "kube-system" namespace has status "Ready":"True"
	I0703 23:06:55.631719   27242 pod_ready.go:81] duration metric: took 7.786492ms for pod "coredns-7db6d8ff4d-n5tdf" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.631728   27242 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pwqfl" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.631796   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-pwqfl
	I0703 23:06:55.631806   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.631815   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.631820   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.635897   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:06:55.636658   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:06:55.636678   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.636687   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.636692   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.639691   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:55.640704   27242 pod_ready.go:92] pod "coredns-7db6d8ff4d-pwqfl" in "kube-system" namespace has status "Ready":"True"
	I0703 23:06:55.640723   27242 pod_ready.go:81] duration metric: took 8.987769ms for pod "coredns-7db6d8ff4d-pwqfl" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.640734   27242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.640789   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893
	I0703 23:06:55.640797   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.640803   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.640807   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.643359   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:55.643907   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:06:55.643924   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.643932   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.643936   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.646899   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:55.647968   27242 pod_ready.go:92] pod "etcd-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:06:55.647991   27242 pod_ready.go:81] duration metric: took 7.249953ms for pod "etcd-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.648004   27242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:55.648071   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:55.648085   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.648095   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.648101   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.650814   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:55.651459   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:55.651474   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:55.651486   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:55.651490   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:55.653793   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:56.148491   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:56.148513   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:56.148521   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:56.148525   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:56.152385   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:56.153042   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:56.153060   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:56.153067   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:56.153071   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:56.157627   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:06:56.649122   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:56.649140   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:56.649146   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:56.649149   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:56.652526   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:56.653306   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:56.653320   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:56.653327   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:56.653331   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:56.655979   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:57.149064   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:57.149092   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:57.149101   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:57.149106   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:57.152417   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:57.153222   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:57.153241   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:57.153249   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:57.153254   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:57.156135   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:57.649140   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:57.649181   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:57.649192   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:57.649198   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:57.652477   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:57.653084   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:57.653100   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:57.653106   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:57.653111   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:57.655555   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:57.656210   27242 pod_ready.go:102] pod "etcd-ha-856893-m02" in "kube-system" namespace has status "Ready":"False"
	I0703 23:06:58.148254   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:58.148274   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:58.148282   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:58.148286   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:58.152590   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:06:58.153465   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:58.153480   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:58.153488   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:58.153495   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:58.156588   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:58.648596   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:58.648622   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:58.648633   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:58.648639   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:58.651552   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:58.652309   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:58.652326   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:58.652333   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:58.652338   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:58.654822   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:59.148789   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:59.148811   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.148820   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.148824   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.152583   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:59.153376   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:59.153394   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.153401   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.153406   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.156325   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:59.648919   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:06:59.648945   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.648956   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.648963   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.652540   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:06:59.653454   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:59.653476   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.653487   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.653508   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.658095   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:06:59.658913   27242 pod_ready.go:92] pod "etcd-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:06:59.658934   27242 pod_ready.go:81] duration metric: took 4.010920952s for pod "etcd-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:59.658949   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:59.659006   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893
	I0703 23:06:59.659016   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.659027   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.659036   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.661826   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:59.662571   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:06:59.662588   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.662595   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.662598   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.665446   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:59.665948   27242 pod_ready.go:92] pod "kube-apiserver-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:06:59.665968   27242 pod_ready.go:81] duration metric: took 7.012702ms for pod "kube-apiserver-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:59.665978   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:06:59.666039   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:06:59.666046   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.666053   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.666056   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.668927   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:06:59.669628   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:06:59.669644   27242 round_trippers.go:469] Request Headers:
	I0703 23:06:59.669651   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:06:59.669656   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:06:59.672172   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:00.167115   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:07:00.167140   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:00.167150   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:00.167156   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:00.170205   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:00.170996   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:00.171017   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:00.171029   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:00.171039   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:00.173937   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:00.666560   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:07:00.666581   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:00.666591   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:00.666598   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:00.685399   27242 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0703 23:07:00.686013   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:00.686031   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:00.686039   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:00.686044   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:00.694695   27242 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0703 23:07:01.166491   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:07:01.166515   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:01.166524   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:01.166529   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:01.170037   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:01.170694   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:01.170710   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:01.170717   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:01.170722   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:01.173354   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:01.666570   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:07:01.666592   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:01.666600   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:01.666603   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:01.670182   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:01.670960   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:01.670972   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:01.670980   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:01.670984   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:01.673678   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:01.674253   27242 pod_ready.go:102] pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace has status "Ready":"False"
	I0703 23:07:02.166192   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:07:02.166222   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.166234   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.166241   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.169265   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:02.170194   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:02.170209   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.170217   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.170220   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.173318   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:02.173900   27242 pod_ready.go:92] pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:02.173921   27242 pod_ready.go:81] duration metric: took 2.507930848s for pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:02.173934   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:02.173990   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893
	I0703 23:07:02.173999   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.174007   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.174011   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.177819   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:02.178515   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:07:02.178531   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.178539   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.178542   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.181392   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:02.181852   27242 pod_ready.go:92] pod "kube-controller-manager-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:02.181870   27242 pod_ready.go:81] duration metric: took 7.929988ms for pod "kube-controller-manager-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:02.181879   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:02.210176   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:07:02.210204   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.210225   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.210231   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.216238   27242 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0703 23:07:02.410326   27242 request.go:629] Waited for 193.332004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:02.410396   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:02.410402   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.410409   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.410414   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.414343   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:02.682063   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:07:02.682086   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.682094   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.682099   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.685969   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:02.809842   27242 request.go:629] Waited for 123.198326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:02.809919   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:02.809924   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:02.809931   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:02.809935   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:02.813615   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:03.182561   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:07:03.182583   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:03.182591   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:03.182595   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:03.185818   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:03.210189   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:03.210213   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:03.210226   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:03.210231   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:03.212835   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:03.682870   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:07:03.682893   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:03.682904   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:03.682913   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:03.687007   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:07:03.687982   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:03.688000   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:03.688007   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:03.688010   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:03.690789   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:04.182980   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:07:04.183005   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:04.183012   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:04.183015   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:04.187120   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:07:04.187803   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:04.187820   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:04.187827   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:04.187832   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:04.190585   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:04.191265   27242 pod_ready.go:102] pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace has status "Ready":"False"
	I0703 23:07:04.682068   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:07:04.682093   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:04.682101   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:04.682105   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:04.685315   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:04.686021   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:04.686042   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:04.686051   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:04.686060   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:04.689699   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:04.690333   27242 pod_ready.go:92] pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:04.690354   27242 pod_ready.go:81] duration metric: took 2.508468638s for pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:04.690363   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-52zqj" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:04.690415   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52zqj
	I0703 23:07:04.690423   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:04.690429   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:04.690433   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:04.693270   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:04.810198   27242 request.go:629] Waited for 116.3003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:07:04.810277   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:07:04.810287   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:04.810297   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:04.810306   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:04.813548   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:04.814288   27242 pod_ready.go:92] pod "kube-proxy-52zqj" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:04.814310   27242 pod_ready.go:81] duration metric: took 123.940721ms for pod "kube-proxy-52zqj" in "kube-system" namespace to be "Ready" ...
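The "Waited for ... due to client-side throttling, not priority and fairness" lines that start appearing here come from client-go's built-in rate limiter: the rest.Config dumped earlier has QPS:0 and Burst:0, so the client falls back to its conservative defaults and delays requests when the polling gets chatty. Not something this test changes, but for reference, a hedged sketch of where those knobs live on a config built as in the earlier sketch:

```go
package hawait

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newFasterClient raises the client-side rate limits before building a clientset,
// which would suppress the throttling waits seen in this log.
func newFasterClient(config *rest.Config) (*kubernetes.Clientset, error) {
	config.QPS = 50    // sustained requests per second allowed by the client
	config.Burst = 100 // burst size before client-side throttling kicks in
	return kubernetes.NewForConfig(config)
}
```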
	I0703 23:07:04.814321   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gkwrn" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:05.009731   27242 request.go:629] Waited for 195.334691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkwrn
	I0703 23:07:05.009801   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkwrn
	I0703 23:07:05.009812   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:05.009823   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:05.009831   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:05.013135   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:05.209785   27242 request.go:629] Waited for 196.045433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:05.209863   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:05.209876   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:05.209888   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:05.209896   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:05.213369   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:05.213938   27242 pod_ready.go:92] pod "kube-proxy-gkwrn" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:05.213964   27242 pod_ready.go:81] duration metric: took 399.631019ms for pod "kube-proxy-gkwrn" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:05.213978   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:05.410292   27242 request.go:629] Waited for 196.24208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893
	I0703 23:07:05.410371   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893
	I0703 23:07:05.410382   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:05.410392   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:05.410398   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:05.413436   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:05.610477   27242 request.go:629] Waited for 196.362666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:07:05.610529   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:07:05.610542   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:05.610550   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:05.610554   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:05.613467   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:05.613972   27242 pod_ready.go:92] pod "kube-scheduler-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:05.613988   27242 pod_ready.go:81] duration metric: took 399.999359ms for pod "kube-scheduler-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:05.613996   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:05.810106   27242 request.go:629] Waited for 196.052695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893-m02
	I0703 23:07:05.810176   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893-m02
	I0703 23:07:05.810185   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:05.810209   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:05.810232   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:05.813771   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:06.009910   27242 request.go:629] Waited for 195.274604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:06.009982   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:07:06.009992   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:06.010002   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:06.010010   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:06.013701   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:06.014446   27242 pod_ready.go:92] pod "kube-scheduler-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:07:06.014463   27242 pod_ready.go:81] duration metric: took 400.459709ms for pod "kube-scheduler-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:07:06.014476   27242 pod_ready.go:38] duration metric: took 10.401015204s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
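Each pod_ready block above follows the same pattern as the node wait: fetch the pod from kube-system, then check its PodReady condition (with the owning node fetched alongside it). A sketch of that per-pod check, assuming the same clientset as the earlier sketches:

```go
package hawait

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether a kube-system pod's Ready condition is True,
// the check behind the 'has status "Ready":"True"' pod_ready lines.
func podIsReady(ctx context.Context, clientset kubernetes.Interface, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
```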
	I0703 23:07:06.014493   27242 api_server.go:52] waiting for apiserver process to appear ...
	I0703 23:07:06.014549   27242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:07:06.030327   27242 api_server.go:72] duration metric: took 17.679497097s to wait for apiserver process to appear ...
	I0703 23:07:06.030347   27242 api_server.go:88] waiting for apiserver healthz status ...
	I0703 23:07:06.030365   27242 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0703 23:07:06.036783   27242 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I0703 23:07:06.036854   27242 round_trippers.go:463] GET https://192.168.39.172:8443/version
	I0703 23:07:06.036859   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:06.036867   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:06.036872   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:06.037690   27242 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0703 23:07:06.037801   27242 api_server.go:141] control plane version: v1.30.2
	I0703 23:07:06.037818   27242 api_server.go:131] duration metric: took 7.465872ms to wait for apiserver health ...
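The apiserver health step above is two raw calls: GET /healthz (expecting the literal body "ok") followed by GET /version to read the control-plane version. A small sketch of the same probe through client-go's discovery REST client, under the same clientset assumption as before:

```go
package hawait

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// checkAPIServer hits /healthz and then reads the server version,
// mirroring the healthz and version requests in the log.
func checkAPIServer(ctx context.Context, clientset kubernetes.Interface) error {
	body, err := clientset.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return err
	}
	fmt.Printf("healthz: %s\n", body) // expect "ok"

	ver, err := clientset.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Println("control plane version:", ver.GitVersion) // e.g. v1.30.2
	return nil
}
```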
	I0703 23:07:06.037825   27242 system_pods.go:43] waiting for kube-system pods to appear ...
	I0703 23:07:06.209877   27242 request.go:629] Waited for 171.974222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:07:06.210016   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:07:06.210032   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:06.210040   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:06.210046   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:06.214918   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:07:06.219567   27242 system_pods.go:59] 17 kube-system pods found
	I0703 23:07:06.219598   27242 system_pods.go:61] "coredns-7db6d8ff4d-n5tdf" [8efbbc3c-e2d5-4f13-8672-cf7524f72e2d] Running
	I0703 23:07:06.219602   27242 system_pods.go:61] "coredns-7db6d8ff4d-pwqfl" [b4d22edf-e718-4755-b211-c8279481005e] Running
	I0703 23:07:06.219607   27242 system_pods.go:61] "etcd-ha-856893" [6b7ae8d7-b953-4a2a-8745-d672d5ef800d] Running
	I0703 23:07:06.219610   27242 system_pods.go:61] "etcd-ha-856893-m02" [c84132ba-b9c8-491c-a50d-ebe97039cb4a] Running
	I0703 23:07:06.219614   27242 system_pods.go:61] "kindnet-h7ntk" [18e6d992-2713-4399-a160-5f9196981f26] Running
	I0703 23:07:06.219617   27242 system_pods.go:61] "kindnet-rwqsq" [438780e1-32f3-4b9f-a85e-b6a9eeac268f] Running
	I0703 23:07:06.219620   27242 system_pods.go:61] "kube-apiserver-ha-856893" [1818612d-9082-49ba-863e-25d530ad2893] Running
	I0703 23:07:06.219623   27242 system_pods.go:61] "kube-apiserver-ha-856893-m02" [09914e18-0b79-4722-9b14-a4b70d5ad800] Running
	I0703 23:07:06.219628   27242 system_pods.go:61] "kube-controller-manager-ha-856893" [4906346a-6b42-4c32-9ea0-8cd61b06580b] Running
	I0703 23:07:06.219637   27242 system_pods.go:61] "kube-controller-manager-ha-856893-m02" [e90cd411-c185-4652-8e21-6fd48fb2b5d1] Running
	I0703 23:07:06.219643   27242 system_pods.go:61] "kube-proxy-52zqj" [7cbc16d2-e9f6-487f-a974-0fa21e4163b5] Running
	I0703 23:07:06.219648   27242 system_pods.go:61] "kube-proxy-gkwrn" [5fefb775-6224-47be-ad73-443c957c8f69] Running
	I0703 23:07:06.219658   27242 system_pods.go:61] "kube-scheduler-ha-856893" [a3f43eb4-e248-41a0-b86c-0564becadc2b] Running
	I0703 23:07:06.219664   27242 system_pods.go:61] "kube-scheduler-ha-856893-m02" [02f862e9-2b93-4f74-847f-32d753dd9456] Running
	I0703 23:07:06.219669   27242 system_pods.go:61] "kube-vip-ha-856893" [0c4a20fd-99f2-4d6a-a332-2a79e4431b88] Running
	I0703 23:07:06.219676   27242 system_pods.go:61] "kube-vip-ha-856893-m02" [560fb438-21bf-45bb-9cf0-38dd21b61c80] Running
	I0703 23:07:06.219682   27242 system_pods.go:61] "storage-provisioner" [91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8] Running
	I0703 23:07:06.219693   27242 system_pods.go:74] duration metric: took 181.861646ms to wait for pod list to return data ...
	I0703 23:07:06.219700   27242 default_sa.go:34] waiting for default service account to be created ...
	I0703 23:07:06.410182   27242 request.go:629] Waited for 190.397554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/default/serviceaccounts
	I0703 23:07:06.410264   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/default/serviceaccounts
	I0703 23:07:06.410274   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:06.410285   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:06.410289   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:06.413289   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:07:06.413480   27242 default_sa.go:45] found service account: "default"
	I0703 23:07:06.413495   27242 default_sa.go:55] duration metric: took 193.786983ms for default service account to be created ...
	I0703 23:07:06.413503   27242 system_pods.go:116] waiting for k8s-apps to be running ...
	I0703 23:07:06.609837   27242 request.go:629] Waited for 196.27709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:07:06.609895   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:07:06.609901   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:06.609908   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:06.609912   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:06.614868   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:07:06.619343   27242 system_pods.go:86] 17 kube-system pods found
	I0703 23:07:06.619371   27242 system_pods.go:89] "coredns-7db6d8ff4d-n5tdf" [8efbbc3c-e2d5-4f13-8672-cf7524f72e2d] Running
	I0703 23:07:06.619376   27242 system_pods.go:89] "coredns-7db6d8ff4d-pwqfl" [b4d22edf-e718-4755-b211-c8279481005e] Running
	I0703 23:07:06.619380   27242 system_pods.go:89] "etcd-ha-856893" [6b7ae8d7-b953-4a2a-8745-d672d5ef800d] Running
	I0703 23:07:06.619384   27242 system_pods.go:89] "etcd-ha-856893-m02" [c84132ba-b9c8-491c-a50d-ebe97039cb4a] Running
	I0703 23:07:06.619388   27242 system_pods.go:89] "kindnet-h7ntk" [18e6d992-2713-4399-a160-5f9196981f26] Running
	I0703 23:07:06.619392   27242 system_pods.go:89] "kindnet-rwqsq" [438780e1-32f3-4b9f-a85e-b6a9eeac268f] Running
	I0703 23:07:06.619395   27242 system_pods.go:89] "kube-apiserver-ha-856893" [1818612d-9082-49ba-863e-25d530ad2893] Running
	I0703 23:07:06.619400   27242 system_pods.go:89] "kube-apiserver-ha-856893-m02" [09914e18-0b79-4722-9b14-a4b70d5ad800] Running
	I0703 23:07:06.619404   27242 system_pods.go:89] "kube-controller-manager-ha-856893" [4906346a-6b42-4c32-9ea0-8cd61b06580b] Running
	I0703 23:07:06.619408   27242 system_pods.go:89] "kube-controller-manager-ha-856893-m02" [e90cd411-c185-4652-8e21-6fd48fb2b5d1] Running
	I0703 23:07:06.619412   27242 system_pods.go:89] "kube-proxy-52zqj" [7cbc16d2-e9f6-487f-a974-0fa21e4163b5] Running
	I0703 23:07:06.619416   27242 system_pods.go:89] "kube-proxy-gkwrn" [5fefb775-6224-47be-ad73-443c957c8f69] Running
	I0703 23:07:06.619420   27242 system_pods.go:89] "kube-scheduler-ha-856893" [a3f43eb4-e248-41a0-b86c-0564becadc2b] Running
	I0703 23:07:06.619424   27242 system_pods.go:89] "kube-scheduler-ha-856893-m02" [02f862e9-2b93-4f74-847f-32d753dd9456] Running
	I0703 23:07:06.619428   27242 system_pods.go:89] "kube-vip-ha-856893" [0c4a20fd-99f2-4d6a-a332-2a79e4431b88] Running
	I0703 23:07:06.619433   27242 system_pods.go:89] "kube-vip-ha-856893-m02" [560fb438-21bf-45bb-9cf0-38dd21b61c80] Running
	I0703 23:07:06.619437   27242 system_pods.go:89] "storage-provisioner" [91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8] Running
	I0703 23:07:06.619444   27242 system_pods.go:126] duration metric: took 205.937561ms to wait for k8s-apps to be running ...
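	[editor's note] The two pod listings above both hit /api/v1/namespaces/kube-system/pods and confirm every pod reports the Running phase. A rough sketch of the same check against the raw API in Go, without client-go; authentication is omitted and the response struct is trimmed to only the fields this check needs:

	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"net/http"
	)

	// podList mirrors just the parts of the Kubernetes PodList response used here.
	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase string `json:"phase"`
			} `json:"status"`
		} `json:"items"`
	}

	// allPodsRunning fetches kube-system pods and reports any that are not
	// Running. Against a real apiserver a bearer token or client cert is required.
	func allPodsRunning(apiServer string) (bool, error) {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get(apiServer + "/api/v1/namespaces/kube-system/pods")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return false, fmt.Errorf("unexpected status %s", resp.Status)
		}
		var pods podList
		if err := json.NewDecoder(resp.Body).Decode(&pods); err != nil {
			return false, err
		}
		ok := true
		for _, p := range pods.Items {
			if p.Status.Phase != "Running" {
				fmt.Printf("pod %s is %s\n", p.Metadata.Name, p.Status.Phase)
				ok = false
			}
		}
		return ok, nil
	}

	func main() {
		ok, err := allPodsRunning("https://192.168.39.172:8443")
		fmt.Println(ok, err)
	}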
	I0703 23:07:06.619453   27242 system_svc.go:44] waiting for kubelet service to be running ....
	I0703 23:07:06.619502   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:07:06.636194   27242 system_svc.go:56] duration metric: took 16.729677ms WaitForService to wait for kubelet
	I0703 23:07:06.636223   27242 kubeadm.go:576] duration metric: took 18.285397296s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 23:07:06.636240   27242 node_conditions.go:102] verifying NodePressure condition ...
	I0703 23:07:06.810678   27242 request.go:629] Waited for 174.367698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes
	I0703 23:07:06.810751   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes
	I0703 23:07:06.810759   27242 round_trippers.go:469] Request Headers:
	I0703 23:07:06.810766   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:07:06.810773   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:07:06.814396   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:07:06.815321   27242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 23:07:06.815347   27242 node_conditions.go:123] node cpu capacity is 2
	I0703 23:07:06.815358   27242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 23:07:06.815361   27242 node_conditions.go:123] node cpu capacity is 2
	I0703 23:07:06.815365   27242 node_conditions.go:105] duration metric: took 179.120869ms to run NodePressure ...
	I0703 23:07:06.815375   27242 start.go:240] waiting for startup goroutines ...
	I0703 23:07:06.815405   27242 start.go:254] writing updated cluster config ...
	I0703 23:07:06.817467   27242 out.go:177] 
	I0703 23:07:06.818836   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:07:06.818926   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:07:06.820500   27242 out.go:177] * Starting "ha-856893-m03" control-plane node in "ha-856893" cluster
	I0703 23:07:06.821716   27242 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:07:06.821732   27242 cache.go:56] Caching tarball of preloaded images
	I0703 23:07:06.821877   27242 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:07:06.821891   27242 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 23:07:06.821981   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:07:06.822155   27242 start.go:360] acquireMachinesLock for ha-856893-m03: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 23:07:06.822195   27242 start.go:364] duration metric: took 22.144µs to acquireMachinesLock for "ha-856893-m03"
	I0703 23:07:06.822209   27242 start.go:93] Provisioning new machine with config: &{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:07:06.822295   27242 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0703 23:07:06.823658   27242 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0703 23:07:06.823727   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:07:06.823756   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:07:06.838452   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44291
	I0703 23:07:06.838936   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:07:06.839363   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:07:06.839383   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:07:06.839736   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:07:06.839918   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetMachineName
	I0703 23:07:06.840069   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:06.840226   27242 start.go:159] libmachine.API.Create for "ha-856893" (driver="kvm2")
	I0703 23:07:06.840254   27242 client.go:168] LocalClient.Create starting
	I0703 23:07:06.840290   27242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem
	I0703 23:07:06.840327   27242 main.go:141] libmachine: Decoding PEM data...
	I0703 23:07:06.840346   27242 main.go:141] libmachine: Parsing certificate...
	I0703 23:07:06.840410   27242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem
	I0703 23:07:06.840432   27242 main.go:141] libmachine: Decoding PEM data...
	I0703 23:07:06.840449   27242 main.go:141] libmachine: Parsing certificate...
	I0703 23:07:06.840474   27242 main.go:141] libmachine: Running pre-create checks...
	I0703 23:07:06.840485   27242 main.go:141] libmachine: (ha-856893-m03) Calling .PreCreateCheck
	I0703 23:07:06.840643   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetConfigRaw
	I0703 23:07:06.841024   27242 main.go:141] libmachine: Creating machine...
	I0703 23:07:06.841038   27242 main.go:141] libmachine: (ha-856893-m03) Calling .Create
	I0703 23:07:06.841188   27242 main.go:141] libmachine: (ha-856893-m03) Creating KVM machine...
	I0703 23:07:06.842688   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found existing default KVM network
	I0703 23:07:06.842868   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found existing private KVM network mk-ha-856893
	I0703 23:07:06.843022   27242 main.go:141] libmachine: (ha-856893-m03) Setting up store path in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03 ...
	I0703 23:07:06.843049   27242 main.go:141] libmachine: (ha-856893-m03) Building disk image from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0703 23:07:06.843102   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:06.842997   28071 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:07:06.843189   27242 main.go:141] libmachine: (ha-856893-m03) Downloading /home/jenkins/minikube-integration/18998-9396/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso...
	I0703 23:07:07.067762   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:07.067633   28071 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa...
	I0703 23:07:07.216110   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:07.215993   28071 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/ha-856893-m03.rawdisk...
	I0703 23:07:07.216138   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Writing magic tar header
	I0703 23:07:07.216158   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Writing SSH key tar header
	I0703 23:07:07.216172   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:07.216113   28071 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03 ...
	I0703 23:07:07.216256   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03
	I0703 23:07:07.216285   27242 main.go:141] libmachine: (ha-856893-m03) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03 (perms=drwx------)
	I0703 23:07:07.216298   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines
	I0703 23:07:07.216313   27242 main.go:141] libmachine: (ha-856893-m03) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines (perms=drwxr-xr-x)
	I0703 23:07:07.216337   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:07:07.216352   27242 main.go:141] libmachine: (ha-856893-m03) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube (perms=drwxr-xr-x)
	I0703 23:07:07.216366   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396
	I0703 23:07:07.216383   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0703 23:07:07.216405   27242 main.go:141] libmachine: (ha-856893-m03) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396 (perms=drwxrwxr-x)
	I0703 23:07:07.216424   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home/jenkins
	I0703 23:07:07.216451   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Checking permissions on dir: /home
	I0703 23:07:07.216463   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Skipping /home - not owner
	I0703 23:07:07.216477   27242 main.go:141] libmachine: (ha-856893-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0703 23:07:07.216497   27242 main.go:141] libmachine: (ha-856893-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0703 23:07:07.216508   27242 main.go:141] libmachine: (ha-856893-m03) Creating domain...
	I0703 23:07:07.217338   27242 main.go:141] libmachine: (ha-856893-m03) define libvirt domain using xml: 
	I0703 23:07:07.217359   27242 main.go:141] libmachine: (ha-856893-m03) <domain type='kvm'>
	I0703 23:07:07.217366   27242 main.go:141] libmachine: (ha-856893-m03)   <name>ha-856893-m03</name>
	I0703 23:07:07.217375   27242 main.go:141] libmachine: (ha-856893-m03)   <memory unit='MiB'>2200</memory>
	I0703 23:07:07.217404   27242 main.go:141] libmachine: (ha-856893-m03)   <vcpu>2</vcpu>
	I0703 23:07:07.217426   27242 main.go:141] libmachine: (ha-856893-m03)   <features>
	I0703 23:07:07.217439   27242 main.go:141] libmachine: (ha-856893-m03)     <acpi/>
	I0703 23:07:07.217450   27242 main.go:141] libmachine: (ha-856893-m03)     <apic/>
	I0703 23:07:07.217460   27242 main.go:141] libmachine: (ha-856893-m03)     <pae/>
	I0703 23:07:07.217471   27242 main.go:141] libmachine: (ha-856893-m03)     
	I0703 23:07:07.217482   27242 main.go:141] libmachine: (ha-856893-m03)   </features>
	I0703 23:07:07.217492   27242 main.go:141] libmachine: (ha-856893-m03)   <cpu mode='host-passthrough'>
	I0703 23:07:07.217510   27242 main.go:141] libmachine: (ha-856893-m03)   
	I0703 23:07:07.217527   27242 main.go:141] libmachine: (ha-856893-m03)   </cpu>
	I0703 23:07:07.217543   27242 main.go:141] libmachine: (ha-856893-m03)   <os>
	I0703 23:07:07.217559   27242 main.go:141] libmachine: (ha-856893-m03)     <type>hvm</type>
	I0703 23:07:07.217570   27242 main.go:141] libmachine: (ha-856893-m03)     <boot dev='cdrom'/>
	I0703 23:07:07.217575   27242 main.go:141] libmachine: (ha-856893-m03)     <boot dev='hd'/>
	I0703 23:07:07.217583   27242 main.go:141] libmachine: (ha-856893-m03)     <bootmenu enable='no'/>
	I0703 23:07:07.217591   27242 main.go:141] libmachine: (ha-856893-m03)   </os>
	I0703 23:07:07.217599   27242 main.go:141] libmachine: (ha-856893-m03)   <devices>
	I0703 23:07:07.217604   27242 main.go:141] libmachine: (ha-856893-m03)     <disk type='file' device='cdrom'>
	I0703 23:07:07.217614   27242 main.go:141] libmachine: (ha-856893-m03)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/boot2docker.iso'/>
	I0703 23:07:07.217621   27242 main.go:141] libmachine: (ha-856893-m03)       <target dev='hdc' bus='scsi'/>
	I0703 23:07:07.217635   27242 main.go:141] libmachine: (ha-856893-m03)       <readonly/>
	I0703 23:07:07.217651   27242 main.go:141] libmachine: (ha-856893-m03)     </disk>
	I0703 23:07:07.217665   27242 main.go:141] libmachine: (ha-856893-m03)     <disk type='file' device='disk'>
	I0703 23:07:07.217676   27242 main.go:141] libmachine: (ha-856893-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0703 23:07:07.217694   27242 main.go:141] libmachine: (ha-856893-m03)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/ha-856893-m03.rawdisk'/>
	I0703 23:07:07.217706   27242 main.go:141] libmachine: (ha-856893-m03)       <target dev='hda' bus='virtio'/>
	I0703 23:07:07.217718   27242 main.go:141] libmachine: (ha-856893-m03)     </disk>
	I0703 23:07:07.217733   27242 main.go:141] libmachine: (ha-856893-m03)     <interface type='network'>
	I0703 23:07:07.217747   27242 main.go:141] libmachine: (ha-856893-m03)       <source network='mk-ha-856893'/>
	I0703 23:07:07.217757   27242 main.go:141] libmachine: (ha-856893-m03)       <model type='virtio'/>
	I0703 23:07:07.217767   27242 main.go:141] libmachine: (ha-856893-m03)     </interface>
	I0703 23:07:07.217778   27242 main.go:141] libmachine: (ha-856893-m03)     <interface type='network'>
	I0703 23:07:07.217804   27242 main.go:141] libmachine: (ha-856893-m03)       <source network='default'/>
	I0703 23:07:07.217821   27242 main.go:141] libmachine: (ha-856893-m03)       <model type='virtio'/>
	I0703 23:07:07.217830   27242 main.go:141] libmachine: (ha-856893-m03)     </interface>
	I0703 23:07:07.217837   27242 main.go:141] libmachine: (ha-856893-m03)     <serial type='pty'>
	I0703 23:07:07.217844   27242 main.go:141] libmachine: (ha-856893-m03)       <target port='0'/>
	I0703 23:07:07.217853   27242 main.go:141] libmachine: (ha-856893-m03)     </serial>
	I0703 23:07:07.217862   27242 main.go:141] libmachine: (ha-856893-m03)     <console type='pty'>
	I0703 23:07:07.217873   27242 main.go:141] libmachine: (ha-856893-m03)       <target type='serial' port='0'/>
	I0703 23:07:07.217883   27242 main.go:141] libmachine: (ha-856893-m03)     </console>
	I0703 23:07:07.217893   27242 main.go:141] libmachine: (ha-856893-m03)     <rng model='virtio'>
	I0703 23:07:07.217903   27242 main.go:141] libmachine: (ha-856893-m03)       <backend model='random'>/dev/random</backend>
	I0703 23:07:07.217917   27242 main.go:141] libmachine: (ha-856893-m03)     </rng>
	I0703 23:07:07.217941   27242 main.go:141] libmachine: (ha-856893-m03)     
	I0703 23:07:07.217959   27242 main.go:141] libmachine: (ha-856893-m03)     
	I0703 23:07:07.217972   27242 main.go:141] libmachine: (ha-856893-m03)   </devices>
	I0703 23:07:07.217982   27242 main.go:141] libmachine: (ha-856893-m03) </domain>
	I0703 23:07:07.217997   27242 main.go:141] libmachine: (ha-856893-m03) 
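	[editor's note] The block above is the libvirt domain XML the kvm2 driver generates for the new node, echoed one line at a time. A stripped-down sketch of producing a comparable definition with Go's text/template; the template, struct fields and file paths here are illustrative, not minikube's actual code:

	package main

	import (
		"os"
		"text/template"
	)

	// domainConfig carries the handful of values substituted into the template below.
	type domainConfig struct {
		Name     string
		MemoryMB int
		VCPUs    int
		DiskPath string
		ISOPath  string
		Network  string
	}

	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMB}}</memory>
	  <vcpu>{{.VCPUs}}</vcpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='{{.ISOPath}}'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads'/>
	      <source file='{{.DiskPath}}'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='{{.Network}}'/>
	      <model type='virtio'/>
	    </interface>
	  </devices>
	</domain>
	`

	func main() {
		cfg := domainConfig{
			Name:     "ha-856893-m03",
			MemoryMB: 2200,
			VCPUs:    2,
			DiskPath: "/path/to/ha-856893-m03.rawdisk", // illustrative path
			ISOPath:  "/path/to/boot2docker.iso",       // illustrative path
			Network:  "mk-ha-856893",
		}
		// Render to stdout; a real driver would hand this XML to libvirt's
		// define-domain call instead of printing it.
		tmpl := template.Must(template.New("domain").Parse(domainTmpl))
		if err := tmpl.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}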
	I0703 23:07:07.224727   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:c9:f0:2c in network default
	I0703 23:07:07.225301   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:07.225318   27242 main.go:141] libmachine: (ha-856893-m03) Ensuring networks are active...
	I0703 23:07:07.226041   27242 main.go:141] libmachine: (ha-856893-m03) Ensuring network default is active
	I0703 23:07:07.226394   27242 main.go:141] libmachine: (ha-856893-m03) Ensuring network mk-ha-856893 is active
	I0703 23:07:07.226752   27242 main.go:141] libmachine: (ha-856893-m03) Getting domain xml...
	I0703 23:07:07.227531   27242 main.go:141] libmachine: (ha-856893-m03) Creating domain...
	I0703 23:07:08.474940   27242 main.go:141] libmachine: (ha-856893-m03) Waiting to get IP...
	I0703 23:07:08.475929   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:08.476406   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:08.476429   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:08.476388   28071 retry.go:31] will retry after 297.28942ms: waiting for machine to come up
	I0703 23:07:08.775075   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:08.775657   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:08.775687   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:08.775611   28071 retry.go:31] will retry after 260.487003ms: waiting for machine to come up
	I0703 23:07:09.038093   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:09.038543   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:09.038570   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:09.038494   28071 retry.go:31] will retry after 356.550698ms: waiting for machine to come up
	I0703 23:07:09.396841   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:09.397258   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:09.397282   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:09.397203   28071 retry.go:31] will retry after 565.372677ms: waiting for machine to come up
	I0703 23:07:09.963728   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:09.964167   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:09.964188   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:09.964122   28071 retry.go:31] will retry after 573.536697ms: waiting for machine to come up
	I0703 23:07:10.539640   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:10.540032   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:10.540082   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:10.540012   28071 retry.go:31] will retry after 887.46227ms: waiting for machine to come up
	I0703 23:07:11.430282   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:11.430740   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:11.430768   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:11.430695   28071 retry.go:31] will retry after 941.491473ms: waiting for machine to come up
	I0703 23:07:12.373968   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:12.374294   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:12.374322   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:12.374269   28071 retry.go:31] will retry after 1.104133505s: waiting for machine to come up
	I0703 23:07:13.479543   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:13.480022   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:13.480050   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:13.479968   28071 retry.go:31] will retry after 1.21416202s: waiting for machine to come up
	I0703 23:07:14.696397   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:14.696937   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:14.696966   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:14.696888   28071 retry.go:31] will retry after 1.787823566s: waiting for machine to come up
	I0703 23:07:16.486978   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:16.487567   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:16.487594   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:16.487515   28071 retry.go:31] will retry after 2.71693532s: waiting for machine to come up
	I0703 23:07:19.206063   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:19.206532   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:19.206556   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:19.206496   28071 retry.go:31] will retry after 2.779815264s: waiting for machine to come up
	I0703 23:07:21.987373   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:21.987801   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:21.987822   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:21.987757   28071 retry.go:31] will retry after 4.466413602s: waiting for machine to come up
	I0703 23:07:26.457850   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:26.458259   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find current IP address of domain ha-856893-m03 in network mk-ha-856893
	I0703 23:07:26.458289   27242 main.go:141] libmachine: (ha-856893-m03) DBG | I0703 23:07:26.458211   28071 retry.go:31] will retry after 4.340225073s: waiting for machine to come up
	I0703 23:07:30.801191   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:30.801617   27242 main.go:141] libmachine: (ha-856893-m03) Found IP for machine: 192.168.39.186
	I0703 23:07:30.801638   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has current primary IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:30.801645   27242 main.go:141] libmachine: (ha-856893-m03) Reserving static IP address...
	I0703 23:07:30.801999   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find host DHCP lease matching {name: "ha-856893-m03", mac: "52:54:00:cb:e8:37", ip: "192.168.39.186"} in network mk-ha-856893
	I0703 23:07:30.882616   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Getting to WaitForSSH function...
	I0703 23:07:30.882638   27242 main.go:141] libmachine: (ha-856893-m03) Reserved static IP address: 192.168.39.186
	I0703 23:07:30.882649   27242 main.go:141] libmachine: (ha-856893-m03) Waiting for SSH to be available...
	I0703 23:07:30.885337   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:30.885691   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893
	I0703 23:07:30.885733   27242 main.go:141] libmachine: (ha-856893-m03) DBG | unable to find defined IP address of network mk-ha-856893 interface with MAC address 52:54:00:cb:e8:37
	I0703 23:07:30.885860   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Using SSH client type: external
	I0703 23:07:30.885892   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa (-rw-------)
	I0703 23:07:30.885924   27242 main.go:141] libmachine: (ha-856893-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 23:07:30.885938   27242 main.go:141] libmachine: (ha-856893-m03) DBG | About to run SSH command:
	I0703 23:07:30.885954   27242 main.go:141] libmachine: (ha-856893-m03) DBG | exit 0
	I0703 23:07:30.889872   27242 main.go:141] libmachine: (ha-856893-m03) DBG | SSH cmd err, output: exit status 255: 
	I0703 23:07:30.889897   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0703 23:07:30.889906   27242 main.go:141] libmachine: (ha-856893-m03) DBG | command : exit 0
	I0703 23:07:30.889912   27242 main.go:141] libmachine: (ha-856893-m03) DBG | err     : exit status 255
	I0703 23:07:30.889924   27242 main.go:141] libmachine: (ha-856893-m03) DBG | output  : 
	I0703 23:07:33.891677   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Getting to WaitForSSH function...
	I0703 23:07:33.894047   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:33.894452   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:33.894489   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:33.894620   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Using SSH client type: external
	I0703 23:07:33.894646   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa (-rw-------)
	I0703 23:07:33.894674   27242 main.go:141] libmachine: (ha-856893-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 23:07:33.894692   27242 main.go:141] libmachine: (ha-856893-m03) DBG | About to run SSH command:
	I0703 23:07:33.894713   27242 main.go:141] libmachine: (ha-856893-m03) DBG | exit 0
	I0703 23:07:34.020118   27242 main.go:141] libmachine: (ha-856893-m03) DBG | SSH cmd err, output: <nil>: 
	I0703 23:07:34.020375   27242 main.go:141] libmachine: (ha-856893-m03) KVM machine creation complete!
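	[editor's note] WaitForSSH above simply runs "exit 0" over SSH until it succeeds; the first attempt fails with exit status 255 because sshd inside the guest is not up yet. A sketch of the same probe shelling out to the system ssh client; the key path is illustrative and the retry interval is arbitrary, while the user and address mirror the log:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady returns nil once "ssh ... exit 0" succeeds against the host,
	// retrying until the deadline. It relies on the system ssh binary.
	func sshReady(user, addr, keyPath string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", keyPath,
				fmt.Sprintf("%s@%s", user, addr),
				"exit 0")
			err := cmd.Run()
			if err == nil {
				return nil
			}
			fmt.Printf("ssh not ready yet: %v\n", err)
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("ssh to %s not available after %s", addr, timeout)
	}

	func main() {
		err := sshReady("docker", "192.168.39.186", "/path/to/id_rsa", 2*time.Minute)
		fmt.Println(err)
	}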
	I0703 23:07:34.020757   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetConfigRaw
	I0703 23:07:34.021289   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:34.021526   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:34.021689   27242 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0703 23:07:34.021707   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetState
	I0703 23:07:34.023123   27242 main.go:141] libmachine: Detecting operating system of created instance...
	I0703 23:07:34.023138   27242 main.go:141] libmachine: Waiting for SSH to be available...
	I0703 23:07:34.023143   27242 main.go:141] libmachine: Getting to WaitForSSH function...
	I0703 23:07:34.023149   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.025507   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.025894   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.025914   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.026099   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:34.026281   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.026437   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.026592   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:34.026726   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:07:34.026934   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0703 23:07:34.026944   27242 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0703 23:07:34.135745   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:07:34.135768   27242 main.go:141] libmachine: Detecting the provisioner...
	I0703 23:07:34.135780   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.138736   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.139145   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.139180   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.139394   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:34.139768   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.139989   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.140173   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:34.140391   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:07:34.140627   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0703 23:07:34.140645   27242 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0703 23:07:34.252832   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0703 23:07:34.252930   27242 main.go:141] libmachine: found compatible host: buildroot
	I0703 23:07:34.252950   27242 main.go:141] libmachine: Provisioning with buildroot...
	I0703 23:07:34.252959   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetMachineName
	I0703 23:07:34.253225   27242 buildroot.go:166] provisioning hostname "ha-856893-m03"
	I0703 23:07:34.253251   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetMachineName
	I0703 23:07:34.253430   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.256044   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.256422   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.256449   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.256567   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:34.256736   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.256887   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.257011   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:34.257189   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:07:34.257390   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0703 23:07:34.257403   27242 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-856893-m03 && echo "ha-856893-m03" | sudo tee /etc/hostname
	I0703 23:07:34.378754   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-856893-m03
	
	I0703 23:07:34.378782   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.381654   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.381966   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.382002   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.382235   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:34.382443   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.382616   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.382798   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:34.382982   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:07:34.383164   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0703 23:07:34.383188   27242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-856893-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-856893-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-856893-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 23:07:34.499458   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:07:34.499488   27242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0703 23:07:34.499506   27242 buildroot.go:174] setting up certificates
	I0703 23:07:34.499514   27242 provision.go:84] configureAuth start
	I0703 23:07:34.499522   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetMachineName
	I0703 23:07:34.499784   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetIP
	I0703 23:07:34.503044   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.503446   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.503473   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.503688   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.506053   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.506402   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.506429   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.506591   27242 provision.go:143] copyHostCerts
	I0703 23:07:34.506619   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:07:34.506654   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0703 23:07:34.506666   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:07:34.506747   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0703 23:07:34.506861   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:07:34.506886   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0703 23:07:34.506891   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:07:34.506928   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0703 23:07:34.506984   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:07:34.507007   27242 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0703 23:07:34.507016   27242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:07:34.507046   27242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0703 23:07:34.507111   27242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.ha-856893-m03 san=[127.0.0.1 192.168.39.186 ha-856893-m03 localhost minikube]
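	[editor's note] configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the node IP, the hostname, localhost and minikube, signed by the profile's CA. A compact approximation in Go showing how those SANs are set; this sketch self-signs for brevity, whereas minikube signs with its CA cert and key:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-856893-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs mirroring the log: the IPs and DNS names the server cert must cover.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.186")},
			DNSNames:    []string{"ha-856893-m03", "localhost", "minikube"},
		}
		// Self-signed here (template == parent); real code passes the CA as parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}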
	I0703 23:07:34.691119   27242 provision.go:177] copyRemoteCerts
	I0703 23:07:34.691175   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:07:34.691195   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.693763   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.694102   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.694129   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.694311   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:34.694502   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.694665   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:34.694864   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa Username:docker}
	I0703 23:07:34.778514   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0703 23:07:34.778586   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0703 23:07:34.805663   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0703 23:07:34.805731   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0703 23:07:34.834448   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0703 23:07:34.834507   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0703 23:07:34.863423   27242 provision.go:87] duration metric: took 363.896644ms to configureAuth
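	[editor's note] copyRemoteCerts above pushes ca.pem, server.pem and server-key.pem into /etc/docker on the new node over SSH. A small sketch using the system scp binary; the file names match the log, while the host, key path and permission handling are simplified, since minikube actually streams the files through its own ssh_runner rather than calling scp:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// copyCert copies one local file to the remote host with scp; the remote
	// directory is assumed to exist and be writable by the ssh user.
	func copyCert(keyPath, user, addr, local, remote string) error {
		cmd := exec.Command("scp",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-i", keyPath,
			local,
			fmt.Sprintf("%s@%s:%s", user, addr, remote))
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("scp %s: %v: %s", local, err, out)
		}
		return nil
	}

	func main() {
		// Local paths are illustrative; destinations match the log.
		files := map[string]string{
			"ca.pem":         "/etc/docker/ca.pem",
			"server.pem":     "/etc/docker/server.pem",
			"server-key.pem": "/etc/docker/server-key.pem",
		}
		for local, remote := range files {
			if err := copyCert("/path/to/id_rsa", "docker", "192.168.39.186", local, remote); err != nil {
				fmt.Println(err)
			}
		}
	}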
	I0703 23:07:34.863450   27242 buildroot.go:189] setting minikube options for container-runtime
	I0703 23:07:34.863660   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:07:34.863743   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:34.866154   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.866486   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:34.866518   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:34.866663   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:34.866918   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.867093   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:34.867227   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:34.867371   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:07:34.867582   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0703 23:07:34.867596   27242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0703 23:07:35.163731   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0703 23:07:35.163761   27242 main.go:141] libmachine: Checking connection to Docker...
	I0703 23:07:35.163770   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetURL
	I0703 23:07:35.165134   27242 main.go:141] libmachine: (ha-856893-m03) DBG | Using libvirt version 6000000
	I0703 23:07:35.167475   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.167858   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.167903   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.168131   27242 main.go:141] libmachine: Docker is up and running!
	I0703 23:07:35.168152   27242 main.go:141] libmachine: Reticulating splines...
	I0703 23:07:35.168160   27242 client.go:171] duration metric: took 28.327898073s to LocalClient.Create
	I0703 23:07:35.168185   27242 start.go:167] duration metric: took 28.327960056s to libmachine.API.Create "ha-856893"
	I0703 23:07:35.168196   27242 start.go:293] postStartSetup for "ha-856893-m03" (driver="kvm2")
	I0703 23:07:35.168208   27242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 23:07:35.168229   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:35.168465   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 23:07:35.168488   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:35.170847   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.171220   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.171254   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.171456   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:35.171671   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:35.171851   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:35.172018   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa Username:docker}
	I0703 23:07:35.255274   27242 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 23:07:35.260351   27242 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 23:07:35.260377   27242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0703 23:07:35.260467   27242 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0703 23:07:35.260568   27242 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0703 23:07:35.260583   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /etc/ssl/certs/165742.pem
	I0703 23:07:35.260687   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0703 23:07:35.272083   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:07:35.299979   27242 start.go:296] duration metric: took 131.767901ms for postStartSetup
	I0703 23:07:35.300032   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetConfigRaw
	I0703 23:07:35.300664   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetIP
	I0703 23:07:35.303344   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.303779   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.303810   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.304247   27242 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:07:35.304465   27242 start.go:128] duration metric: took 28.482160498s to createHost
	I0703 23:07:35.304487   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:35.307047   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.307392   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.307420   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.307576   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:35.307798   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:35.308015   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:35.308182   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:35.308380   27242 main.go:141] libmachine: Using SSH client type: native
	I0703 23:07:35.308593   27242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0703 23:07:35.308607   27242 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0703 23:07:35.420983   27242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720048055.401183800
	
	I0703 23:07:35.421004   27242 fix.go:216] guest clock: 1720048055.401183800
	I0703 23:07:35.421014   27242 fix.go:229] Guest: 2024-07-03 23:07:35.4011838 +0000 UTC Remote: 2024-07-03 23:07:35.304476938 +0000 UTC m=+166.034732868 (delta=96.706862ms)
	I0703 23:07:35.421033   27242 fix.go:200] guest clock delta is within tolerance: 96.706862ms
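The guest/host clock comparison above reduces to an absolute-difference check against a tolerance. A minimal Go sketch of that check, using the two timestamps from this log (the 1s tolerance is an assumption for illustration, not a value taken from this output):

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance returns the absolute difference between the guest
// clock (read over SSH) and the host clock, and whether it is small enough to
// skip resynchronizing the guest.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Date(2024, 7, 3, 23, 7, 35, 304476938, time.UTC)  // "Remote" timestamp from the log
	guest := time.Date(2024, 7, 3, 23, 7, 35, 401183800, time.UTC) // "Guest" timestamp from the log
	delta, ok := clockDeltaWithinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta=96.706862ms within tolerance=true
}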
	I0703 23:07:35.421039   27242 start.go:83] releasing machines lock for "ha-856893-m03", held for 28.598837371s
	I0703 23:07:35.421065   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:35.421372   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetIP
	I0703 23:07:35.424018   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.424405   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.424434   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.426624   27242 out.go:177] * Found network options:
	I0703 23:07:35.427853   27242 out.go:177]   - NO_PROXY=192.168.39.172,192.168.39.157
	W0703 23:07:35.428985   27242 proxy.go:119] fail to check proxy env: Error ip not in block
	W0703 23:07:35.429002   27242 proxy.go:119] fail to check proxy env: Error ip not in block
	I0703 23:07:35.429017   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:35.429617   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:35.429822   27242 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:07:35.429928   27242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 23:07:35.429966   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	W0703 23:07:35.429991   27242 proxy.go:119] fail to check proxy env: Error ip not in block
	W0703 23:07:35.430012   27242 proxy.go:119] fail to check proxy env: Error ip not in block
	I0703 23:07:35.430073   27242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0703 23:07:35.430097   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:07:35.433231   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.433256   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.433599   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.433639   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.433688   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:35.433738   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:35.433819   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:35.433836   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:07:35.434034   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:35.434104   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:07:35.434184   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:35.434316   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:07:35.434344   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa Username:docker}
	I0703 23:07:35.434511   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa Username:docker}
	I0703 23:07:35.677657   27242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0703 23:07:35.684280   27242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 23:07:35.684340   27242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 23:07:35.700677   27242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0703 23:07:35.700696   27242 start.go:494] detecting cgroup driver to use...
	I0703 23:07:35.700755   27242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 23:07:35.716908   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 23:07:35.731925   27242 docker.go:217] disabling cri-docker service (if available) ...
	I0703 23:07:35.731993   27242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0703 23:07:35.747595   27242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0703 23:07:35.763296   27242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0703 23:07:35.878408   27242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0703 23:07:36.053007   27242 docker.go:233] disabling docker service ...
	I0703 23:07:36.053096   27242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0703 23:07:36.069537   27242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0703 23:07:36.084154   27242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0703 23:07:36.219803   27242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0703 23:07:36.349909   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0703 23:07:36.365327   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 23:07:36.386397   27242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0703 23:07:36.386449   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.398525   27242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0703 23:07:36.398584   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.410492   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.422111   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.433451   27242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 23:07:36.445276   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.456898   27242 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.477619   27242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:07:36.489825   27242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 23:07:36.501128   27242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0703 23:07:36.501191   27242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0703 23:07:36.516569   27242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0703 23:07:36.527341   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:07:36.659461   27242 ssh_runner.go:195] Run: sudo systemctl restart crio
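The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf before cri-o is restarted: the pause image is pinned and the cgroup manager is switched to cgroupfs. A minimal Go sketch of those first two substitutions applied to a config string (paths and values come from the log; the rest is illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"regexp"
)

// applyCrioOverrides mirrors the first two sed edits from the log:
// force the pause image and set the cgroup manager to cgroupfs.
func applyCrioOverrides(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.8\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(applyCrioOverrides(in))
}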
	I0703 23:07:36.809855   27242 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0703 23:07:36.809927   27242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0703 23:07:36.815110   27242 start.go:562] Will wait 60s for crictl version
	I0703 23:07:36.815186   27242 ssh_runner.go:195] Run: which crictl
	I0703 23:07:36.819348   27242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 23:07:36.866612   27242 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0703 23:07:36.866700   27242 ssh_runner.go:195] Run: crio --version
	I0703 23:07:36.896618   27242 ssh_runner.go:195] Run: crio --version
	I0703 23:07:36.932621   27242 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0703 23:07:36.933935   27242 out.go:177]   - env NO_PROXY=192.168.39.172
	I0703 23:07:36.935273   27242 out.go:177]   - env NO_PROXY=192.168.39.172,192.168.39.157
	I0703 23:07:36.936545   27242 main.go:141] libmachine: (ha-856893-m03) Calling .GetIP
	I0703 23:07:36.939214   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:36.939560   27242 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:07:36.939587   27242 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:07:36.939811   27242 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0703 23:07:36.944619   27242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:07:36.957968   27242 mustload.go:65] Loading cluster: ha-856893
	I0703 23:07:36.958224   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:07:36.958474   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:07:36.958515   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:07:36.973765   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46031
	I0703 23:07:36.974194   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:07:36.974697   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:07:36.974717   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:07:36.975026   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:07:36.975263   27242 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:07:36.976873   27242 host.go:66] Checking if "ha-856893" exists ...
	I0703 23:07:36.977188   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:07:36.977223   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:07:36.992987   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
	I0703 23:07:36.993384   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:07:36.993860   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:07:36.993887   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:07:36.994194   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:07:36.994378   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:07:36.994557   27242 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893 for IP: 192.168.39.186
	I0703 23:07:36.994567   27242 certs.go:194] generating shared ca certs ...
	I0703 23:07:36.994580   27242 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:07:36.994707   27242 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0703 23:07:36.994743   27242 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0703 23:07:36.994752   27242 certs.go:256] generating profile certs ...
	I0703 23:07:36.994817   27242 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key
	I0703 23:07:36.994840   27242 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.65668228
	I0703 23:07:36.994854   27242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.65668228 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.172 192.168.39.157 192.168.39.186 192.168.39.254]
	I0703 23:07:37.337183   27242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.65668228 ...
	I0703 23:07:37.337219   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.65668228: {Name:mk67b34580ae56e313e039e356b49a596df2616e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:07:37.337409   27242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.65668228 ...
	I0703 23:07:37.337428   27242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.65668228: {Name:mk926f699ebfb8cd1cc65b70f9375a71b834773b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:07:37.337526   27242 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.65668228 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt
	I0703 23:07:37.337675   27242 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.65668228 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key
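The apiserver certificate is regenerated here because its IP SAN list (logged above) must cover the kubernetes Service IPs, loopback, all three control-plane node addresses, and the kube-vip address now that m03 exists. A minimal crypto/x509 sketch of issuing a serving cert with that SAN list, assuming caCert and caKey are the already-loaded cluster CA (illustrative only, not minikube's code):

import (
	"crypto"
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// issueAPIServerCert signs a serving certificate whose IP SANs match the
// list logged above.
func issueAPIServerCert(caCert *x509.Certificate, caKey crypto.Signer) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"system:masters"}}, // subject is an assumption
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.172"), net.ParseIP("192.168.39.157"),
			net.ParseIP("192.168.39.186"), net.ParseIP("192.168.39.254"),
		},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().AddDate(3, 0, 0),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}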
	I0703 23:07:37.337825   27242 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key
	I0703 23:07:37.337842   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0703 23:07:37.337858   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0703 23:07:37.337874   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0703 23:07:37.337893   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0703 23:07:37.337911   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0703 23:07:37.337929   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0703 23:07:37.337945   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0703 23:07:37.337962   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0703 23:07:37.338026   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0703 23:07:37.338066   27242 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0703 23:07:37.338079   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0703 23:07:37.338112   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0703 23:07:37.338144   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0703 23:07:37.338183   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0703 23:07:37.338236   27242 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:07:37.338272   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem -> /usr/share/ca-certificates/16574.pem
	I0703 23:07:37.338293   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /usr/share/ca-certificates/165742.pem
	I0703 23:07:37.338311   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:07:37.338353   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:07:37.341309   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:07:37.341713   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:07:37.341753   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:07:37.341942   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:07:37.342152   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:07:37.342311   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:07:37.342478   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:07:37.416222   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0703 23:07:37.421398   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0703 23:07:37.433219   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0703 23:07:37.438229   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0703 23:07:37.450051   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0703 23:07:37.454475   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0703 23:07:37.465922   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0703 23:07:37.470453   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0703 23:07:37.482305   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0703 23:07:37.486680   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0703 23:07:37.498268   27242 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0703 23:07:37.503288   27242 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0703 23:07:37.515695   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 23:07:37.543420   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 23:07:37.571775   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 23:07:37.601487   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0703 23:07:37.630721   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0703 23:07:37.665301   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0703 23:07:37.692166   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 23:07:37.719787   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0703 23:07:37.751460   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0703 23:07:37.778803   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0703 23:07:37.805997   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 23:07:37.832086   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0703 23:07:37.850763   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0703 23:07:37.869670   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0703 23:07:37.888584   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0703 23:07:37.906796   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0703 23:07:37.924790   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0703 23:07:37.943082   27242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0703 23:07:37.963450   27242 ssh_runner.go:195] Run: openssl version
	I0703 23:07:37.970013   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0703 23:07:37.981740   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0703 23:07:37.986778   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0703 23:07:37.986831   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0703 23:07:37.993242   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0703 23:07:38.004656   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 23:07:38.016695   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:07:38.021674   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:07:38.021728   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:07:38.027634   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0703 23:07:38.039118   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0703 23:07:38.050655   27242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0703 23:07:38.055464   27242 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0703 23:07:38.055548   27242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0703 23:07:38.061625   27242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
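Each CA bundle copied under /usr/share/ca-certificates is then exposed to OpenSSL's lookup path by linking it into /etc/ssl/certs as <subject-hash>.0, which is exactly what the openssl x509 -hash / ln -fs pairs above do. A minimal os/exec sketch of that pattern, assuming openssl is on PATH and the process can write to /etc/ssl/certs:

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and
// (re)creates the /etc/ssl/certs/<hash>.0 symlink pointing at it.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}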
	I0703 23:07:38.073265   27242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 23:07:38.078693   27242 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0703 23:07:38.078753   27242 kubeadm.go:928] updating node {m03 192.168.39.186 8443 v1.30.2 crio true true} ...
	I0703 23:07:38.078862   27242 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-856893-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0703 23:07:38.078895   27242 kube-vip.go:115] generating kube-vip config ...
	I0703 23:07:38.078937   27242 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0703 23:07:38.096141   27242 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0703 23:07:38.096245   27242 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
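The kube-vip static pod manifest above is generated per cluster: the VIP address, the apiserver port, and the load-balancing env vars are filled in, and lb_enable is only emitted once the IPVS modules load (the "auto-enabling control-plane load-balancing" line). A minimal text/template sketch of that kind of substitution; the field names mirror the env vars in the manifest, but the template itself is illustrative and not minikube's:

package main

import (
	"os"
	"text/template"
)

// vipParams carries the per-cluster values substituted into the manifest.
type vipParams struct {
	Address  string // the HA virtual IP, e.g. 192.168.39.254
	Port     string // the apiserver port the VIP fronts
	LBEnable bool   // control-plane load balancing, enabled when IPVS modules load
}

var vipEnv = template.Must(template.New("kube-vip").Parse(
	"    - name: address\n      value: {{.Address}}\n" +
		"    - name: port\n      value: \"{{.Port}}\"\n" +
		"{{if .LBEnable}}    - name: lb_enable\n      value: \"true\"\n" +
		"    - name: lb_port\n      value: \"{{.Port}}\"\n{{end}}"))

func main() {
	// Values taken from the manifest logged above.
	_ = vipEnv.Execute(os.Stdout, vipParams{Address: "192.168.39.254", Port: "8443", LBEnable: true})
}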
	I0703 23:07:38.096299   27242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0703 23:07:38.107262   27242 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0703 23:07:38.107316   27242 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0703 23:07:38.118852   27242 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0703 23:07:38.118915   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:07:38.118922   27242 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0703 23:07:38.118857   27242 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0703 23:07:38.118960   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0703 23:07:38.119033   27242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0703 23:07:38.118941   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0703 23:07:38.119135   27242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0703 23:07:38.137934   27242 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0703 23:07:38.137967   27242 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0703 23:07:38.137996   27242 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0703 23:07:38.137999   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0703 23:07:38.138014   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0703 23:07:38.138057   27242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0703 23:07:38.149338   27242 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0703 23:07:38.149380   27242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
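The binary transfer above follows a stat-then-copy pattern: the destination under /var/lib/minikube/binaries is checked first, and kubeadm/kubectl/kubelet are only pushed when the stat fails (as on this fresh node) or disagrees with the cached copy. A local Go sketch of that copy-if-missing decision (the SSH transport is elided; only the check and the copy are shown):

import (
	"io"
	"os"
)

// copyIfMissing copies src to dst only when dst does not exist or its size
// differs from src, mirroring the existence checks in the log.
func copyIfMissing(src, dst string) error {
	srcInfo, err := os.Stat(src)
	if err != nil {
		return err
	}
	if dstInfo, err := os.Stat(dst); err == nil && dstInfo.Size() == srcInfo.Size() {
		return nil // already present, skip the transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}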
	I0703 23:07:39.190629   27242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0703 23:07:39.200854   27242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0703 23:07:39.219472   27242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 23:07:39.238369   27242 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0703 23:07:39.256931   27242 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0703 23:07:39.261281   27242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:07:39.275182   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:07:39.397746   27242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:07:39.415272   27242 host.go:66] Checking if "ha-856893" exists ...
	I0703 23:07:39.415637   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:07:39.415672   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:07:39.432698   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42803
	I0703 23:07:39.433090   27242 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:07:39.433538   27242 main.go:141] libmachine: Using API Version  1
	I0703 23:07:39.433562   27242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:07:39.433859   27242 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:07:39.434046   27242 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:07:39.434186   27242 start.go:316] joinCluster: &{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:07:39.434327   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0703 23:07:39.434341   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:07:39.437296   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:07:39.437726   27242 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:07:39.437760   27242 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:07:39.437962   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:07:39.438140   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:07:39.438348   27242 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:07:39.438503   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:07:39.593405   27242 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:07:39.593461   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bwnzkl.tqjqj6bgpj1edijr --discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-856893-m03 --control-plane --apiserver-advertise-address=192.168.39.186 --apiserver-bind-port=8443"
	I0703 23:08:02.813599   27242 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bwnzkl.tqjqj6bgpj1edijr --discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-856893-m03 --control-plane --apiserver-advertise-address=192.168.39.186 --apiserver-bind-port=8443": (23.220101132s)
	I0703 23:08:02.813663   27242 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0703 23:08:03.385422   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-856893-m03 minikube.k8s.io/updated_at=2024_07_03T23_08_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e minikube.k8s.io/name=ha-856893 minikube.k8s.io/primary=false
	I0703 23:08:03.515792   27242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-856893-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0703 23:08:03.619588   27242 start.go:318] duration metric: took 24.185396632s to joinCluster
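After the join, the new member is labeled with minikube metadata and the control-plane NoSchedule taint is removed, done above with kubectl against the local kubeconfig. A minimal client-go sketch of the same two changes; cs is assumed to be a configured *kubernetes.Clientset and the label set is trimmed to one entry for brevity:

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// labelAndUntaint applies a minikube label and strips the control-plane
// NoSchedule taint from the named node, mirroring the kubectl calls above.
func labelAndUntaint(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	labelPatch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/primary":"false"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(ctx, name, types.StrategicMergePatchType, labelPatch, metav1.PatchOptions{}); err != nil {
		return err
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	kept := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		if t.Key != "node-role.kubernetes.io/control-plane" {
			kept = append(kept, t)
		}
	}
	node.Spec.Taints = kept
	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}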
	I0703 23:08:03.619710   27242 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:08:03.620031   27242 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:08:03.621348   27242 out.go:177] * Verifying Kubernetes components...
	I0703 23:08:03.622685   27242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:08:03.881282   27242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:08:03.907961   27242 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:08:03.908243   27242 kapi.go:59] client config for ha-856893: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.crt", KeyFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key", CAFile:"/home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfd900), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0703 23:08:03.908323   27242 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.172:8443
	I0703 23:08:03.908583   27242 node_ready.go:35] waiting up to 6m0s for node "ha-856893-m03" to be "Ready" ...
	I0703 23:08:03.908688   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:03.908697   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:03.908707   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:03.908713   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:03.912712   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:04.408879   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:04.408907   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:04.408919   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:04.408925   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:04.414154   27242 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0703 23:08:04.909645   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:04.909672   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:04.909683   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:04.909689   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:04.914163   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:05.409099   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:05.409119   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:05.409127   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:05.409131   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:05.413290   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:05.908819   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:05.908842   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:05.908849   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:05.908853   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:05.913655   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:05.914382   27242 node_ready.go:53] node "ha-856893-m03" has status "Ready":"False"
	I0703 23:08:06.409134   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:06.409160   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:06.409170   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:06.409175   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:06.412666   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:06.909606   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:06.909627   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:06.909637   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:06.909645   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:06.913376   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:07.409370   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:07.409394   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:07.409408   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:07.409414   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:07.416499   27242 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0703 23:08:07.909141   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:07.909171   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:07.909181   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:07.909186   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:07.914036   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:07.914974   27242 node_ready.go:53] node "ha-856893-m03" has status "Ready":"False"
	I0703 23:08:08.409386   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:08.409412   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:08.409423   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:08.409441   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:08.413022   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:08.909609   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:08.909634   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:08.909646   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:08.909651   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:08.913449   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:09.409635   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:09.409658   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:09.409669   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:09.409675   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:09.413889   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:09.909448   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:09.909468   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:09.909477   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:09.909482   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:09.913589   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:10.409105   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:10.409125   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.409134   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.409139   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.412940   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:10.413603   27242 node_ready.go:53] node "ha-856893-m03" has status "Ready":"False"
	I0703 23:08:10.909037   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:10.909064   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.909075   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.909081   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.916194   27242 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0703 23:08:10.916783   27242 node_ready.go:49] node "ha-856893-m03" has status "Ready":"True"
	I0703 23:08:10.916802   27242 node_ready.go:38] duration metric: took 7.008205065s for node "ha-856893-m03" to be "Ready" ...
	I0703 23:08:10.916818   27242 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 23:08:10.916888   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:08:10.916897   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.916904   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.916912   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.923686   27242 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0703 23:08:10.929901   27242 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n5tdf" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.930006   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-n5tdf
	I0703 23:08:10.930018   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.930028   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.930034   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.933138   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:10.933987   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:10.934003   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.934020   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.934026   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.937163   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:10.937765   27242 pod_ready.go:92] pod "coredns-7db6d8ff4d-n5tdf" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:10.937784   27242 pod_ready.go:81] duration metric: took 7.857453ms for pod "coredns-7db6d8ff4d-n5tdf" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.937795   27242 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pwqfl" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.937850   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-pwqfl
	I0703 23:08:10.937858   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.937865   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.937872   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.940806   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:10.941415   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:10.941431   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.941441   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.941446   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.944345   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:10.944919   27242 pod_ready.go:92] pod "coredns-7db6d8ff4d-pwqfl" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:10.944938   27242 pod_ready.go:81] duration metric: took 7.136212ms for pod "coredns-7db6d8ff4d-pwqfl" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.944947   27242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.944993   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893
	I0703 23:08:10.945001   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.945008   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.945011   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.947818   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:10.948517   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:10.948534   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.948544   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.948552   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.951211   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:10.951848   27242 pod_ready.go:92] pod "etcd-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:10.951863   27242 pod_ready.go:81] duration metric: took 6.910613ms for pod "etcd-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.951888   27242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.951954   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m02
	I0703 23:08:10.951965   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.951974   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.951980   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.954591   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:10.955176   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:10.955193   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:10.955202   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:10.955208   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:10.957501   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:10.958008   27242 pod_ready.go:92] pod "etcd-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:10.958025   27242 pod_ready.go:81] duration metric: took 6.129203ms for pod "etcd-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:10.958033   27242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:11.109948   27242 request.go:629] Waited for 151.854764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:11.110037   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:11.110047   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:11.110057   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:11.110067   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:11.115838   27242 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0703 23:08:11.309816   27242 request.go:629] Waited for 193.188796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:11.309873   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:11.309878   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:11.309886   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:11.309892   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:11.313593   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:11.509365   27242 request.go:629] Waited for 50.202967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:11.509465   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:11.509477   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:11.509489   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:11.509500   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:11.514572   27242 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0703 23:08:11.709248   27242 request.go:629] Waited for 193.32848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:11.709299   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:11.709304   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:11.709325   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:11.709333   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:11.713036   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:11.959125   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:11.959147   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:11.959155   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:11.959160   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:11.963102   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:12.109001   27242 request.go:629] Waited for 144.798659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:12.109057   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:12.109062   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:12.109071   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:12.109077   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:12.112847   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:12.458780   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:12.458804   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:12.458816   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:12.458822   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:12.462522   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:12.509515   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:12.509539   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:12.509550   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:12.509556   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:12.513776   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:12.958862   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:12.958884   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:12.958892   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:12.958896   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:12.963076   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:12.964032   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:12.964055   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:12.964066   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:12.964072   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:12.967555   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:12.968207   27242 pod_ready.go:102] pod "etcd-ha-856893-m03" in "kube-system" namespace has status "Ready":"False"
	I0703 23:08:13.458279   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:13.458306   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:13.458322   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:13.458327   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:13.461824   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:13.462472   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:13.462489   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:13.462497   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:13.462506   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:13.465331   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:13.958289   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:13.958310   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:13.958318   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:13.958324   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:13.962681   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:13.963320   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:13.963333   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:13.963340   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:13.963344   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:13.966600   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:14.458259   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:14.458282   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:14.458290   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:14.458293   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:14.462012   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:14.462555   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:14.462570   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:14.462577   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:14.462581   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:14.465499   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:14.959177   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:14.959199   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:14.959207   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:14.959212   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:14.962396   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:14.963280   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:14.963296   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:14.963304   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:14.963309   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:14.966765   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:15.459098   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:15.459127   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:15.459137   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:15.459142   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:15.462880   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:15.463536   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:15.463554   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:15.463565   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:15.463573   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:15.466897   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:15.467438   27242 pod_ready.go:102] pod "etcd-ha-856893-m03" in "kube-system" namespace has status "Ready":"False"
	I0703 23:08:15.958824   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:15.958850   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:15.958862   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:15.958870   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:15.964122   27242 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0703 23:08:15.964870   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:15.964888   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:15.964896   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:15.964900   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:15.967828   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:16.459240   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-ha-856893-m03
	I0703 23:08:16.459265   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.459275   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.459283   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.462430   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:16.463285   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:16.463301   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.463308   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.463312   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.466431   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:16.467055   27242 pod_ready.go:92] pod "etcd-ha-856893-m03" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:16.467074   27242 pod_ready.go:81] duration metric: took 5.509032519s for pod "etcd-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.467090   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.467139   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893
	I0703 23:08:16.467147   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.467154   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.467159   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.470113   27242 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0703 23:08:16.470753   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:16.470768   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.470775   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.470781   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.479436   27242 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0703 23:08:16.479957   27242 pod_ready.go:92] pod "kube-apiserver-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:16.479976   27242 pod_ready.go:81] duration metric: took 12.880584ms for pod "kube-apiserver-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.479986   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.480043   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02
	I0703 23:08:16.480051   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.480058   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.480068   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.483359   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:16.509453   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:16.509489   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.509499   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.509506   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.514051   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:16.514499   27242 pod_ready.go:92] pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:16.514518   27242 pod_ready.go:81] duration metric: took 34.526271ms for pod "kube-apiserver-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.514527   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.709759   27242 request.go:629] Waited for 195.170406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m03
	I0703 23:08:16.709834   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m03
	I0703 23:08:16.709841   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.709851   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.709858   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.714113   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:16.909343   27242 request.go:629] Waited for 194.383103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:16.909408   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:16.909416   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:16.909426   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:16.909432   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:16.912650   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:16.913346   27242 pod_ready.go:92] pod "kube-apiserver-ha-856893-m03" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:16.913369   27242 pod_ready.go:81] duration metric: took 398.834831ms for pod "kube-apiserver-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:16.913384   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:17.109258   27242 request.go:629] Waited for 195.812463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893
	I0703 23:08:17.109335   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893
	I0703 23:08:17.109344   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:17.109351   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:17.109360   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:17.113410   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:17.309479   27242 request.go:629] Waited for 195.262429ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:17.309542   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:17.309551   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:17.309559   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:17.309563   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:17.313791   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:17.314385   27242 pod_ready.go:92] pod "kube-controller-manager-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:17.314404   27242 pod_ready.go:81] duration metric: took 401.012331ms for pod "kube-controller-manager-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:17.314414   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:17.509531   27242 request.go:629] Waited for 195.056137ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:08:17.509605   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m02
	I0703 23:08:17.509611   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:17.509620   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:17.509625   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:17.513357   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:17.709477   27242 request.go:629] Waited for 195.370636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:17.709535   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:17.709542   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:17.709553   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:17.709564   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:17.713345   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:17.713850   27242 pod_ready.go:92] pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:17.713874   27242 pod_ready.go:81] duration metric: took 399.45315ms for pod "kube-controller-manager-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:17.713889   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:17.909947   27242 request.go:629] Waited for 195.968544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:17.910018   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:17.910023   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:17.910030   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:17.910037   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:17.913897   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:18.109846   27242 request.go:629] Waited for 195.376393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:18.109896   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:18.109901   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:18.109910   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:18.109916   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:18.113762   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:18.309532   27242 request.go:629] Waited for 95.294007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:18.309604   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:18.309616   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:18.309631   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:18.309641   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:18.313751   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:18.509885   27242 request.go:629] Waited for 195.399896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:18.509978   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:18.509991   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:18.510000   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:18.510009   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:18.514418   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:18.714234   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:18.714255   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:18.714263   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:18.714266   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:18.717923   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:18.909739   27242 request.go:629] Waited for 191.248143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:18.909790   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:18.909795   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:18.909801   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:18.909804   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:18.916518   27242 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0703 23:08:19.214106   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:19.214126   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:19.214134   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:19.214139   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:19.217700   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:19.309750   27242 request.go:629] Waited for 91.33378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:19.309811   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:19.309818   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:19.309827   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:19.309832   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:19.314568   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:19.714371   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-856893-m03
	I0703 23:08:19.714395   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:19.714403   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:19.714407   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:19.717735   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:19.718452   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:19.718468   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:19.718475   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:19.718480   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:19.722349   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:19.722906   27242 pod_ready.go:92] pod "kube-controller-manager-ha-856893-m03" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:19.722923   27242 pod_ready.go:81] duration metric: took 2.009027669s for pod "kube-controller-manager-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:19.722933   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-52zqj" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:19.909367   27242 request.go:629] Waited for 186.370383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52zqj
	I0703 23:08:19.909459   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52zqj
	I0703 23:08:19.909471   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:19.909482   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:19.909487   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:19.913236   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:20.109762   27242 request.go:629] Waited for 195.344765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:20.109853   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:20.109861   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:20.109872   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:20.109883   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:20.114021   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:20.114608   27242 pod_ready.go:92] pod "kube-proxy-52zqj" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:20.114627   27242 pod_ready.go:81] duration metric: took 391.688117ms for pod "kube-proxy-52zqj" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:20.114636   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gkwrn" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:20.309372   27242 request.go:629] Waited for 194.665348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkwrn
	I0703 23:08:20.309436   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkwrn
	I0703 23:08:20.309446   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:20.309454   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:20.309462   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:20.313429   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:20.509612   27242 request.go:629] Waited for 195.389962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:20.509670   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:20.509676   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:20.509683   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:20.509687   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:20.513278   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:20.513970   27242 pod_ready.go:92] pod "kube-proxy-gkwrn" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:20.513988   27242 pod_ready.go:81] duration metric: took 399.344201ms for pod "kube-proxy-gkwrn" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:20.514002   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-stq26" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:20.710051   27242 request.go:629] Waited for 195.979482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-stq26
	I0703 23:08:20.710148   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-stq26
	I0703 23:08:20.710158   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:20.710166   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:20.710170   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:20.714583   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:20.909948   27242 request.go:629] Waited for 194.287257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:20.910006   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:20.910011   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:20.910018   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:20.910023   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:20.913833   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:20.914294   27242 pod_ready.go:92] pod "kube-proxy-stq26" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:20.914312   27242 pod_ready.go:81] duration metric: took 400.304119ms for pod "kube-proxy-stq26" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:20.914322   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:21.109389   27242 request.go:629] Waited for 194.990561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893
	I0703 23:08:21.109459   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893
	I0703 23:08:21.109469   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:21.109482   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:21.109488   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:21.114937   27242 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0703 23:08:21.309870   27242 request.go:629] Waited for 194.409083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:21.309938   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893
	I0703 23:08:21.309944   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:21.309951   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:21.309956   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:21.314789   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:21.315856   27242 pod_ready.go:92] pod "kube-scheduler-ha-856893" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:21.315905   27242 pod_ready.go:81] duration metric: took 401.575237ms for pod "kube-scheduler-ha-856893" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:21.315918   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:21.509959   27242 request.go:629] Waited for 193.98282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893-m02
	I0703 23:08:21.510017   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893-m02
	I0703 23:08:21.510023   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:21.510033   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:21.510039   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:21.513857   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:21.709794   27242 request.go:629] Waited for 195.374395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:21.709856   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m02
	I0703 23:08:21.709863   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:21.709888   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:21.709893   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:21.713692   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:21.714469   27242 pod_ready.go:92] pod "kube-scheduler-ha-856893-m02" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:21.714501   27242 pod_ready.go:81] duration metric: took 398.575885ms for pod "kube-scheduler-ha-856893-m02" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:21.714514   27242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:21.909971   27242 request.go:629] Waited for 195.381878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893-m03
	I0703 23:08:21.910060   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-856893-m03
	I0703 23:08:21.910068   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:21.910078   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:21.910085   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:21.914034   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:22.109540   27242 request.go:629] Waited for 194.902506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:22.109621   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/ha-856893-m03
	I0703 23:08:22.109629   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:22.109638   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:22.109644   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:22.113703   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:22.114348   27242 pod_ready.go:92] pod "kube-scheduler-ha-856893-m03" in "kube-system" namespace has status "Ready":"True"
	I0703 23:08:22.114368   27242 pod_ready.go:81] duration metric: took 399.84796ms for pod "kube-scheduler-ha-856893-m03" in "kube-system" namespace to be "Ready" ...
	I0703 23:08:22.114380   27242 pod_ready.go:38] duration metric: took 11.197545891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 23:08:22.114405   27242 api_server.go:52] waiting for apiserver process to appear ...
	I0703 23:08:22.114465   27242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:08:22.132505   27242 api_server.go:72] duration metric: took 18.512751964s to wait for apiserver process to appear ...
	I0703 23:08:22.132533   27242 api_server.go:88] waiting for apiserver healthz status ...
	I0703 23:08:22.132561   27242 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0703 23:08:22.137340   27242 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I0703 23:08:22.137434   27242 round_trippers.go:463] GET https://192.168.39.172:8443/version
	I0703 23:08:22.137445   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:22.137453   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:22.137457   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:22.138593   27242 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0703 23:08:22.138733   27242 api_server.go:141] control plane version: v1.30.2
	I0703 23:08:22.138758   27242 api_server.go:131] duration metric: took 6.217378ms to wait for apiserver health ...
	I0703 23:08:22.138774   27242 system_pods.go:43] waiting for kube-system pods to appear ...
	I0703 23:08:22.309132   27242 request.go:629] Waited for 170.284558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:08:22.309188   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:08:22.309193   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:22.309200   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:22.309204   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:22.317229   27242 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0703 23:08:22.325849   27242 system_pods.go:59] 24 kube-system pods found
	I0703 23:08:22.325890   27242 system_pods.go:61] "coredns-7db6d8ff4d-n5tdf" [8efbbc3c-e2d5-4f13-8672-cf7524f72e2d] Running
	I0703 23:08:22.325895   27242 system_pods.go:61] "coredns-7db6d8ff4d-pwqfl" [b4d22edf-e718-4755-b211-c8279481005e] Running
	I0703 23:08:22.325899   27242 system_pods.go:61] "etcd-ha-856893" [6b7ae8d7-b953-4a2a-8745-d672d5ef800d] Running
	I0703 23:08:22.325902   27242 system_pods.go:61] "etcd-ha-856893-m02" [c84132ba-b9c8-491c-a50d-ebe97039cb4a] Running
	I0703 23:08:22.325906   27242 system_pods.go:61] "etcd-ha-856893-m03" [5fb85989-093c-4239-a17e-761ac8c2f88c] Running
	I0703 23:08:22.325909   27242 system_pods.go:61] "kindnet-h7ntk" [18e6d992-2713-4399-a160-5f9196981f26] Running
	I0703 23:08:22.325912   27242 system_pods.go:61] "kindnet-rwqsq" [438780e1-32f3-4b9f-a85e-b6a9eeac268f] Running
	I0703 23:08:22.325914   27242 system_pods.go:61] "kindnet-vtd2b" [08f88183-a2c6-48b4-a14e-1c70ed08407a] Running
	I0703 23:08:22.325917   27242 system_pods.go:61] "kube-apiserver-ha-856893" [1818612d-9082-49ba-863e-25d530ad2893] Running
	I0703 23:08:22.325920   27242 system_pods.go:61] "kube-apiserver-ha-856893-m02" [09914e18-0b79-4722-9b14-a4b70d5ad800] Running
	I0703 23:08:22.325924   27242 system_pods.go:61] "kube-apiserver-ha-856893-m03" [d5ffdc07-8246-4c1b-848b-d103b69c96af] Running
	I0703 23:08:22.325927   27242 system_pods.go:61] "kube-controller-manager-ha-856893" [4906346a-6b42-4c32-9ea0-8cd61b06580b] Running
	I0703 23:08:22.325930   27242 system_pods.go:61] "kube-controller-manager-ha-856893-m02" [e90cd411-c185-4652-8e21-6fd48fb2b5d1] Running
	I0703 23:08:22.325933   27242 system_pods.go:61] "kube-controller-manager-ha-856893-m03" [71730b30-6db1-4376-931d-adb83ec87278] Running
	I0703 23:08:22.325936   27242 system_pods.go:61] "kube-proxy-52zqj" [7cbc16d2-e9f6-487f-a974-0fa21e4163b5] Running
	I0703 23:08:22.325940   27242 system_pods.go:61] "kube-proxy-gkwrn" [5fefb775-6224-47be-ad73-443c957c8f69] Running
	I0703 23:08:22.325943   27242 system_pods.go:61] "kube-proxy-stq26" [55db1583-2020-4a52-ab80-2f92ab63463b] Running
	I0703 23:08:22.325946   27242 system_pods.go:61] "kube-scheduler-ha-856893" [a3f43eb4-e248-41a0-b86c-0564becadc2b] Running
	I0703 23:08:22.325949   27242 system_pods.go:61] "kube-scheduler-ha-856893-m02" [02f862e9-2b93-4f74-847f-32d753dd9456] Running
	I0703 23:08:22.325952   27242 system_pods.go:61] "kube-scheduler-ha-856893-m03" [5ebea99b-ad4c-414f-a5a2-6501823bfc22] Running
	I0703 23:08:22.325954   27242 system_pods.go:61] "kube-vip-ha-856893" [0c4a20fd-99f2-4d6a-a332-2a79e4431b88] Running
	I0703 23:08:22.325958   27242 system_pods.go:61] "kube-vip-ha-856893-m02" [560fb438-21bf-45bb-9cf0-38dd21b61c80] Running
	I0703 23:08:22.325960   27242 system_pods.go:61] "kube-vip-ha-856893-m03" [a4a2c5c7-c2c9-4910-8716-9f22a9a50611] Running
	I0703 23:08:22.325963   27242 system_pods.go:61] "storage-provisioner" [91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8] Running
	I0703 23:08:22.325970   27242 system_pods.go:74] duration metric: took 187.186303ms to wait for pod list to return data ...
	I0703 23:08:22.325985   27242 default_sa.go:34] waiting for default service account to be created ...
	I0703 23:08:22.509121   27242 request.go:629] Waited for 183.060695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/default/serviceaccounts
	I0703 23:08:22.509193   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/default/serviceaccounts
	I0703 23:08:22.509200   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:22.509210   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:22.509218   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:22.512726   27242 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0703 23:08:22.512854   27242 default_sa.go:45] found service account: "default"
	I0703 23:08:22.512879   27242 default_sa.go:55] duration metric: took 186.885116ms for default service account to be created ...
	I0703 23:08:22.512891   27242 system_pods.go:116] waiting for k8s-apps to be running ...
	I0703 23:08:22.709312   27242 request.go:629] Waited for 196.355099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:08:22.709392   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I0703 23:08:22.709401   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:22.709415   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:22.709425   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:22.717218   27242 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0703 23:08:22.725427   27242 system_pods.go:86] 24 kube-system pods found
	I0703 23:08:22.725459   27242 system_pods.go:89] "coredns-7db6d8ff4d-n5tdf" [8efbbc3c-e2d5-4f13-8672-cf7524f72e2d] Running
	I0703 23:08:22.725465   27242 system_pods.go:89] "coredns-7db6d8ff4d-pwqfl" [b4d22edf-e718-4755-b211-c8279481005e] Running
	I0703 23:08:22.725470   27242 system_pods.go:89] "etcd-ha-856893" [6b7ae8d7-b953-4a2a-8745-d672d5ef800d] Running
	I0703 23:08:22.725474   27242 system_pods.go:89] "etcd-ha-856893-m02" [c84132ba-b9c8-491c-a50d-ebe97039cb4a] Running
	I0703 23:08:22.725478   27242 system_pods.go:89] "etcd-ha-856893-m03" [5fb85989-093c-4239-a17e-761ac8c2f88c] Running
	I0703 23:08:22.725481   27242 system_pods.go:89] "kindnet-h7ntk" [18e6d992-2713-4399-a160-5f9196981f26] Running
	I0703 23:08:22.725485   27242 system_pods.go:89] "kindnet-rwqsq" [438780e1-32f3-4b9f-a85e-b6a9eeac268f] Running
	I0703 23:08:22.725489   27242 system_pods.go:89] "kindnet-vtd2b" [08f88183-a2c6-48b4-a14e-1c70ed08407a] Running
	I0703 23:08:22.725494   27242 system_pods.go:89] "kube-apiserver-ha-856893" [1818612d-9082-49ba-863e-25d530ad2893] Running
	I0703 23:08:22.725498   27242 system_pods.go:89] "kube-apiserver-ha-856893-m02" [09914e18-0b79-4722-9b14-a4b70d5ad800] Running
	I0703 23:08:22.725502   27242 system_pods.go:89] "kube-apiserver-ha-856893-m03" [d5ffdc07-8246-4c1b-848b-d103b69c96af] Running
	I0703 23:08:22.725506   27242 system_pods.go:89] "kube-controller-manager-ha-856893" [4906346a-6b42-4c32-9ea0-8cd61b06580b] Running
	I0703 23:08:22.725510   27242 system_pods.go:89] "kube-controller-manager-ha-856893-m02" [e90cd411-c185-4652-8e21-6fd48fb2b5d1] Running
	I0703 23:08:22.725515   27242 system_pods.go:89] "kube-controller-manager-ha-856893-m03" [71730b30-6db1-4376-931d-adb83ec87278] Running
	I0703 23:08:22.725519   27242 system_pods.go:89] "kube-proxy-52zqj" [7cbc16d2-e9f6-487f-a974-0fa21e4163b5] Running
	I0703 23:08:22.725523   27242 system_pods.go:89] "kube-proxy-gkwrn" [5fefb775-6224-47be-ad73-443c957c8f69] Running
	I0703 23:08:22.725526   27242 system_pods.go:89] "kube-proxy-stq26" [55db1583-2020-4a52-ab80-2f92ab63463b] Running
	I0703 23:08:22.725530   27242 system_pods.go:89] "kube-scheduler-ha-856893" [a3f43eb4-e248-41a0-b86c-0564becadc2b] Running
	I0703 23:08:22.725535   27242 system_pods.go:89] "kube-scheduler-ha-856893-m02" [02f862e9-2b93-4f74-847f-32d753dd9456] Running
	I0703 23:08:22.725539   27242 system_pods.go:89] "kube-scheduler-ha-856893-m03" [5ebea99b-ad4c-414f-a5a2-6501823bfc22] Running
	I0703 23:08:22.725546   27242 system_pods.go:89] "kube-vip-ha-856893" [0c4a20fd-99f2-4d6a-a332-2a79e4431b88] Running
	I0703 23:08:22.725549   27242 system_pods.go:89] "kube-vip-ha-856893-m02" [560fb438-21bf-45bb-9cf0-38dd21b61c80] Running
	I0703 23:08:22.725552   27242 system_pods.go:89] "kube-vip-ha-856893-m03" [a4a2c5c7-c2c9-4910-8716-9f22a9a50611] Running
	I0703 23:08:22.725556   27242 system_pods.go:89] "storage-provisioner" [91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8] Running
	I0703 23:08:22.725561   27242 system_pods.go:126] duration metric: took 212.662262ms to wait for k8s-apps to be running ...
	I0703 23:08:22.725571   27242 system_svc.go:44] waiting for kubelet service to be running ....
	I0703 23:08:22.725617   27242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:08:22.742416   27242 system_svc.go:56] duration metric: took 16.833939ms WaitForService to wait for kubelet
	I0703 23:08:22.742456   27242 kubeadm.go:576] duration metric: took 19.122705878s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 23:08:22.742497   27242 node_conditions.go:102] verifying NodePressure condition ...
	I0703 23:08:22.909819   27242 request.go:629] Waited for 167.220159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes
	I0703 23:08:22.909873   27242 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes
	I0703 23:08:22.909878   27242 round_trippers.go:469] Request Headers:
	I0703 23:08:22.909886   27242 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0703 23:08:22.909890   27242 round_trippers.go:473]     Accept: application/json, */*
	I0703 23:08:22.914023   27242 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0703 23:08:22.915479   27242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 23:08:22.915513   27242 node_conditions.go:123] node cpu capacity is 2
	I0703 23:08:22.915537   27242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 23:08:22.915544   27242 node_conditions.go:123] node cpu capacity is 2
	I0703 23:08:22.915548   27242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 23:08:22.915554   27242 node_conditions.go:123] node cpu capacity is 2
	I0703 23:08:22.915559   27242 node_conditions.go:105] duration metric: took 173.056283ms to run NodePressure ...
	I0703 23:08:22.915576   27242 start.go:240] waiting for startup goroutines ...
	I0703 23:08:22.915610   27242 start.go:254] writing updated cluster config ...
	I0703 23:08:22.916020   27242 ssh_runner.go:195] Run: rm -f paused
	I0703 23:08:22.974944   27242 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0703 23:08:22.976700   27242 out.go:177] * Done! kubectl is now configured to use "ha-856893" cluster and "default" namespace by default
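
The wait phases logged above (system_pods, default service account, k8s-apps, NodePressure) are all ordinary Kubernetes API polls, which is why the client-side throttling messages appear between them. As a rough illustration only, and not minikube's actual implementation, the Go sketch below uses client-go to poll kube-system pods until every pod reports phase Running; the kubeconfig path and the two-minute deadline are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; minikube writes its own context here by default.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		// Same request shape as the GET /api/v1/namespaces/kube-system/pods calls above.
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					break
				}
			}
			if allRunning {
				fmt.Printf("%d kube-system pods running\n", len(pods.Items))
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for kube-system pods")
}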
	
	
	==> CRI-O <==
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.879338120Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0abc616-c0d5-4c29-a270-1a793acc5c16 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.879947457Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,Verbose:false,}" file="otel-collector/interceptors.go:62" id=e36ad7a4-ee18-412c-83de-2e9d7d960857 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.879972117Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720048327879948482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0abc616-c0d5-4c29-a270-1a793acc5c16 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.880239410Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1720047921686086502,StartedAt:1720047921803977699,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.12-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/7891a98db30710828591ae5169d05ec2/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/7891a98db30710828591ae5169d05ec2/containers/etcd/a205d51b,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-ha-856893_7891a9
8db30710828591ae5169d05ec2/etcd/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=e36ad7a4-ee18-412c-83de-2e9d7d960857 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.881147467Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2eed6346-6950-4386-a006-d6627388c4b8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.881359954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2eed6346-6950-4386-a006-d6627388c4b8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.881281693Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:4c379ddaf9a499d07429805170dc12cb5a1dc67dcb956fe90f1fa68d12530112,Verbose:false,}" file="otel-collector/interceptors.go:62" id=212c4c8d-b5c9-4695-ae69-e5c3a364cf5f name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.881839359Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:4c379ddaf9a499d07429805170dc12cb5a1dc67dcb956fe90f1fa68d12530112,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1720047921664415026,StartedAt:1720047921816330546,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.30.2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/18fee9f6b7b1f394539107bfaf70ec2c/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/18fee9f6b7b1f394539107bfaf70ec2c/containers/kube-apiserver/3f6fa49d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/
minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-ha-856893_18fee9f6b7b1f394539107bfaf70ec2c/kube-apiserver/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=212c4c8d-b5c9-4695-ae69-e5c3a364cf5f name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.882041642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5f2f09a864e4e5d46640be63d6a9d8f2d281cd3cce2631529f1fa6c5c19ead,PodSandboxId:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720048107151194001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernetes.container.hash: c94081f5,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54,PodSandboxId:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973272089478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691,PodSandboxId:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973246300187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e953066d642d95ea246b7978e66ebdb8247c588a239295bc0583d9be214d29,PodSandboxId:b1df838b768efce71daa9de41505a294ee70a6093019557a59aaa55a14c3fc0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1720047973201617835,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599,PodSandboxId:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720047942
480279746,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5bd1ae2892ab764deb7e34a3a3c3674097e40cba7bf1814bcd3086fc221f71,PodSandboxId:fcb5b2ab8ad58b672e638c6b9da63d751cbdfe3fd0e6bb37e3a23aea1d820f5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720047941392249439,Labels:map[string]stri
ng{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c81f0becbc3b6b33b1077d73eb4737ab22fbf870d53cf57b8cbfb88c2b7389e,PodSandboxId:ade6e7c92cc8277b503dd87c43275d73529d0ee66941939117baf2e511d016ec,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720047925155312936,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb5af725f761355c024282f684e2eaaf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:227a9a4176778afb3428cff8333cb0265f741d600f6fab7c86b069a27619893e,PodSandboxId:78f6147e8fcf30507346cd87408d724ee33acc5f36d9058c3c2df378780de214,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720047921657803736,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0,PodSandboxId:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720047921587666055,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,PodSandboxId:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720047921542484183,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c379ddaf9a499d07429805170dc12cb5a1dc67dcb956fe90f1fa68d12530112,PodSandboxId:3f446507b3eb8a27dcf6453ca5dc495e1676669ca6fc52a5fec1d424bab1cb68,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720047921541248993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2eed6346-6950-4386-a006-d6627388c4b8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.893619562Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=bdafc0d1-dc00-401d-961c-a65e02c1265e name=/runtime.v1.RuntimeService/Status
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.893725272Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=bdafc0d1-dc00-401d-961c-a65e02c1265e name=/runtime.v1.RuntimeService/Status
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.925160294Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e5b3be0d-2648-4c6a-b4af-19e3ae35879a name=/runtime.v1.RuntimeService/Version
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.925256377Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e5b3be0d-2648-4c6a-b4af-19e3ae35879a name=/runtime.v1.RuntimeService/Version
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.926644472Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36e054a3-194c-4f5b-a38f-b8a04bae896b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.927119464Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720048327927098210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36e054a3-194c-4f5b-a38f-b8a04bae896b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.927893613Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=574221a6-dcb8-453c-9844-40472f04f9e5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.927972636Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=574221a6-dcb8-453c-9844-40472f04f9e5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.928203966Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5f2f09a864e4e5d46640be63d6a9d8f2d281cd3cce2631529f1fa6c5c19ead,PodSandboxId:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720048107151194001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernetes.container.hash: c94081f5,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54,PodSandboxId:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973272089478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691,PodSandboxId:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973246300187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e953066d642d95ea246b7978e66ebdb8247c588a239295bc0583d9be214d29,PodSandboxId:b1df838b768efce71daa9de41505a294ee70a6093019557a59aaa55a14c3fc0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1720047973201617835,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599,PodSandboxId:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720047942
480279746,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5bd1ae2892ab764deb7e34a3a3c3674097e40cba7bf1814bcd3086fc221f71,PodSandboxId:fcb5b2ab8ad58b672e638c6b9da63d751cbdfe3fd0e6bb37e3a23aea1d820f5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720047941392249439,Labels:map[string]stri
ng{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c81f0becbc3b6b33b1077d73eb4737ab22fbf870d53cf57b8cbfb88c2b7389e,PodSandboxId:ade6e7c92cc8277b503dd87c43275d73529d0ee66941939117baf2e511d016ec,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720047925155312936,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb5af725f761355c024282f684e2eaaf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:227a9a4176778afb3428cff8333cb0265f741d600f6fab7c86b069a27619893e,PodSandboxId:78f6147e8fcf30507346cd87408d724ee33acc5f36d9058c3c2df378780de214,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720047921657803736,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0,PodSandboxId:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720047921587666055,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,PodSandboxId:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720047921542484183,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c379ddaf9a499d07429805170dc12cb5a1dc67dcb956fe90f1fa68d12530112,PodSandboxId:3f446507b3eb8a27dcf6453ca5dc495e1676669ca6fc52a5fec1d424bab1cb68,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720047921541248993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=574221a6-dcb8-453c-9844-40472f04f9e5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.968798298Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=17258d64-e81b-410c-9251-464552fb65c8 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.968892619Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=17258d64-e81b-410c-9251-464552fb65c8 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.970457661Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d99a1a6-69d7-4d43-a295-af776e53ea11 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.971118199Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720048327971093136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d99a1a6-69d7-4d43-a295-af776e53ea11 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.971852411Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1985fa8c-5253-4594-ad07-8d3aa17ac339 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.971909037Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1985fa8c-5253-4594-ad07-8d3aa17ac339 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:12:07 ha-856893 crio[680]: time="2024-07-03 23:12:07.972132471Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5f2f09a864e4e5d46640be63d6a9d8f2d281cd3cce2631529f1fa6c5c19ead,PodSandboxId:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720048107151194001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernetes.container.hash: c94081f5,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54,PodSandboxId:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973272089478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691,PodSandboxId:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720047973246300187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e953066d642d95ea246b7978e66ebdb8247c588a239295bc0583d9be214d29,PodSandboxId:b1df838b768efce71daa9de41505a294ee70a6093019557a59aaa55a14c3fc0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1720047973201617835,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599,PodSandboxId:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720047942
480279746,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5bd1ae2892ab764deb7e34a3a3c3674097e40cba7bf1814bcd3086fc221f71,PodSandboxId:fcb5b2ab8ad58b672e638c6b9da63d751cbdfe3fd0e6bb37e3a23aea1d820f5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720047941392249439,Labels:map[string]stri
ng{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c81f0becbc3b6b33b1077d73eb4737ab22fbf870d53cf57b8cbfb88c2b7389e,PodSandboxId:ade6e7c92cc8277b503dd87c43275d73529d0ee66941939117baf2e511d016ec,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720047925155312936,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb5af725f761355c024282f684e2eaaf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:227a9a4176778afb3428cff8333cb0265f741d600f6fab7c86b069a27619893e,PodSandboxId:78f6147e8fcf30507346cd87408d724ee33acc5f36d9058c3c2df378780de214,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720047921657803736,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0,PodSandboxId:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720047921587666055,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,PodSandboxId:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720047921542484183,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c379ddaf9a499d07429805170dc12cb5a1dc67dcb956fe90f1fa68d12530112,PodSandboxId:3f446507b3eb8a27dcf6453ca5dc495e1676669ca6fc52a5fec1d424bab1cb68,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720047921541248993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1985fa8c-5253-4594-ad07-8d3aa17ac339 name=/runtime.v1.RuntimeService/ListContainers
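
The CRI-O debug entries above are the runtime's view of the kubelet polling the CRI v1 RuntimeService and ImageService (Version, ImageFsInfo, ListContainers with no filters). A minimal sketch of the same two RuntimeService calls from Go follows; it assumes CRI-O's default socket path and mirrors what crictl does, rather than being part of the test harness.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Assumed socket path; CRI-O's default on this VM image.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Same RPC that produced the "&VersionRequest{}" debug entries above.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

	// Same RPC as "&ListContainersRequest{...}" with no filters applied.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}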
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2d5f2f09a864e       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2add57c6feb6d       busybox-fc5497c4f-hh5rx
	4b327b3ea68a5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   52adb03e9908b       coredns-7db6d8ff4d-n5tdf
	ebac8426f222e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   75824b8079291       coredns-7db6d8ff4d-pwqfl
	e5e953066d642       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   b1df838b768ef       storage-provisioner
	aea86e5699e84       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      6 minutes ago       Running             kube-proxy                0                   17315e93de095       kube-proxy-52zqj
	7a5bd1ae2892a       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      6 minutes ago       Running             kindnet-cni               0                   fcb5b2ab8ad58       kindnet-h7ntk
	4c81f0becbc3b       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   ade6e7c92cc82       kube-vip-ha-856893
	227a9a4176778       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      6 minutes ago       Running             kube-controller-manager   0                   78f6147e8fcf3       kube-controller-manager-ha-856893
	8ed8443e8784d       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      6 minutes ago       Running             kube-scheduler            0                   a50d015125505       kube-scheduler-ha-856893
	194253df10dfc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   bbcc0c1ac6390       etcd-ha-856893
	4c379ddaf9a49       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      6 minutes ago       Running             kube-apiserver            0                   3f446507b3eb8       kube-apiserver-ha-856893
	
	
	==> coredns [4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54] <==
	[INFO] 10.244.0.4:50532 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072272s
	[INFO] 10.244.0.4:38183 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100508s
	[INFO] 10.244.0.4:40014 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049781s
	[INFO] 10.244.1.2:43357 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134408s
	[INFO] 10.244.1.2:33336 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002000185s
	[INFO] 10.244.1.2:43589 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174137s
	[INFO] 10.244.1.2:49376 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106729s
	[INFO] 10.244.1.2:51691 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000271033s
	[INFO] 10.244.2.2:40310 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117383s
	[INFO] 10.244.2.2:38408 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011442s
	[INFO] 10.244.2.2:53461 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080741s
	[INFO] 10.244.0.4:60751 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020875s
	[INFO] 10.244.0.4:42746 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083559s
	[INFO] 10.244.1.2:46618 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00026488s
	[INFO] 10.244.1.2:46816 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095128s
	[INFO] 10.244.2.2:35755 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141347s
	[INFO] 10.244.2.2:37226 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000441904s
	[INFO] 10.244.2.2:56990 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000123934s
	[INFO] 10.244.0.4:33260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228783s
	[INFO] 10.244.0.4:40825 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089557s
	[INFO] 10.244.0.4:36029 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000284159s
	[INFO] 10.244.0.4:38025 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069908s
	[INFO] 10.244.1.2:33505 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000516657s
	[INFO] 10.244.1.2:51760 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106766s
	[INFO] 10.244.1.2:48924 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111713s
	
	
	==> coredns [ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41451 - 39576 "HINFO IN 3941637866052819197.8807026029404487185. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013851694s
	[INFO] 10.244.2.2:52714 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.014862182s
	[INFO] 10.244.0.4:48924 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001898144s
	[INFO] 10.244.1.2:38357 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235864s
	[INFO] 10.244.1.2:52654 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000207162s
	[INFO] 10.244.2.2:38149 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003994489s
	[INFO] 10.244.2.2:37323 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162805s
	[INFO] 10.244.2.2:37370 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000170597s
	[INFO] 10.244.0.4:39154 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140397s
	[INFO] 10.244.0.4:39807 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002148429s
	[INFO] 10.244.0.4:52421 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189952s
	[INFO] 10.244.0.4:32927 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001716905s
	[INFO] 10.244.0.4:37077 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064503s
	[INFO] 10.244.1.2:53622 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138056s
	[INFO] 10.244.1.2:56863 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001413025s
	[INFO] 10.244.1.2:33669 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000289179s
	[INFO] 10.244.2.2:46390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141967s
	[INFO] 10.244.0.4:47937 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126136s
	[INFO] 10.244.0.4:40258 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058689s
	[INFO] 10.244.1.2:34579 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112137s
	[INFO] 10.244.1.2:43318 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087441s
	[INFO] 10.244.2.2:44839 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000154015s
	[INFO] 10.244.1.2:49628 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158345s
	
	
	==> describe nodes <==
	Name:               ha-856893
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-856893
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=ha-856893
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_03T23_05_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:05:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-856893
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:12:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:08:31 +0000   Wed, 03 Jul 2024 23:05:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:08:31 +0000   Wed, 03 Jul 2024 23:05:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:08:31 +0000   Wed, 03 Jul 2024 23:05:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:08:31 +0000   Wed, 03 Jul 2024 23:06:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    ha-856893
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a26831b612bd459ca285f71afd0636da
	  System UUID:                a26831b6-12bd-459c-a285-f71afd0636da
	  Boot ID:                    60d1e076-9358-4d45-bf73-662df78ab1a7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hh5rx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 coredns-7db6d8ff4d-n5tdf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m28s
	  kube-system                 coredns-7db6d8ff4d-pwqfl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m27s
	  kube-system                 etcd-ha-856893                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m41s
	  kube-system                 kindnet-h7ntk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m28s
	  kube-system                 kube-apiserver-ha-856893             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 kube-controller-manager-ha-856893    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 kube-proxy-52zqj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-scheduler-ha-856893             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 kube-vip-ha-856893                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m25s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m48s (x7 over 6m48s)  kubelet          Node ha-856893 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m48s (x8 over 6m48s)  kubelet          Node ha-856893 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m48s (x8 over 6m48s)  kubelet          Node ha-856893 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m41s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m41s                  kubelet          Node ha-856893 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m41s                  kubelet          Node ha-856893 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m41s                  kubelet          Node ha-856893 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m28s                  node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	  Normal  NodeReady                5m56s                  kubelet          Node ha-856893 status is now: NodeReady
	  Normal  RegisteredNode           5m6s                   node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	
	
	Name:               ha-856893-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-856893-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=ha-856893
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_03T23_06_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:06:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-856893-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:09:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 03 Jul 2024 23:08:46 +0000   Wed, 03 Jul 2024 23:10:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 03 Jul 2024 23:08:46 +0000   Wed, 03 Jul 2024 23:10:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 03 Jul 2024 23:08:46 +0000   Wed, 03 Jul 2024 23:10:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 03 Jul 2024 23:08:46 +0000   Wed, 03 Jul 2024 23:10:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.157
	  Hostname:    ha-856893-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 109978f2ea4c4f42a5d187826750c850
	  System UUID:                109978f2-ea4c-4f42-a5d1-87826750c850
	  Boot ID:                    994539c8-7107-4cbf-a682-2c196e1b4de5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-n7rvj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 etcd-ha-856893-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m22s
	  kube-system                 kindnet-rwqsq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m24s
	  kube-system                 kube-apiserver-ha-856893-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-controller-manager-ha-856893-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-proxy-gkwrn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-scheduler-ha-856893-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-vip-ha-856893-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m24s (x8 over 5m24s)  kubelet          Node ha-856893-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m24s (x8 over 5m24s)  kubelet          Node ha-856893-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m24s (x7 over 5m24s)  kubelet          Node ha-856893-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m23s                  node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  RegisteredNode           5m6s                   node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  NodeNotReady             2m                     node-controller  Node ha-856893-m02 status is now: NodeNotReady
	
	
	Name:               ha-856893-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-856893-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=ha-856893
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_03T23_08_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:07:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-856893-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:12:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:08:30 +0000   Wed, 03 Jul 2024 23:07:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:08:30 +0000   Wed, 03 Jul 2024 23:07:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:08:30 +0000   Wed, 03 Jul 2024 23:07:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:08:30 +0000   Wed, 03 Jul 2024 23:08:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    ha-856893-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1e4eaaaf3da41a390e7e93c4c9b6dd0
	  System UUID:                a1e4eaaa-f3da-41a3-90e7-e93c4c9b6dd0
	  Boot ID:                    714f8b3c-0219-40be-b96e-5e103d064c96
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bt646                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 etcd-ha-856893-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m7s
	  kube-system                 kindnet-vtd2b                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m9s
	  kube-system                 kube-apiserver-ha-856893-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-ha-856893-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-proxy-stq26                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-ha-856893-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-vip-ha-856893-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m8s                 node-controller  Node ha-856893-m03 event: Registered Node ha-856893-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m9s)  kubelet          Node ha-856893-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m9s)  kubelet          Node ha-856893-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m9s)  kubelet          Node ha-856893-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-856893-m03 event: Registered Node ha-856893-m03 in Controller
	  Normal  RegisteredNode           3m50s                node-controller  Node ha-856893-m03 event: Registered Node ha-856893-m03 in Controller
	
	
	Name:               ha-856893-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-856893-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=ha-856893
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_03T23_09_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:09:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-856893-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:12:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:09:35 +0000   Wed, 03 Jul 2024 23:09:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:09:35 +0000   Wed, 03 Jul 2024 23:09:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:09:35 +0000   Wed, 03 Jul 2024 23:09:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:09:35 +0000   Wed, 03 Jul 2024 23:09:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    ha-856893-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3705f72ac66415f90e310971654b6b5
	  System UUID:                f3705f72-ac66-415f-90e3-10971654b6b5
	  Boot ID:                    b99153db-d083-4d53-8f7d-792d32c1186e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5kksq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m4s
	  kube-system                 kube-proxy-brfsv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m4s (x2 over 3m4s)  kubelet          Node ha-856893-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x2 over 3m4s)  kubelet          Node ha-856893-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x2 over 3m4s)  kubelet          Node ha-856893-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal  NodeReady                2m54s                kubelet          Node ha-856893-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul 3 23:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050985] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040387] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.593398] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.343269] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Jul 3 23:05] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.908066] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.058276] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065122] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.220079] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.126395] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.300940] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.506884] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.061467] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.368826] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +4.919640] kauditd_printk_skb: 102 callbacks suppressed
	[  +2.254448] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +6.249182] kauditd_printk_skb: 23 callbacks suppressed
	[Jul 3 23:06] kauditd_printk_skb: 38 callbacks suppressed
	[  +9.915119] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb] <==
	{"level":"warn","ts":"2024-07-03T23:12:08.29304Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.297418Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.303201Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.310586Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.321943Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.332819Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.337043Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.342101Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.357543Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.36945Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.377255Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.378798Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.383374Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.388876Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.39858Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.403187Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.405566Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.412716Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.416831Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.420668Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.427541Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.434223Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.442486Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.503954Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:12:08.516066Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:12:08 up 7 min,  0 users,  load average: 0.36, 0.23, 0.11
	Linux ha-856893 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7a5bd1ae2892ab764deb7e34a3a3c3674097e40cba7bf1814bcd3086fc221f71] <==
	I0703 23:11:32.556674       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	I0703 23:11:42.565479       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0703 23:11:42.565528       1 main.go:227] handling current node
	I0703 23:11:42.565539       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0703 23:11:42.565544       1 main.go:250] Node ha-856893-m02 has CIDR [10.244.1.0/24] 
	I0703 23:11:42.565649       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0703 23:11:42.565675       1 main.go:250] Node ha-856893-m03 has CIDR [10.244.2.0/24] 
	I0703 23:11:42.565718       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I0703 23:11:42.565783       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	I0703 23:11:52.579240       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0703 23:11:52.579293       1 main.go:227] handling current node
	I0703 23:11:52.579307       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0703 23:11:52.579313       1 main.go:250] Node ha-856893-m02 has CIDR [10.244.1.0/24] 
	I0703 23:11:52.579448       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0703 23:11:52.579453       1 main.go:250] Node ha-856893-m03 has CIDR [10.244.2.0/24] 
	I0703 23:11:52.579500       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I0703 23:11:52.579559       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	I0703 23:12:02.597144       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0703 23:12:02.597307       1 main.go:227] handling current node
	I0703 23:12:02.597360       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0703 23:12:02.597387       1 main.go:250] Node ha-856893-m02 has CIDR [10.244.1.0/24] 
	I0703 23:12:02.597634       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0703 23:12:02.597675       1 main.go:250] Node ha-856893-m03 has CIDR [10.244.2.0/24] 
	I0703 23:12:02.597822       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I0703 23:12:02.597928       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4c379ddaf9a499d07429805170dc12cb5a1dc67dcb956fe90f1fa68d12530112] <==
	I0703 23:05:27.803513       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0703 23:05:27.827801       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0703 23:05:27.842963       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0703 23:05:40.487913       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0703 23:05:40.891672       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0703 23:06:47.177553       1 trace.go:236] Trace[1646404756]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d9eabe84-be40-4221-b01e-53771880f05a,client:192.168.39.157,api-group:,api-version:v1,name:kube-apiserver-ha-856893-m02,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-856893-m02/status,user-agent:kubelet/v1.30.2 (linux/amd64) kubernetes/3968350,verb:PATCH (03-Jul-2024 23:06:46.675) (total time: 501ms):
	Trace[1646404756]: ["GuaranteedUpdate etcd3" audit-id:d9eabe84-be40-4221-b01e-53771880f05a,key:/pods/kube-system/kube-apiserver-ha-856893-m02,type:*core.Pod,resource:pods 501ms (23:06:46.675)
	Trace[1646404756]:  ---"Txn call completed" 498ms (23:06:47.176)]
	Trace[1646404756]: ---"Object stored in database" 499ms (23:06:47.176)
	Trace[1646404756]: [501.97274ms] [501.97274ms] END
	E0703 23:08:29.714542       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55146: use of closed network connection
	E0703 23:08:29.907245       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55158: use of closed network connection
	E0703 23:08:30.109154       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55168: use of closed network connection
	E0703 23:08:30.308595       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55182: use of closed network connection
	E0703 23:08:30.506637       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55202: use of closed network connection
	E0703 23:08:30.710449       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55214: use of closed network connection
	E0703 23:08:30.897088       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55238: use of closed network connection
	E0703 23:08:31.115623       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55252: use of closed network connection
	E0703 23:08:31.340432       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55272: use of closed network connection
	E0703 23:08:31.646395       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55278: use of closed network connection
	E0703 23:08:31.818268       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43524: use of closed network connection
	E0703 23:08:32.008938       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43550: use of closed network connection
	E0703 23:08:32.189914       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43564: use of closed network connection
	E0703 23:08:32.384321       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43578: use of closed network connection
	E0703 23:08:32.569307       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43590: use of closed network connection
	
	
	==> kube-controller-manager [227a9a4176778afb3428cff8333cb0265f741d600f6fab7c86b069a27619893e] <==
	I0703 23:07:59.982610       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-856893-m03" podCIDRs=["10.244.2.0/24"]
	I0703 23:08:00.071837       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-856893-m03"
	I0703 23:08:23.943216       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.815373ms"
	I0703 23:08:23.984267       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.217424ms"
	I0703 23:08:24.186908       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="202.578607ms"
	I0703 23:08:24.331407       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="144.441291ms"
	I0703 23:08:24.387233       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.761398ms"
	I0703 23:08:24.387349       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.305µs"
	I0703 23:08:24.611572       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.619µs"
	I0703 23:08:27.553339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.523436ms"
	I0703 23:08:27.553458       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.361µs"
	I0703 23:08:28.204821       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.616391ms"
	I0703 23:08:28.204953       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.102µs"
	I0703 23:08:28.262137       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.537027ms"
	I0703 23:08:28.262534       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="188.007µs"
	I0703 23:08:29.243362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.267489ms"
	I0703 23:08:29.245302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.378µs"
	E0703 23:09:04.446073       1 certificate_controller.go:146] Sync csr-nzk25 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-nzk25": the object has been modified; please apply your changes to the latest version and try again
	I0703 23:09:04.725986       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-856893-m04\" does not exist"
	I0703 23:09:04.781392       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-856893-m04" podCIDRs=["10.244.3.0/24"]
	I0703 23:09:05.083864       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-856893-m04"
	I0703 23:09:14.677798       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-856893-m04"
	I0703 23:10:08.604690       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-856893-m04"
	I0703 23:10:08.783473       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.605177ms"
	I0703 23:10:08.783650       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.542µs"
	
	
	==> kube-proxy [aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599] <==
	I0703 23:05:42.648241       1 server_linux.go:69] "Using iptables proxy"
	I0703 23:05:42.660274       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.172"]
	I0703 23:05:42.701292       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0703 23:05:42.701358       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0703 23:05:42.701376       1 server_linux.go:165] "Using iptables Proxier"
	I0703 23:05:42.704275       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0703 23:05:42.704524       1 server.go:872] "Version info" version="v1.30.2"
	I0703 23:05:42.704553       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:05:42.708143       1 config.go:192] "Starting service config controller"
	I0703 23:05:42.708177       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0703 23:05:42.708224       1 config.go:101] "Starting endpoint slice config controller"
	I0703 23:05:42.708246       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0703 23:05:42.708724       1 config.go:319] "Starting node config controller"
	I0703 23:05:42.708810       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0703 23:05:42.808474       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0703 23:05:42.809810       1 shared_informer.go:320] Caches are synced for node config
	I0703 23:05:42.809889       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0] <==
	W0703 23:05:24.434535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0703 23:05:24.434550       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0703 23:05:25.261863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0703 23:05:25.261999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0703 23:05:25.269112       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0703 23:05:25.269265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0703 23:05:25.278628       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0703 23:05:25.279108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0703 23:05:25.396201       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0703 23:05:25.396448       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0703 23:05:25.396683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0703 23:05:25.396721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0703 23:05:25.414377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0703 23:05:25.414670       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0703 23:05:25.429406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0703 23:05:25.429583       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0703 23:05:25.523495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0703 23:05:25.523643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0703 23:05:25.721665       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0703 23:05:25.721726       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0703 23:05:27.616231       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0703 23:08:23.941598       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-bt646\": pod busybox-fc5497c4f-bt646 is already assigned to node \"ha-856893-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-bt646" node="ha-856893-m03"
	E0703 23:08:23.941843       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4ffbc91d-86d2-4096-8592-d570ee95c514(default/busybox-fc5497c4f-bt646) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-bt646"
	E0703 23:08:23.941901       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-bt646\": pod busybox-fc5497c4f-bt646 is already assigned to node \"ha-856893-m03\"" pod="default/busybox-fc5497c4f-bt646"
	I0703 23:08:23.941955       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-bt646" node="ha-856893-m03"
	
	
	==> kubelet <==
	Jul 03 23:07:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:07:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:07:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 23:08:23 ha-856893 kubelet[1363]: I0703 23:08:23.915134    1363 topology_manager.go:215] "Topology Admit Handler" podUID="1e907d89-dcf0-4e2d-bf2d-812d38932e86" podNamespace="default" podName="busybox-fc5497c4f-hh5rx"
	Jul 03 23:08:23 ha-856893 kubelet[1363]: I0703 23:08:23.944135    1363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b7w4\" (UniqueName: \"kubernetes.io/projected/1e907d89-dcf0-4e2d-bf2d-812d38932e86-kube-api-access-5b7w4\") pod \"busybox-fc5497c4f-hh5rx\" (UID: \"1e907d89-dcf0-4e2d-bf2d-812d38932e86\") " pod="default/busybox-fc5497c4f-hh5rx"
	Jul 03 23:08:27 ha-856893 kubelet[1363]: E0703 23:08:27.752219    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:08:27 ha-856893 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:08:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:08:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:08:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 23:09:27 ha-856893 kubelet[1363]: E0703 23:09:27.751305    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:09:27 ha-856893 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:09:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:09:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:09:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 23:10:27 ha-856893 kubelet[1363]: E0703 23:10:27.755235    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:10:27 ha-856893 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:10:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:10:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:10:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 23:11:27 ha-856893 kubelet[1363]: E0703 23:11:27.756589    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:11:27 ha-856893 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:11:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:11:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:11:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-856893 -n ha-856893
helpers_test.go:261: (dbg) Run:  kubectl --context ha-856893 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.32s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (375.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-856893 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-856893 -v=7 --alsologtostderr
E0703 23:13:57.358009   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-856893 -v=7 --alsologtostderr: exit status 82 (2m1.908926038s)

                                                
                                                
-- stdout --
	* Stopping node "ha-856893-m04"  ...
	* Stopping node "ha-856893-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 23:12:13.594857   32447 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:12:13.595101   32447 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:12:13.595110   32447 out.go:304] Setting ErrFile to fd 2...
	I0703 23:12:13.595115   32447 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:12:13.595331   32447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 23:12:13.595619   32447 out.go:298] Setting JSON to false
	I0703 23:12:13.595752   32447 mustload.go:65] Loading cluster: ha-856893
	I0703 23:12:13.596195   32447 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:12:13.596292   32447 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:12:13.596477   32447 mustload.go:65] Loading cluster: ha-856893
	I0703 23:12:13.596606   32447 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:12:13.596631   32447 stop.go:39] StopHost: ha-856893-m04
	I0703 23:12:13.597052   32447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:12:13.597089   32447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:12:13.611468   32447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44617
	I0703 23:12:13.611961   32447 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:12:13.612626   32447 main.go:141] libmachine: Using API Version  1
	I0703 23:12:13.612659   32447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:12:13.613005   32447 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:12:13.615810   32447 out.go:177] * Stopping node "ha-856893-m04"  ...
	I0703 23:12:13.617070   32447 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0703 23:12:13.617122   32447 main.go:141] libmachine: (ha-856893-m04) Calling .DriverName
	I0703 23:12:13.617394   32447 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0703 23:12:13.617426   32447 main.go:141] libmachine: (ha-856893-m04) Calling .GetSSHHostname
	I0703 23:12:13.620513   32447 main.go:141] libmachine: (ha-856893-m04) DBG | domain ha-856893-m04 has defined MAC address 52:54:00:e6:5d:92 in network mk-ha-856893
	I0703 23:12:13.620959   32447 main.go:141] libmachine: (ha-856893-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:5d:92", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:08:47 +0000 UTC Type:0 Mac:52:54:00:e6:5d:92 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-856893-m04 Clientid:01:52:54:00:e6:5d:92}
	I0703 23:12:13.620998   32447 main.go:141] libmachine: (ha-856893-m04) DBG | domain ha-856893-m04 has defined IP address 192.168.39.195 and MAC address 52:54:00:e6:5d:92 in network mk-ha-856893
	I0703 23:12:13.621176   32447 main.go:141] libmachine: (ha-856893-m04) Calling .GetSSHPort
	I0703 23:12:13.621371   32447 main.go:141] libmachine: (ha-856893-m04) Calling .GetSSHKeyPath
	I0703 23:12:13.621546   32447 main.go:141] libmachine: (ha-856893-m04) Calling .GetSSHUsername
	I0703 23:12:13.621689   32447 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m04/id_rsa Username:docker}
	I0703 23:12:13.714462   32447 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0703 23:12:13.770734   32447 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0703 23:12:13.825796   32447 main.go:141] libmachine: Stopping "ha-856893-m04"...
	I0703 23:12:13.825825   32447 main.go:141] libmachine: (ha-856893-m04) Calling .GetState
	I0703 23:12:13.827446   32447 main.go:141] libmachine: (ha-856893-m04) Calling .Stop
	I0703 23:12:13.831047   32447 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 0/120
	I0703 23:12:15.024981   32447 main.go:141] libmachine: (ha-856893-m04) Calling .GetState
	I0703 23:12:15.026267   32447 main.go:141] libmachine: Machine "ha-856893-m04" was stopped.
	I0703 23:12:15.026285   32447 stop.go:75] duration metric: took 1.409221243s to stop
	I0703 23:12:15.026318   32447 stop.go:39] StopHost: ha-856893-m03
	I0703 23:12:15.026605   32447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:12:15.026642   32447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:12:15.041448   32447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42339
	I0703 23:12:15.041907   32447 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:12:15.042550   32447 main.go:141] libmachine: Using API Version  1
	I0703 23:12:15.042577   32447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:12:15.042881   32447 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:12:15.044900   32447 out.go:177] * Stopping node "ha-856893-m03"  ...
	I0703 23:12:15.046238   32447 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0703 23:12:15.046277   32447 main.go:141] libmachine: (ha-856893-m03) Calling .DriverName
	I0703 23:12:15.046537   32447 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0703 23:12:15.046565   32447 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHHostname
	I0703 23:12:15.049777   32447 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:12:15.050264   32447 main.go:141] libmachine: (ha-856893-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:e8:37", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:07:21 +0000 UTC Type:0 Mac:52:54:00:cb:e8:37 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-856893-m03 Clientid:01:52:54:00:cb:e8:37}
	I0703 23:12:15.050293   32447 main.go:141] libmachine: (ha-856893-m03) DBG | domain ha-856893-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:cb:e8:37 in network mk-ha-856893
	I0703 23:12:15.050453   32447 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHPort
	I0703 23:12:15.050689   32447 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHKeyPath
	I0703 23:12:15.050905   32447 main.go:141] libmachine: (ha-856893-m03) Calling .GetSSHUsername
	I0703 23:12:15.051064   32447 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m03/id_rsa Username:docker}
	I0703 23:12:15.142078   32447 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0703 23:12:15.198166   32447 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0703 23:12:15.254072   32447 main.go:141] libmachine: Stopping "ha-856893-m03"...
	I0703 23:12:15.254094   32447 main.go:141] libmachine: (ha-856893-m03) Calling .GetState
	I0703 23:12:15.255707   32447 main.go:141] libmachine: (ha-856893-m03) Calling .Stop
	I0703 23:12:15.259364   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 0/120
	I0703 23:12:16.261666   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 1/120
	I0703 23:12:17.263035   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 2/120
	I0703 23:12:18.264306   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 3/120
	I0703 23:12:19.266417   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 4/120
	I0703 23:12:20.268764   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 5/120
	I0703 23:12:21.271307   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 6/120
	I0703 23:12:22.272692   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 7/120
	I0703 23:12:23.274953   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 8/120
	I0703 23:12:24.276710   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 9/120
	I0703 23:12:25.278957   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 10/120
	I0703 23:12:26.280659   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 11/120
	I0703 23:12:27.282067   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 12/120
	I0703 23:12:28.283653   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 13/120
	I0703 23:12:29.285067   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 14/120
	I0703 23:12:30.287141   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 15/120
	I0703 23:12:31.288645   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 16/120
	I0703 23:12:32.290141   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 17/120
	I0703 23:12:33.291744   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 18/120
	I0703 23:12:34.293318   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 19/120
	I0703 23:12:35.295392   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 20/120
	I0703 23:12:36.297029   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 21/120
	I0703 23:12:37.298397   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 22/120
	I0703 23:12:38.300252   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 23/120
	I0703 23:12:39.301633   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 24/120
	I0703 23:12:40.303719   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 25/120
	I0703 23:12:41.305155   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 26/120
	I0703 23:12:42.306722   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 27/120
	I0703 23:12:43.308854   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 28/120
	I0703 23:12:44.310332   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 29/120
	I0703 23:12:45.312033   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 30/120
	I0703 23:12:46.313734   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 31/120
	I0703 23:12:47.315137   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 32/120
	I0703 23:12:48.316665   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 33/120
	I0703 23:12:49.318074   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 34/120
	I0703 23:12:50.320023   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 35/120
	I0703 23:12:51.321391   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 36/120
	I0703 23:12:52.322647   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 37/120
	I0703 23:12:53.323958   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 38/120
	I0703 23:12:54.325271   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 39/120
	I0703 23:12:55.327607   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 40/120
	I0703 23:12:56.329141   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 41/120
	I0703 23:12:57.330500   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 42/120
	I0703 23:12:58.332251   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 43/120
	I0703 23:12:59.334377   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 44/120
	I0703 23:13:00.335851   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 45/120
	I0703 23:13:01.337330   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 46/120
	I0703 23:13:02.338589   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 47/120
	I0703 23:13:03.339905   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 48/120
	I0703 23:13:04.341182   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 49/120
	I0703 23:13:05.343214   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 50/120
	I0703 23:13:06.344557   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 51/120
	I0703 23:13:07.345998   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 52/120
	I0703 23:13:08.347239   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 53/120
	I0703 23:13:09.348626   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 54/120
	I0703 23:13:10.350350   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 55/120
	I0703 23:13:11.352522   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 56/120
	I0703 23:13:12.353774   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 57/120
	I0703 23:13:13.355163   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 58/120
	I0703 23:13:14.356435   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 59/120
	I0703 23:13:15.357991   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 60/120
	I0703 23:13:16.359462   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 61/120
	I0703 23:13:17.360663   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 62/120
	I0703 23:13:18.362324   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 63/120
	I0703 23:13:19.363680   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 64/120
	I0703 23:13:20.364972   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 65/120
	I0703 23:13:21.366238   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 66/120
	I0703 23:13:22.367579   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 67/120
	I0703 23:13:23.368890   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 68/120
	I0703 23:13:24.370097   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 69/120
	I0703 23:13:25.371735   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 70/120
	I0703 23:13:26.373151   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 71/120
	I0703 23:13:27.374875   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 72/120
	I0703 23:13:28.376295   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 73/120
	I0703 23:13:29.377727   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 74/120
	I0703 23:13:30.379518   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 75/120
	I0703 23:13:31.380996   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 76/120
	I0703 23:13:32.382365   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 77/120
	I0703 23:13:33.383628   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 78/120
	I0703 23:13:34.385738   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 79/120
	I0703 23:13:35.387422   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 80/120
	I0703 23:13:36.388869   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 81/120
	I0703 23:13:37.390335   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 82/120
	I0703 23:13:38.391900   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 83/120
	I0703 23:13:39.393279   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 84/120
	I0703 23:13:40.394856   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 85/120
	I0703 23:13:41.396197   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 86/120
	I0703 23:13:42.397531   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 87/120
	I0703 23:13:43.398846   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 88/120
	I0703 23:13:44.401080   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 89/120
	I0703 23:13:45.403054   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 90/120
	I0703 23:13:46.404297   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 91/120
	I0703 23:13:47.405731   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 92/120
	I0703 23:13:48.406968   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 93/120
	I0703 23:13:49.408713   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 94/120
	I0703 23:13:50.410320   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 95/120
	I0703 23:13:51.411636   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 96/120
	I0703 23:13:52.413090   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 97/120
	I0703 23:13:53.415588   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 98/120
	I0703 23:13:54.416976   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 99/120
	I0703 23:13:55.418884   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 100/120
	I0703 23:13:56.420530   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 101/120
	I0703 23:13:57.422338   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 102/120
	I0703 23:13:58.423933   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 103/120
	I0703 23:13:59.425217   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 104/120
	I0703 23:14:00.427356   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 105/120
	I0703 23:14:01.430333   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 106/120
	I0703 23:14:02.431968   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 107/120
	I0703 23:14:03.433367   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 108/120
	I0703 23:14:04.434815   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 109/120
	I0703 23:14:05.436293   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 110/120
	I0703 23:14:06.437839   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 111/120
	I0703 23:14:07.439138   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 112/120
	I0703 23:14:08.440585   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 113/120
	I0703 23:14:09.442549   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 114/120
	I0703 23:14:10.444058   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 115/120
	I0703 23:14:11.446359   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 116/120
	I0703 23:14:12.448711   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 117/120
	I0703 23:14:13.449932   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 118/120
	I0703 23:14:14.451341   32447 main.go:141] libmachine: (ha-856893-m03) Waiting for machine to stop 119/120
	I0703 23:14:15.452095   32447 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0703 23:14:15.452152   32447 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0703 23:14:15.454172   32447 out.go:177] 
	W0703 23:14:15.455718   32447 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0703 23:14:15.455729   32447 out.go:239] * 
	W0703 23:14:15.457910   32447 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0703 23:14:15.459396   32447 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-856893 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-856893 --wait=true -v=7 --alsologtostderr
E0703 23:14:25.047085   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
E0703 23:16:17.046682   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0703 23:17:40.098266   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-856893 --wait=true -v=7 --alsologtostderr: (4m10.897036738s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-856893
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-856893 -n ha-856893
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-856893 logs -n 25: (2.012273791s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-856893 cp ha-856893-m03:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m02:/home/docker/cp-test_ha-856893-m03_ha-856893-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893-m02 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m03_ha-856893-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m03:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04:/home/docker/cp-test_ha-856893-m03_ha-856893-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893-m04 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m03_ha-856893-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-856893 cp testdata/cp-test.txt                                               | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile600531838/001/cp-test_ha-856893-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893:/home/docker/cp-test_ha-856893-m04_ha-856893.txt                      |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893 sudo cat                                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m04_ha-856893.txt                                |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m02:/home/docker/cp-test_ha-856893-m04_ha-856893-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893-m02 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m04_ha-856893-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03:/home/docker/cp-test_ha-856893-m04_ha-856893-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893-m03 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m04_ha-856893-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-856893 node stop m02 -v=7                                                    | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-856893 node start m02 -v=7                                                   | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:12 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-856893 -v=7                                                          | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:12 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-856893 -v=7                                                               | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:12 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-856893 --wait=true -v=7                                                   | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:14 UTC | 03 Jul 24 23:18 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-856893                                                               | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:18 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 23:14:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 23:14:15.505195   32963 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:14:15.505421   32963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:14:15.505434   32963 out.go:304] Setting ErrFile to fd 2...
	I0703 23:14:15.505438   32963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:14:15.505622   32963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 23:14:15.506160   32963 out.go:298] Setting JSON to false
	I0703 23:14:15.507032   32963 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3395,"bootTime":1720045060,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:14:15.507088   32963 start.go:139] virtualization: kvm guest
	I0703 23:14:15.509480   32963 out.go:177] * [ha-856893] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 23:14:15.510960   32963 notify.go:220] Checking for updates...
	I0703 23:14:15.510984   32963 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 23:14:15.512418   32963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:14:15.513803   32963 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:14:15.515044   32963 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:14:15.516254   32963 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 23:14:15.517426   32963 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 23:14:15.518892   32963 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:14:15.518982   32963 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:14:15.519370   32963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:14:15.519418   32963 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:14:15.534605   32963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34409
	I0703 23:14:15.535077   32963 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:14:15.535690   32963 main.go:141] libmachine: Using API Version  1
	I0703 23:14:15.535712   32963 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:14:15.536160   32963 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:14:15.536353   32963 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:14:15.571249   32963 out.go:177] * Using the kvm2 driver based on existing profile
	I0703 23:14:15.572635   32963 start.go:297] selected driver: kvm2
	I0703 23:14:15.572648   32963 start.go:901] validating driver "kvm2" against &{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.195 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:14:15.572787   32963 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 23:14:15.573100   32963 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:14:15.573166   32963 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 23:14:15.589250   32963 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 23:14:15.589927   32963 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 23:14:15.589988   32963 cni.go:84] Creating CNI manager for ""
	I0703 23:14:15.589999   32963 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0703 23:14:15.590055   32963 start.go:340] cluster config:
	{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.195 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-ti
ller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:14:15.590162   32963 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:14:15.591898   32963 out.go:177] * Starting "ha-856893" primary control-plane node in "ha-856893" cluster
	I0703 23:14:15.593080   32963 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:14:15.593114   32963 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0703 23:14:15.593120   32963 cache.go:56] Caching tarball of preloaded images
	I0703 23:14:15.593190   32963 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:14:15.593200   32963 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 23:14:15.593322   32963 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:14:15.593515   32963 start.go:360] acquireMachinesLock for ha-856893: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 23:14:15.593553   32963 start.go:364] duration metric: took 22.508µs to acquireMachinesLock for "ha-856893"
	I0703 23:14:15.593566   32963 start.go:96] Skipping create...Using existing machine configuration
	I0703 23:14:15.593579   32963 fix.go:54] fixHost starting: 
	I0703 23:14:15.593852   32963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:14:15.593879   32963 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:14:15.608280   32963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38373
	I0703 23:14:15.608707   32963 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:14:15.609207   32963 main.go:141] libmachine: Using API Version  1
	I0703 23:14:15.609225   32963 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:14:15.609519   32963 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:14:15.609676   32963 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:14:15.609831   32963 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:14:15.611553   32963 fix.go:112] recreateIfNeeded on ha-856893: state=Running err=<nil>
	W0703 23:14:15.611576   32963 fix.go:138] unexpected machine state, will restart: <nil>
	I0703 23:14:15.614314   32963 out.go:177] * Updating the running kvm2 "ha-856893" VM ...
	I0703 23:14:15.615766   32963 machine.go:94] provisionDockerMachine start ...
	I0703 23:14:15.615791   32963 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:14:15.616099   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:14:15.618632   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.619043   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:14:15.619076   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.619251   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:14:15.619449   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:14:15.619627   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:14:15.619809   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:14:15.619994   32963 main.go:141] libmachine: Using SSH client type: native
	I0703 23:14:15.620209   32963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:14:15.620225   32963 main.go:141] libmachine: About to run SSH command:
	hostname
	I0703 23:14:15.733758   32963 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-856893
	
	I0703 23:14:15.733784   32963 main.go:141] libmachine: (ha-856893) Calling .GetMachineName
	I0703 23:14:15.734043   32963 buildroot.go:166] provisioning hostname "ha-856893"
	I0703 23:14:15.734067   32963 main.go:141] libmachine: (ha-856893) Calling .GetMachineName
	I0703 23:14:15.734233   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:14:15.736657   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.736959   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:14:15.736984   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.737089   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:14:15.737252   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:14:15.737428   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:14:15.737582   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:14:15.737727   32963 main.go:141] libmachine: Using SSH client type: native
	I0703 23:14:15.737883   32963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:14:15.737894   32963 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-856893 && echo "ha-856893" | sudo tee /etc/hostname
	I0703 23:14:15.864753   32963 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-856893
	
	I0703 23:14:15.864783   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:14:15.867338   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.867809   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:14:15.867842   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.868001   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:14:15.868182   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:14:15.868354   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:14:15.868514   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:14:15.868666   32963 main.go:141] libmachine: Using SSH client type: native
	I0703 23:14:15.868836   32963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:14:15.868858   32963 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-856893' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-856893/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-856893' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 23:14:15.977662   32963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:14:15.977689   32963 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0703 23:14:15.977720   32963 buildroot.go:174] setting up certificates
	I0703 23:14:15.977730   32963 provision.go:84] configureAuth start
	I0703 23:14:15.977737   32963 main.go:141] libmachine: (ha-856893) Calling .GetMachineName
	I0703 23:14:15.977994   32963 main.go:141] libmachine: (ha-856893) Calling .GetIP
	I0703 23:14:15.980883   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.981226   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:14:15.981255   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.981411   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:14:15.983677   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.984003   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:14:15.984037   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.984185   32963 provision.go:143] copyHostCerts
	I0703 23:14:15.984220   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:14:15.984281   32963 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0703 23:14:15.984297   32963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:14:15.984395   32963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0703 23:14:15.984483   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:14:15.984508   32963 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0703 23:14:15.984514   32963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:14:15.984550   32963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0703 23:14:15.984609   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:14:15.984631   32963 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0703 23:14:15.984639   32963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:14:15.984676   32963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0703 23:14:15.984740   32963 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.ha-856893 san=[127.0.0.1 192.168.39.172 ha-856893 localhost minikube]
	I0703 23:14:16.058974   32963 provision.go:177] copyRemoteCerts
	I0703 23:14:16.059025   32963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:14:16.059044   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:14:16.061936   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:16.062334   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:14:16.062362   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:16.062538   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:14:16.062759   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:14:16.062972   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:14:16.063133   32963 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:14:16.147890   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0703 23:14:16.147964   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0703 23:14:16.176035   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0703 23:14:16.176091   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0703 23:14:16.204703   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0703 23:14:16.204759   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0703 23:14:16.237017   32963 provision.go:87] duration metric: took 259.276547ms to configureAuth
	I0703 23:14:16.237049   32963 buildroot.go:189] setting minikube options for container-runtime
	I0703 23:14:16.237323   32963 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:14:16.237402   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:14:16.240332   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:16.240759   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:14:16.240780   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:16.240978   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:14:16.241153   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:14:16.241301   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:14:16.241425   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:14:16.241588   32963 main.go:141] libmachine: Using SSH client type: native
	I0703 23:14:16.241752   32963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:14:16.241776   32963 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0703 23:15:47.221094   32963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0703 23:15:47.221121   32963 machine.go:97] duration metric: took 1m31.605338399s to provisionDockerMachine
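Nearly all of the 1m31.605s recorded above for provisionDockerMachine elapses inside the single SSH command that writes /etc/sysconfig/crio.minikube and then runs systemctl restart crio (issued at 23:14:16, returned at 23:15:47). A minimal sketch of how that restart could be inspected on the node afterwards, assuming the minikube binary and the ha-856893 profile from this run; these commands are illustrative and not part of the test:

    # check whether cri-o came back cleanly and when it last (re)started
    minikube -p ha-856893 ssh "systemctl show crio -p ActiveState -p ExecMainStartTimestamp"
    # look for errors logged around the restart
    minikube -p ha-856893 ssh "sudo journalctl -u crio --no-pager | tail -n 50"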
	I0703 23:15:47.221135   32963 start.go:293] postStartSetup for "ha-856893" (driver="kvm2")
	I0703 23:15:47.221147   32963 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 23:15:47.221166   32963 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:15:47.221445   32963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 23:15:47.221468   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:15:47.224528   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.224968   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:15:47.224997   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.225143   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:15:47.225327   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:15:47.225455   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:15:47.225594   32963 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:15:47.312057   32963 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 23:15:47.316344   32963 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 23:15:47.316371   32963 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0703 23:15:47.316443   32963 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0703 23:15:47.316521   32963 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0703 23:15:47.316532   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /etc/ssl/certs/165742.pem
	I0703 23:15:47.316628   32963 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0703 23:15:47.328708   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:15:47.355915   32963 start.go:296] duration metric: took 134.766345ms for postStartSetup
	I0703 23:15:47.355954   32963 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:15:47.356294   32963 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0703 23:15:47.356342   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:15:47.358969   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.359525   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:15:47.359563   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.359717   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:15:47.359919   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:15:47.360109   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:15:47.360262   32963 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	W0703 23:15:47.443227   32963 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0703 23:15:47.443260   32963 fix.go:56] duration metric: took 1m31.849677841s for fixHost
	I0703 23:15:47.443284   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:15:47.446292   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.446743   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:15:47.446772   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.446939   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:15:47.447152   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:15:47.447322   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:15:47.447463   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:15:47.447620   32963 main.go:141] libmachine: Using SSH client type: native
	I0703 23:15:47.447817   32963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:15:47.447830   32963 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0703 23:15:47.556879   32963 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720048547.528498017
	
	I0703 23:15:47.556904   32963 fix.go:216] guest clock: 1720048547.528498017
	I0703 23:15:47.556910   32963 fix.go:229] Guest: 2024-07-03 23:15:47.528498017 +0000 UTC Remote: 2024-07-03 23:15:47.443267292 +0000 UTC m=+91.974175532 (delta=85.230725ms)
	I0703 23:15:47.556939   32963 fix.go:200] guest clock delta is within tolerance: 85.230725ms
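The fix.go lines above read the guest wall clock with date +%s.%N (rendered with %!s(MISSING) placeholders in the log), compare it with the host clock, and accept the 85.230725ms delta as within tolerance. A minimal sketch of the same comparison done by hand, assuming the ha-856893 profile; awk is used here only for the subtraction:

    # host timestamp, guest timestamp over SSH, and their difference
    host_ts=$(date +%s.%N)
    guest_ts=$(minikube -p ha-856893 ssh "date +%s.%N")
    awk -v g="$guest_ts" -v h="$host_ts" 'BEGIN{printf "clock delta: %.6f s\n", g-h}'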
	I0703 23:15:47.556944   32963 start.go:83] releasing machines lock for "ha-856893", held for 1m31.963382331s
	I0703 23:15:47.556970   32963 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:15:47.557225   32963 main.go:141] libmachine: (ha-856893) Calling .GetIP
	I0703 23:15:47.559537   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.559898   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:15:47.559930   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.560044   32963 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:15:47.560578   32963 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:15:47.560755   32963 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:15:47.560847   32963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 23:15:47.560887   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:15:47.561030   32963 ssh_runner.go:195] Run: cat /version.json
	I0703 23:15:47.561053   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:15:47.563290   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.563566   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:15:47.563588   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.563802   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:15:47.563817   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.563986   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:15:47.564128   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:15:47.564137   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:15:47.564162   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.564267   32963 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:15:47.564325   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:15:47.564460   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:15:47.564606   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:15:47.564737   32963 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:15:47.673980   32963 ssh_runner.go:195] Run: systemctl --version
	I0703 23:15:47.681930   32963 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0703 23:15:47.848558   32963 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0703 23:15:47.856647   32963 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 23:15:47.856715   32963 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 23:15:47.867396   32963 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0703 23:15:47.867424   32963 start.go:494] detecting cgroup driver to use...
	I0703 23:15:47.867491   32963 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 23:15:47.886645   32963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 23:15:47.901863   32963 docker.go:217] disabling cri-docker service (if available) ...
	I0703 23:15:47.901937   32963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0703 23:15:47.917569   32963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0703 23:15:47.932869   32963 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0703 23:15:48.094073   32963 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0703 23:15:48.245795   32963 docker.go:233] disabling docker service ...
	I0703 23:15:48.245857   32963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0703 23:15:48.265791   32963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0703 23:15:48.281242   32963 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0703 23:15:48.443062   32963 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0703 23:15:48.603249   32963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0703 23:15:48.621318   32963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 23:15:48.641720   32963 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0703 23:15:48.641783   32963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:15:48.653742   32963 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0703 23:15:48.653810   32963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:15:48.665440   32963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:15:48.677089   32963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:15:48.689170   32963 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 23:15:48.701318   32963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:15:48.713020   32963 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:15:48.725564   32963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:15:48.737500   32963 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 23:15:48.748546   32963 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0703 23:15:48.759493   32963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:15:48.907542   32963 ssh_runner.go:195] Run: sudo systemctl restart crio
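The sed/grep sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image registry.k8s.io/pause:3.9, cgroupfs cgroup manager, pod-scoped conmon cgroup, and the net.ipv4.ip_unprivileged_port_start=0 default sysctl) before the daemon-reload and restart. A minimal sketch for confirming the resulting drop-in on the node, assuming the same profile name; not part of the test itself:

    # confirm the drop-in now carries the values written by the sed commands above
    minikube -p ha-856893 ssh "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"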
	I0703 23:15:49.193834   32963 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0703 23:15:49.193905   32963 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0703 23:15:49.200062   32963 start.go:562] Will wait 60s for crictl version
	I0703 23:15:49.200122   32963 ssh_runner.go:195] Run: which crictl
	I0703 23:15:49.207527   32963 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 23:15:49.251943   32963 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0703 23:15:49.252024   32963 ssh_runner.go:195] Run: crio --version
	I0703 23:15:49.285970   32963 ssh_runner.go:195] Run: crio --version
	I0703 23:15:49.318481   32963 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0703 23:15:49.320058   32963 main.go:141] libmachine: (ha-856893) Calling .GetIP
	I0703 23:15:49.322667   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:49.322996   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:15:49.323020   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:49.323260   32963 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0703 23:15:49.328649   32963 kubeadm.go:877] updating cluster {Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.195 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0703 23:15:49.328834   32963 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:15:49.328901   32963 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:15:49.375491   32963 crio.go:514] all images are preloaded for cri-o runtime.
	I0703 23:15:49.375518   32963 crio.go:433] Images already preloaded, skipping extraction
	I0703 23:15:49.375572   32963 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:15:49.417464   32963 crio.go:514] all images are preloaded for cri-o runtime.
	I0703 23:15:49.417488   32963 cache_images.go:84] Images are preloaded, skipping loading
	I0703 23:15:49.417499   32963 kubeadm.go:928] updating node { 192.168.39.172 8443 v1.30.2 crio true true} ...
	I0703 23:15:49.417623   32963 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-856893 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0703 23:15:49.417715   32963 ssh_runner.go:195] Run: crio config
	I0703 23:15:49.468201   32963 cni.go:84] Creating CNI manager for ""
	I0703 23:15:49.468228   32963 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0703 23:15:49.468239   32963 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0703 23:15:49.468271   32963 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-856893 NodeName:ha-856893 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0703 23:15:49.468440   32963 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-856893"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
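The kubeadm.go:187 dump above is the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration set that is later copied to /var/tmp/minikube/kubeadm.yaml.new. Since this run restarts an existing cluster, the generated values can be compared against what kubeadm already stores in-cluster; a minimal sketch, assuming working kubectl access to the ha-856893 context and the standard kubeadm ConfigMap names:

    # kubeadm's stored ClusterConfiguration plus the live kubelet and kube-proxy configs
    kubectl --context ha-856893 -n kube-system get configmap kubeadm-config -o yaml
    kubectl --context ha-856893 -n kube-system get configmap kubelet-config -o yaml
    kubectl --context ha-856893 -n kube-system get configmap kube-proxy -o yaml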
	I0703 23:15:49.468460   32963 kube-vip.go:115] generating kube-vip config ...
	I0703 23:15:49.468520   32963 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0703 23:15:49.480861   32963 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0703 23:15:49.480988   32963 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
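The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp a few lines below), so kubelet runs kube-vip as a static pod that should bind the HA VIP 192.168.39.254 on eth0. A minimal sketch for verifying that on the primary node, assuming the ha-856893 profile from this run; illustrative only:

    # the VIP should be bound on eth0 and a kube-vip container should be running
    minikube -p ha-856893 ssh "ip -4 addr show eth0 | grep 192.168.39.254"
    minikube -p ha-856893 ssh "sudo crictl ps --name kube-vip"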
	I0703 23:15:49.481057   32963 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0703 23:15:49.491415   32963 binaries.go:44] Found k8s binaries, skipping transfer
	I0703 23:15:49.491482   32963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0703 23:15:49.501642   32963 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0703 23:15:49.519059   32963 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 23:15:49.537068   32963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0703 23:15:49.554569   32963 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0703 23:15:49.571468   32963 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0703 23:15:49.576348   32963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:15:49.729118   32963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:15:49.744081   32963 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893 for IP: 192.168.39.172
	I0703 23:15:49.744100   32963 certs.go:194] generating shared ca certs ...
	I0703 23:15:49.744139   32963 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:15:49.744296   32963 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0703 23:15:49.744349   32963 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0703 23:15:49.744359   32963 certs.go:256] generating profile certs ...
	I0703 23:15:49.744470   32963 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key
	I0703 23:15:49.744513   32963 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.40fbd344
	I0703 23:15:49.744532   32963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.40fbd344 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.172 192.168.39.157 192.168.39.186 192.168.39.254]
	I0703 23:15:49.956081   32963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.40fbd344 ...
	I0703 23:15:49.956111   32963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.40fbd344: {Name:mk41a659d1acf59169903bff6a6d6448b514fd9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:15:49.956319   32963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.40fbd344 ...
	I0703 23:15:49.956334   32963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.40fbd344: {Name:mk177201bb8bead0456f9a899371f0d4d70690e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:15:49.956428   32963 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.40fbd344 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt
	I0703 23:15:49.956598   32963 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.40fbd344 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key
	I0703 23:15:49.956768   32963 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key
	I0703 23:15:49.956787   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0703 23:15:49.956805   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0703 23:15:49.956823   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0703 23:15:49.956840   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0703 23:15:49.956856   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0703 23:15:49.956870   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0703 23:15:49.956888   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0703 23:15:49.956905   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0703 23:15:49.956969   32963 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0703 23:15:49.957008   32963 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0703 23:15:49.957022   32963 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0703 23:15:49.957058   32963 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0703 23:15:49.957088   32963 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0703 23:15:49.957120   32963 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0703 23:15:49.957174   32963 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:15:49.957208   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:15:49.957226   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem -> /usr/share/ca-certificates/16574.pem
	I0703 23:15:49.957244   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /usr/share/ca-certificates/165742.pem
	I0703 23:15:49.957820   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 23:15:49.985686   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 23:15:50.011951   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 23:15:50.043394   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0703 23:15:50.106401   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0703 23:15:50.171093   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0703 23:15:50.210424   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 23:15:50.265483   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0703 23:15:50.295605   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 23:15:50.349553   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0703 23:15:50.402425   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0703 23:15:50.445013   32963 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0703 23:15:50.473358   32963 ssh_runner.go:195] Run: openssl version
	I0703 23:15:50.486131   32963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 23:15:50.513267   32963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:15:50.524170   32963 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:15:50.524235   32963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:15:50.532553   32963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0703 23:15:50.543275   32963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0703 23:15:50.555917   32963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0703 23:15:50.560719   32963 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0703 23:15:50.560765   32963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0703 23:15:50.566633   32963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0703 23:15:50.576894   32963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0703 23:15:50.598154   32963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0703 23:15:50.602803   32963 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0703 23:15:50.602844   32963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0703 23:15:50.608467   32963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
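The three blocks above copy each CA into /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0). A minimal sketch of the same hash-and-link step for the minikube CA, assuming it is already present at the path shown:

    # the symlink name OpenSSL resolves during CA lookup is "<subject-hash>.0"
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"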
	I0703 23:15:50.618539   32963 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 23:15:50.623271   32963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0703 23:15:50.629535   32963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0703 23:15:50.635627   32963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0703 23:15:50.641352   32963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0703 23:15:50.647135   32963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0703 23:15:50.653249   32963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
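The six openssl runs above are expiry checks: -checkend 86400 exits 0 when the certificate is still valid 86400 seconds (24 hours) from now and non-zero otherwise, so a failing check would trigger certificate regeneration before the cluster is started. A small sketch of the same check against one of the files named in the log:

    crt=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    if openssl x509 -noout -in "$crt" -checkend 86400; then
        echo "ok: valid for at least another 24h"
    else
        echo "renew: expires within 24h (or already expired)"
    fi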
	I0703 23:15:50.659566   32963 kubeadm.go:391] StartCluster: {Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.195 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:15:50.659678   32963 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0703 23:15:50.659746   32963 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0703 23:15:50.706641   32963 cri.go:89] found id: "173dd7f93a7021f318f479bfa427c3e34c0b7f575d2ce5b0c02ab23c1356d221"
	I0703 23:15:50.706660   32963 cri.go:89] found id: "072278e64e9ff93bc5bf83fdd6dc644e6b9a08398aa287f47dec28d0ba3a75dc"
	I0703 23:15:50.706664   32963 cri.go:89] found id: "9fb7cca0e0f0d80a9a145b4cc7d5e4e90af46d651bc0725c6186be8ec737120f"
	I0703 23:15:50.706667   32963 cri.go:89] found id: "a67095c5f0151deec8b4babb63aa353888c6c2f268e462ea236de00624bce508"
	I0703 23:15:50.706670   32963 cri.go:89] found id: "21272d5241be2eae198709be303566744f455806f0ebffba408cf58e6707cefd"
	I0703 23:15:50.706673   32963 cri.go:89] found id: "856a70d4253722d9b95f44209d4c629ef26d3e9c2b15bd4b1b4543050f9d1cf0"
	I0703 23:15:50.706675   32963 cri.go:89] found id: "f4741a1d62d7eb84ae4d4c1ce086bf13f1d54396ebf9a27932c6e784027fc371"
	I0703 23:15:50.706678   32963 cri.go:89] found id: "4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54"
	I0703 23:15:50.706680   32963 cri.go:89] found id: "ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691"
	I0703 23:15:50.706685   32963 cri.go:89] found id: "aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599"
	I0703 23:15:50.706687   32963 cri.go:89] found id: "4c81f0becbc3b6b33b1077d73eb4737ab22fbf870d53cf57b8cbfb88c2b7389e"
	I0703 23:15:50.706689   32963 cri.go:89] found id: "227a9a4176778afb3428cff8333cb0265f741d600f6fab7c86b069a27619893e"
	I0703 23:15:50.706692   32963 cri.go:89] found id: "8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0"
	I0703 23:15:50.706695   32963 cri.go:89] found id: "4c379ddaf9a499d07429805170dc12cb5a1dc67dcb956fe90f1fa68d12530112"
	I0703 23:15:50.706700   32963 cri.go:89] found id: "194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb"
	I0703 23:15:50.706704   32963 cri.go:89] found id: ""
	I0703 23:15:50.706751   32963 ssh_runner.go:195] Run: sudo runc list -f json
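Before restarting the cluster, minikube inventories the existing kube-system containers through the CRI and then consults the low-level runtime. Both commands it runs over SSH can be reproduced directly on the node; this sketch uses the exact invocations from the log:

    # IDs of all kube-system containers known to CRI-O (running or exited).
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # The same containers as the OCI runtime sees them, in JSON form.
    sudo runc list -f json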
	
	
	==> CRI-O <==
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.144951547Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720048707144547336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35cd770d-40f2-4043-9b61-51d00bbeb339 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.146024357Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e76967d3-146d-41d3-aecd-c5a5d026945c name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.146115322Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e76967d3-146d-41d3-aecd-c5a5d026945c name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.147004511Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1be8f74847b6ca4e08bf065013d7bbb19b70bd700378541102b8759dbc247721,PodSandboxId:a0b2a60d87f5b26fc7a79efde9ce4bdbf0230a85cea51f609d062f37f6b683ba,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720048632745019598,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:627192b28b0d135dd2f950c36f2c2e7be3ab401e90de043d0be58c7dd89d6f9d,PodSandboxId:05ac5d24180a3ac3668b07762edc66a8e7c6b1560e3ff63160ef72ada022ac64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720048616744488429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d4d662ed3e9a4b08a5956a3a96261534cd121ffa15a7cab0ea8a6848e762a48,PodSandboxId:0e28d529014bf11447aaefef72a5779d28bde554f4e49ffbe6eaa0d7a86b4b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720048595746702593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87914b1cd68757fa6aaf994adb433fc0bb354e69bbe1fa2f82078ba32780c470,PodSandboxId:4c44635e56bb3bbfe03c94be304f02bd3d9dfbc69cdaa61651674003dd8cac06,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720048592746381823,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abb9e5fc1177e6c97e829fb1bbd7fae408d6546c2a370e48c13ff7de49de0d5,PodSandboxId:3cea8a5014a038c6f9bc66df77c85b08f8bb4b7be55b8e83a63494b3cab53969,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720048588469543652,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernetes.container.hash: c94081f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d081acbf6adacad64ff76eb885b8d94687b89ad150784e2082b529d5c1dbb68,PodSandboxId:0d1c5fedc17c79c5e067b89a6bec81a3a88639aedacca352b116e02a7d134277,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720048569260459952,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91a46ca4520a8b4010e5767e6b78e3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:992e4d3007ac016a3cd12a9c3eb6b83b01372ce8e63c6417d7165881667e6514,PodSandboxId:41bc64df47de6d125aacb9f38fcd072379a3bdd7596e70d22cb67597e6123b82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720048555168524527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:e4fc2ed85817cd75513ff0f1df3a149088591f33d36e5db87cb10583e72ffe24,PodSandboxId:05ac5d24180a3ac3668b07762edc66a8e7c6b1560e3ff63160ef72ada022ac64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720048554914201814,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f7a62bd
d1c0dc302a512bdf5f0b5e614f79e27d3f9ab960b64caa22c065bf44,PodSandboxId:620e31e0276103fc78f7e00252c0431863538298863bd888ee31ffae3ed7284c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720048554877088966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da7c56e33b648295a6b9d0c247d42a7702afe66d13bc1b29feac239268ac3cd,PodSandboxId:0e28d529014bf11447aaefef72a5779d28bde554f4e49ffbe6eaa0d7a86b4b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720048554776389005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6565008c19a06a217526afd54d631c504d22707bac4b76078dae4c661fd6dba5,PodSandboxId:6192fa5bbb48f1a2c9eb72d53756ff6349905fdc9179ba2994180f9786881427,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720048554639140110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747282699f82e41871309af2d2b108b8c1fe8fc99d8fd905bf18d76b80d43f2a,PodSandboxId:f4d9c612a69e512280e76149e0796be1d77e02f348b8f05988cb291c3ba4b66e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720048554771834414,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96edf619f58bbc96c41a9ffef8462ac9205d3b92cdfc23aeaa0c8d46758b7a55,PodSandboxId:4c44635e56bb3bbfe03c94be304f02bd3d9dfbc69cdaa61651674003dd8cac06,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720048554682075988,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173dd7f93a7021f318f479bfa427c3e34c0b7f575d2ce5b0c02ab23c1356d221,PodSandboxId:a0b2a60d87f5b26fc7a79efde9ce4bdbf0230a85cea51f609d062f37f6b683ba,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720048550431469233,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072278e64e9ff93bc5bf83fdd6dc644e6b9a08398aa287f47dec28d0ba3a75dc,PodSandboxId:f346b1aac5151feacba1181147ee71021e95e27a89bd0738e3051c1324c2c8cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720048550318921914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5f2f09a864e4e5d46640be63d6a9d8f2d281cd3cce2631529f1fa6c5c19ead,PodSandboxId:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720048107151295802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernete
s.container.hash: c94081f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54,PodSandboxId:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720047973272193548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691,PodSandboxId:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720047973246603530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599,PodSandboxId:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b
34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720047942480299502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0,PodSandboxId:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a4
7425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720047921587819180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,PodSandboxId:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1720047921542561527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e76967d3-146d-41d3-aecd-c5a5d026945c name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.206548274Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4eb4bfba-62ee-474d-80ce-61908d010efd name=/runtime.v1.RuntimeService/Version
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.206651855Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4eb4bfba-62ee-474d-80ce-61908d010efd name=/runtime.v1.RuntimeService/Version
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.208189382Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc3b97d5-1880-4e2d-87c9-4ed79576bfc1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.208852554Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720048707208820422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc3b97d5-1880-4e2d-87c9-4ed79576bfc1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.209481359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e466b35-d754-4d8b-aa7b-e5689921537b name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.209561578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e466b35-d754-4d8b-aa7b-e5689921537b name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.210175001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1be8f74847b6ca4e08bf065013d7bbb19b70bd700378541102b8759dbc247721,PodSandboxId:a0b2a60d87f5b26fc7a79efde9ce4bdbf0230a85cea51f609d062f37f6b683ba,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720048632745019598,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:627192b28b0d135dd2f950c36f2c2e7be3ab401e90de043d0be58c7dd89d6f9d,PodSandboxId:05ac5d24180a3ac3668b07762edc66a8e7c6b1560e3ff63160ef72ada022ac64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720048616744488429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d4d662ed3e9a4b08a5956a3a96261534cd121ffa15a7cab0ea8a6848e762a48,PodSandboxId:0e28d529014bf11447aaefef72a5779d28bde554f4e49ffbe6eaa0d7a86b4b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720048595746702593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87914b1cd68757fa6aaf994adb433fc0bb354e69bbe1fa2f82078ba32780c470,PodSandboxId:4c44635e56bb3bbfe03c94be304f02bd3d9dfbc69cdaa61651674003dd8cac06,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720048592746381823,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abb9e5fc1177e6c97e829fb1bbd7fae408d6546c2a370e48c13ff7de49de0d5,PodSandboxId:3cea8a5014a038c6f9bc66df77c85b08f8bb4b7be55b8e83a63494b3cab53969,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720048588469543652,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernetes.container.hash: c94081f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d081acbf6adacad64ff76eb885b8d94687b89ad150784e2082b529d5c1dbb68,PodSandboxId:0d1c5fedc17c79c5e067b89a6bec81a3a88639aedacca352b116e02a7d134277,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720048569260459952,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91a46ca4520a8b4010e5767e6b78e3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:992e4d3007ac016a3cd12a9c3eb6b83b01372ce8e63c6417d7165881667e6514,PodSandboxId:41bc64df47de6d125aacb9f38fcd072379a3bdd7596e70d22cb67597e6123b82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720048555168524527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:e4fc2ed85817cd75513ff0f1df3a149088591f33d36e5db87cb10583e72ffe24,PodSandboxId:05ac5d24180a3ac3668b07762edc66a8e7c6b1560e3ff63160ef72ada022ac64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720048554914201814,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f7a62bd
d1c0dc302a512bdf5f0b5e614f79e27d3f9ab960b64caa22c065bf44,PodSandboxId:620e31e0276103fc78f7e00252c0431863538298863bd888ee31ffae3ed7284c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720048554877088966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da7c56e33b648295a6b9d0c247d42a7702afe66d13bc1b29feac239268ac3cd,PodSandboxId:0e28d529014bf11447aaefef72a5779d28bde554f4e49ffbe6eaa0d7a86b4b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720048554776389005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6565008c19a06a217526afd54d631c504d22707bac4b76078dae4c661fd6dba5,PodSandboxId:6192fa5bbb48f1a2c9eb72d53756ff6349905fdc9179ba2994180f9786881427,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720048554639140110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747282699f82e41871309af2d2b108b8c1fe8fc99d8fd905bf18d76b80d43f2a,PodSandboxId:f4d9c612a69e512280e76149e0796be1d77e02f348b8f05988cb291c3ba4b66e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720048554771834414,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96edf619f58bbc96c41a9ffef8462ac9205d3b92cdfc23aeaa0c8d46758b7a55,PodSandboxId:4c44635e56bb3bbfe03c94be304f02bd3d9dfbc69cdaa61651674003dd8cac06,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720048554682075988,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173dd7f93a7021f318f479bfa427c3e34c0b7f575d2ce5b0c02ab23c1356d221,PodSandboxId:a0b2a60d87f5b26fc7a79efde9ce4bdbf0230a85cea51f609d062f37f6b683ba,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720048550431469233,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072278e64e9ff93bc5bf83fdd6dc644e6b9a08398aa287f47dec28d0ba3a75dc,PodSandboxId:f346b1aac5151feacba1181147ee71021e95e27a89bd0738e3051c1324c2c8cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720048550318921914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5f2f09a864e4e5d46640be63d6a9d8f2d281cd3cce2631529f1fa6c5c19ead,PodSandboxId:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720048107151295802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernete
s.container.hash: c94081f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54,PodSandboxId:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720047973272193548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691,PodSandboxId:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720047973246603530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599,PodSandboxId:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b
34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720047942480299502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0,PodSandboxId:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a4
7425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720047921587819180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,PodSandboxId:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1720047921542561527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e466b35-d754-4d8b-aa7b-e5689921537b name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.258825534Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=322f9ee7-5b0d-4808-accc-58399b7c4725 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.258929150Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=322f9ee7-5b0d-4808-accc-58399b7c4725 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.260961769Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a225f73-ce0a-46b5-85e9-5dd465854972 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.261430481Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720048707261405343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a225f73-ce0a-46b5-85e9-5dd465854972 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.262246389Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a084ef0e-84e5-4f04-8eaf-98a341ee613b name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.262323170Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a084ef0e-84e5-4f04-8eaf-98a341ee613b name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.263067257Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1be8f74847b6ca4e08bf065013d7bbb19b70bd700378541102b8759dbc247721,PodSandboxId:a0b2a60d87f5b26fc7a79efde9ce4bdbf0230a85cea51f609d062f37f6b683ba,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720048632745019598,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:627192b28b0d135dd2f950c36f2c2e7be3ab401e90de043d0be58c7dd89d6f9d,PodSandboxId:05ac5d24180a3ac3668b07762edc66a8e7c6b1560e3ff63160ef72ada022ac64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720048616744488429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d4d662ed3e9a4b08a5956a3a96261534cd121ffa15a7cab0ea8a6848e762a48,PodSandboxId:0e28d529014bf11447aaefef72a5779d28bde554f4e49ffbe6eaa0d7a86b4b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720048595746702593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87914b1cd68757fa6aaf994adb433fc0bb354e69bbe1fa2f82078ba32780c470,PodSandboxId:4c44635e56bb3bbfe03c94be304f02bd3d9dfbc69cdaa61651674003dd8cac06,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720048592746381823,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abb9e5fc1177e6c97e829fb1bbd7fae408d6546c2a370e48c13ff7de49de0d5,PodSandboxId:3cea8a5014a038c6f9bc66df77c85b08f8bb4b7be55b8e83a63494b3cab53969,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720048588469543652,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernetes.container.hash: c94081f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d081acbf6adacad64ff76eb885b8d94687b89ad150784e2082b529d5c1dbb68,PodSandboxId:0d1c5fedc17c79c5e067b89a6bec81a3a88639aedacca352b116e02a7d134277,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720048569260459952,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91a46ca4520a8b4010e5767e6b78e3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:992e4d3007ac016a3cd12a9c3eb6b83b01372ce8e63c6417d7165881667e6514,PodSandboxId:41bc64df47de6d125aacb9f38fcd072379a3bdd7596e70d22cb67597e6123b82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720048555168524527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:e4fc2ed85817cd75513ff0f1df3a149088591f33d36e5db87cb10583e72ffe24,PodSandboxId:05ac5d24180a3ac3668b07762edc66a8e7c6b1560e3ff63160ef72ada022ac64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720048554914201814,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f7a62bd
d1c0dc302a512bdf5f0b5e614f79e27d3f9ab960b64caa22c065bf44,PodSandboxId:620e31e0276103fc78f7e00252c0431863538298863bd888ee31ffae3ed7284c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720048554877088966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da7c56e33b648295a6b9d0c247d42a7702afe66d13bc1b29feac239268ac3cd,PodSandboxId:0e28d529014bf11447aaefef72a5779d28bde554f4e49ffbe6eaa0d7a86b4b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720048554776389005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6565008c19a06a217526afd54d631c504d22707bac4b76078dae4c661fd6dba5,PodSandboxId:6192fa5bbb48f1a2c9eb72d53756ff6349905fdc9179ba2994180f9786881427,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720048554639140110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747282699f82e41871309af2d2b108b8c1fe8fc99d8fd905bf18d76b80d43f2a,PodSandboxId:f4d9c612a69e512280e76149e0796be1d77e02f348b8f05988cb291c3ba4b66e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720048554771834414,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96edf619f58bbc96c41a9ffef8462ac9205d3b92cdfc23aeaa0c8d46758b7a55,PodSandboxId:4c44635e56bb3bbfe03c94be304f02bd3d9dfbc69cdaa61651674003dd8cac06,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720048554682075988,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173dd7f93a7021f318f479bfa427c3e34c0b7f575d2ce5b0c02ab23c1356d221,PodSandboxId:a0b2a60d87f5b26fc7a79efde9ce4bdbf0230a85cea51f609d062f37f6b683ba,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720048550431469233,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072278e64e9ff93bc5bf83fdd6dc644e6b9a08398aa287f47dec28d0ba3a75dc,PodSandboxId:f346b1aac5151feacba1181147ee71021e95e27a89bd0738e3051c1324c2c8cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720048550318921914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5f2f09a864e4e5d46640be63d6a9d8f2d281cd3cce2631529f1fa6c5c19ead,PodSandboxId:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720048107151295802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernete
s.container.hash: c94081f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54,PodSandboxId:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720047973272193548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691,PodSandboxId:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720047973246603530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599,PodSandboxId:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b
34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720047942480299502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0,PodSandboxId:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a4
7425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720047921587819180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,PodSandboxId:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1720047921542561527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a084ef0e-84e5-4f04-8eaf-98a341ee613b name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.313007683Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a3cf507-f0eb-46bf-8c61-18e087e50b49 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.313087638Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a3cf507-f0eb-46bf-8c61-18e087e50b49 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.314542791Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17972bc2-4eb2-42f1-8118-f4e2e4b44307 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.315166376Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720048707315139241,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17972bc2-4eb2-42f1-8118-f4e2e4b44307 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.316239025Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a33670a-8f47-4b58-ba07-079d1b55faaf name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.316300554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a33670a-8f47-4b58-ba07-079d1b55faaf name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:18:27 ha-856893 crio[3898]: time="2024-07-03 23:18:27.316885070Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1be8f74847b6ca4e08bf065013d7bbb19b70bd700378541102b8759dbc247721,PodSandboxId:a0b2a60d87f5b26fc7a79efde9ce4bdbf0230a85cea51f609d062f37f6b683ba,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720048632745019598,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:627192b28b0d135dd2f950c36f2c2e7be3ab401e90de043d0be58c7dd89d6f9d,PodSandboxId:05ac5d24180a3ac3668b07762edc66a8e7c6b1560e3ff63160ef72ada022ac64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720048616744488429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d4d662ed3e9a4b08a5956a3a96261534cd121ffa15a7cab0ea8a6848e762a48,PodSandboxId:0e28d529014bf11447aaefef72a5779d28bde554f4e49ffbe6eaa0d7a86b4b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720048595746702593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87914b1cd68757fa6aaf994adb433fc0bb354e69bbe1fa2f82078ba32780c470,PodSandboxId:4c44635e56bb3bbfe03c94be304f02bd3d9dfbc69cdaa61651674003dd8cac06,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720048592746381823,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abb9e5fc1177e6c97e829fb1bbd7fae408d6546c2a370e48c13ff7de49de0d5,PodSandboxId:3cea8a5014a038c6f9bc66df77c85b08f8bb4b7be55b8e83a63494b3cab53969,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720048588469543652,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernetes.container.hash: c94081f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d081acbf6adacad64ff76eb885b8d94687b89ad150784e2082b529d5c1dbb68,PodSandboxId:0d1c5fedc17c79c5e067b89a6bec81a3a88639aedacca352b116e02a7d134277,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720048569260459952,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91a46ca4520a8b4010e5767e6b78e3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:992e4d3007ac016a3cd12a9c3eb6b83b01372ce8e63c6417d7165881667e6514,PodSandboxId:41bc64df47de6d125aacb9f38fcd072379a3bdd7596e70d22cb67597e6123b82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720048555168524527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:e4fc2ed85817cd75513ff0f1df3a149088591f33d36e5db87cb10583e72ffe24,PodSandboxId:05ac5d24180a3ac3668b07762edc66a8e7c6b1560e3ff63160ef72ada022ac64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720048554914201814,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f7a62bd
d1c0dc302a512bdf5f0b5e614f79e27d3f9ab960b64caa22c065bf44,PodSandboxId:620e31e0276103fc78f7e00252c0431863538298863bd888ee31ffae3ed7284c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720048554877088966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da7c56e33b648295a6b9d0c247d42a7702afe66d13bc1b29feac239268ac3cd,PodSandboxId:0e28d529014bf11447aaefef72a5779d28bde554f4e49ffbe6eaa0d7a86b4b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720048554776389005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6565008c19a06a217526afd54d631c504d22707bac4b76078dae4c661fd6dba5,PodSandboxId:6192fa5bbb48f1a2c9eb72d53756ff6349905fdc9179ba2994180f9786881427,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720048554639140110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747282699f82e41871309af2d2b108b8c1fe8fc99d8fd905bf18d76b80d43f2a,PodSandboxId:f4d9c612a69e512280e76149e0796be1d77e02f348b8f05988cb291c3ba4b66e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720048554771834414,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96edf619f58bbc96c41a9ffef8462ac9205d3b92cdfc23aeaa0c8d46758b7a55,PodSandboxId:4c44635e56bb3bbfe03c94be304f02bd3d9dfbc69cdaa61651674003dd8cac06,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720048554682075988,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173dd7f93a7021f318f479bfa427c3e34c0b7f575d2ce5b0c02ab23c1356d221,PodSandboxId:a0b2a60d87f5b26fc7a79efde9ce4bdbf0230a85cea51f609d062f37f6b683ba,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720048550431469233,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072278e64e9ff93bc5bf83fdd6dc644e6b9a08398aa287f47dec28d0ba3a75dc,PodSandboxId:f346b1aac5151feacba1181147ee71021e95e27a89bd0738e3051c1324c2c8cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720048550318921914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5f2f09a864e4e5d46640be63d6a9d8f2d281cd3cce2631529f1fa6c5c19ead,PodSandboxId:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720048107151295802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernete
s.container.hash: c94081f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54,PodSandboxId:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720047973272193548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691,PodSandboxId:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720047973246603530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599,PodSandboxId:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b
34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720047942480299502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0,PodSandboxId:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a4
7425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720047921587819180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,PodSandboxId:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1720047921542561527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a33670a-8f47-4b58-ba07-079d1b55faaf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1be8f74847b6c       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               3                   a0b2a60d87f5b       kindnet-h7ntk
	627192b28b0d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   05ac5d24180a3       storage-provisioner
	2d4d662ed3e9a       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      About a minute ago   Running             kube-controller-manager   2                   0e28d529014bf       kube-controller-manager-ha-856893
	87914b1cd6875       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      About a minute ago   Running             kube-apiserver            3                   4c44635e56bb3       kube-apiserver-ha-856893
	0abb9e5fc1177       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   3cea8a5014a03       busybox-fc5497c4f-hh5rx
	7d081acbf6ada       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   0d1c5fedc17c7       kube-vip-ha-856893
	992e4d3007ac0       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      2 minutes ago        Running             kube-proxy                1                   41bc64df47de6       kube-proxy-52zqj
	e4fc2ed85817c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   05ac5d24180a3       storage-provisioner
	8f7a62bdd1c0d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   620e31e027610       coredns-7db6d8ff4d-pwqfl
	9da7c56e33b64       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      2 minutes ago        Exited              kube-controller-manager   1                   0e28d529014bf       kube-controller-manager-ha-856893
	747282699f82e       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      2 minutes ago        Running             kube-scheduler            1                   f4d9c612a69e5       kube-scheduler-ha-856893
	96edf619f58bb       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      2 minutes ago        Exited              kube-apiserver            2                   4c44635e56bb3       kube-apiserver-ha-856893
	6565008c19a06       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   6192fa5bbb48f       etcd-ha-856893
	173dd7f93a702       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      2 minutes ago        Exited              kindnet-cni               2                   a0b2a60d87f5b       kindnet-h7ntk
	072278e64e9ff       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   f346b1aac5151       coredns-7db6d8ff4d-n5tdf
	2d5f2f09a864e       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   2add57c6feb6d       busybox-fc5497c4f-hh5rx
	4b327b3ea68a5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   52adb03e9908b       coredns-7db6d8ff4d-n5tdf
	ebac8426f222e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   75824b8079291       coredns-7db6d8ff4d-pwqfl
	aea86e5699e84       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      12 minutes ago       Exited              kube-proxy                0                   17315e93de095       kube-proxy-52zqj
	8ed8443e8784d       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      13 minutes ago       Exited              kube-scheduler            0                   a50d015125505       kube-scheduler-ha-856893
	194253df10dfc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   bbcc0c1ac6390       etcd-ha-856893
	
	
	==> coredns [072278e64e9ff93bc5bf83fdd6dc644e6b9a08398aa287f47dec28d0ba3a75dc] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1108679960]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Jul-2024 23:16:05.034) (total time: 10001ms):
	Trace[1108679960]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (23:16:15.035)
	Trace[1108679960]: [10.001078406s] [10.001078406s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1359473294]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Jul-2024 23:16:05.230) (total time: 10001ms):
	Trace[1359473294]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (23:16:15.231)
	Trace[1359473294]: [10.001638601s] [10.001638601s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:51412->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:51412->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54] <==
	[INFO] 10.244.1.2:43589 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174137s
	[INFO] 10.244.1.2:49376 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106729s
	[INFO] 10.244.1.2:51691 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000271033s
	[INFO] 10.244.2.2:40310 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117383s
	[INFO] 10.244.2.2:38408 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011442s
	[INFO] 10.244.2.2:53461 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080741s
	[INFO] 10.244.0.4:60751 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020875s
	[INFO] 10.244.0.4:42746 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083559s
	[INFO] 10.244.1.2:46618 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00026488s
	[INFO] 10.244.1.2:46816 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095128s
	[INFO] 10.244.2.2:35755 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141347s
	[INFO] 10.244.2.2:37226 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000441904s
	[INFO] 10.244.2.2:56990 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000123934s
	[INFO] 10.244.0.4:33260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228783s
	[INFO] 10.244.0.4:40825 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089557s
	[INFO] 10.244.0.4:36029 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000284159s
	[INFO] 10.244.0.4:38025 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069908s
	[INFO] 10.244.1.2:33505 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000516657s
	[INFO] 10.244.1.2:51760 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106766s
	[INFO] 10.244.1.2:48924 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111713s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8f7a62bdd1c0dc302a512bdf5f0b5e614f79e27d3f9ab960b64caa22c065bf44] <==
	Trace[893896800]: [10.001675331s] [10.001675331s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:42006->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:42006->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41976->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[832467613]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Jul-2024 23:16:06.594) (total time: 11584ms):
	Trace[832467613]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41976->10.96.0.1:443: read: connection reset by peer 11584ms (23:16:18.178)
	Trace[832467613]: [11.5841334s] [11.5841334s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41976->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42000->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42000->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691] <==
	[INFO] 10.244.1.2:38357 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235864s
	[INFO] 10.244.1.2:52654 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000207162s
	[INFO] 10.244.2.2:38149 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003994489s
	[INFO] 10.244.2.2:37323 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162805s
	[INFO] 10.244.2.2:37370 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000170597s
	[INFO] 10.244.0.4:39154 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140397s
	[INFO] 10.244.0.4:39807 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002148429s
	[INFO] 10.244.0.4:52421 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189952s
	[INFO] 10.244.0.4:32927 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001716905s
	[INFO] 10.244.0.4:37077 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064503s
	[INFO] 10.244.1.2:53622 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138056s
	[INFO] 10.244.1.2:56863 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001413025s
	[INFO] 10.244.1.2:33669 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000289179s
	[INFO] 10.244.2.2:46390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141967s
	[INFO] 10.244.0.4:47937 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126136s
	[INFO] 10.244.0.4:40258 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058689s
	[INFO] 10.244.1.2:34579 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112137s
	[INFO] 10.244.1.2:43318 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087441s
	[INFO] 10.244.2.2:44839 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000154015s
	[INFO] 10.244.1.2:49628 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158345s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-856893
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-856893
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=ha-856893
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_03T23_05_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:05:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-856893
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:18:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:16:40 +0000   Wed, 03 Jul 2024 23:05:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:16:40 +0000   Wed, 03 Jul 2024 23:05:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:16:40 +0000   Wed, 03 Jul 2024 23:05:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:16:40 +0000   Wed, 03 Jul 2024 23:06:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    ha-856893
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a26831b612bd459ca285f71afd0636da
	  System UUID:                a26831b6-12bd-459c-a285-f71afd0636da
	  Boot ID:                    60d1e076-9358-4d45-bf73-662df78ab1a7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hh5rx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-n5tdf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7db6d8ff4d-pwqfl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-856893                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-h7ntk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-856893             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-856893    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-52zqj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-856893             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-856893                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 110s               kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ha-856893 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node ha-856893 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ha-856893 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     13m                kubelet          Node ha-856893 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node ha-856893 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node ha-856893 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           12m                node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	  Normal   NodeReady                12m                kubelet          Node ha-856893 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	  Warning  ContainerGCFailed        3m (x2 over 4m)    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           102s               node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	  Normal   RegisteredNode           100s               node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	  Normal   RegisteredNode           32s                node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	
	
	Name:               ha-856893-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-856893-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=ha-856893
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_03T23_06_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:06:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-856893-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:18:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:17:22 +0000   Wed, 03 Jul 2024 23:16:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:17:22 +0000   Wed, 03 Jul 2024 23:16:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:17:22 +0000   Wed, 03 Jul 2024 23:16:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:17:22 +0000   Wed, 03 Jul 2024 23:16:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.157
	  Hostname:    ha-856893-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 109978f2ea4c4f42a5d187826750c850
	  System UUID:                109978f2-ea4c-4f42-a5d1-87826750c850
	  Boot ID:                    5864d854-b931-4a7e-9c19-740d9ee37c4c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-n7rvj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-856893-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-rwqsq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-856893-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-856893-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-gkwrn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-856893-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-856893-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 105s                   kube-proxy       
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-856893-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-856893-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node ha-856893-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                    node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  NodeNotReady             8m19s                  node-controller  Node ha-856893-m02 status is now: NodeNotReady
	  Normal  Starting                 2m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m14s (x8 over 2m14s)  kubelet          Node ha-856893-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s (x8 over 2m14s)  kubelet          Node ha-856893-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s (x7 over 2m14s)  kubelet          Node ha-856893-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           102s                   node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  RegisteredNode           100s                   node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  RegisteredNode           32s                    node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	
	
	Name:               ha-856893-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-856893-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=ha-856893
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_03T23_08_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:07:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-856893-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:18:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:18:00 +0000   Wed, 03 Jul 2024 23:17:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:18:00 +0000   Wed, 03 Jul 2024 23:17:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:18:00 +0000   Wed, 03 Jul 2024 23:17:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:18:00 +0000   Wed, 03 Jul 2024 23:17:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    ha-856893-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1e4eaaaf3da41a390e7e93c4c9b6dd0
	  System UUID:                a1e4eaaa-f3da-41a3-90e7-e93c4c9b6dd0
	  Boot ID:                    92eaf466-b6b7-4eec-bcfe-5a3692c994d1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bt646                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-856893-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-vtd2b                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-856893-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-856893-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-stq26                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-856893-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-856893-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 34s                kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-856893-m03 event: Registered Node ha-856893-m03 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-856893-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-856893-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-856893-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-856893-m03 event: Registered Node ha-856893-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-856893-m03 event: Registered Node ha-856893-m03 in Controller
	  Normal   RegisteredNode           102s               node-controller  Node ha-856893-m03 event: Registered Node ha-856893-m03 in Controller
	  Normal   RegisteredNode           100s               node-controller  Node ha-856893-m03 event: Registered Node ha-856893-m03 in Controller
	  Normal   NodeNotReady             62s                node-controller  Node ha-856893-m03 status is now: NodeNotReady
	  Normal   Starting                 58s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  58s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 58s (x2 over 58s)  kubelet          Node ha-856893-m03 has been rebooted, boot id: 92eaf466-b6b7-4eec-bcfe-5a3692c994d1
	  Normal   NodeHasSufficientMemory  58s (x3 over 58s)  kubelet          Node ha-856893-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x3 over 58s)  kubelet          Node ha-856893-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x3 over 58s)  kubelet          Node ha-856893-m03 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             58s                kubelet          Node ha-856893-m03 status is now: NodeNotReady
	  Normal   NodeReady                58s                kubelet          Node ha-856893-m03 status is now: NodeReady
	  Normal   RegisteredNode           32s                node-controller  Node ha-856893-m03 event: Registered Node ha-856893-m03 in Controller
	
	
	Name:               ha-856893-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-856893-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=ha-856893
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_03T23_09_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:09:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-856893-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:18:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:18:19 +0000   Wed, 03 Jul 2024 23:18:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:18:19 +0000   Wed, 03 Jul 2024 23:18:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:18:19 +0000   Wed, 03 Jul 2024 23:18:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:18:19 +0000   Wed, 03 Jul 2024 23:18:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    ha-856893-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3705f72ac66415f90e310971654b6b5
	  System UUID:                f3705f72-ac66-415f-90e3-10971654b6b5
	  Boot ID:                    2ac337d1-652f-40d7-872b-674efdefff16
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5kksq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m24s
	  kube-system                 kube-proxy-brfsv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5s                     kube-proxy       
	  Normal   Starting                 9m18s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m24s (x2 over 9m24s)  kubelet          Node ha-856893-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m24s (x2 over 9m24s)  kubelet          Node ha-856893-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m24s (x2 over 9m24s)  kubelet          Node ha-856893-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m23s                  node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal   RegisteredNode           9m21s                  node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal   RegisteredNode           9m20s                  node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal   NodeReady                9m14s                  kubelet          Node ha-856893-m04 status is now: NodeReady
	  Normal   RegisteredNode           103s                   node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal   RegisteredNode           101s                   node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal   NodeNotReady             63s                    node-controller  Node ha-856893-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           33s                    node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal   Starting                 9s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)        kubelet          Node ha-856893-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)        kubelet          Node ha-856893-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)        kubelet          Node ha-856893-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s                     kubelet          Node ha-856893-m04 has been rebooted, boot id: 2ac337d1-652f-40d7-872b-674efdefff16
	  Normal   NodeReady                9s                     kubelet          Node ha-856893-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.908066] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.058276] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065122] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.220079] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.126395] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.300940] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.506884] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.061467] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.368826] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +4.919640] kauditd_printk_skb: 102 callbacks suppressed
	[  +2.254448] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +6.249182] kauditd_printk_skb: 23 callbacks suppressed
	[Jul 3 23:06] kauditd_printk_skb: 38 callbacks suppressed
	[  +9.915119] kauditd_printk_skb: 24 callbacks suppressed
	[Jul 3 23:12] kauditd_printk_skb: 1 callbacks suppressed
	[Jul 3 23:15] systemd-fstab-generator[3817]: Ignoring "noauto" option for root device
	[  +0.164114] systemd-fstab-generator[3829]: Ignoring "noauto" option for root device
	[  +0.192920] systemd-fstab-generator[3843]: Ignoring "noauto" option for root device
	[  +0.142672] systemd-fstab-generator[3856]: Ignoring "noauto" option for root device
	[  +0.319393] systemd-fstab-generator[3884]: Ignoring "noauto" option for root device
	[  +0.822708] systemd-fstab-generator[3984]: Ignoring "noauto" option for root device
	[  +4.721060] kauditd_printk_skb: 142 callbacks suppressed
	[Jul 3 23:16] kauditd_printk_skb: 65 callbacks suppressed
	[ +10.072630] kauditd_printk_skb: 1 callbacks suppressed
	[ +19.376413] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb] <==
	{"level":"warn","ts":"2024-07-03T23:14:16.373818Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.178602542s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-03T23:14:16.394654Z","caller":"traceutil/trace.go:171","msg":"trace[98229140] range","detail":"{range_begin:/registry/ingress/; range_end:/registry/ingress0; }","duration":"7.199443125s","start":"2024-07-03T23:14:09.195205Z","end":"2024-07-03T23:14:16.394649Z","steps":["trace[98229140] 'agreement among raft nodes before linearized reading'  (duration: 7.17860713s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:14:16.394681Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-03T23:14:09.195199Z","time spent":"7.199470157s","remote":"127.0.0.1:33614","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" limit:10000 "}
	2024/07/03 23:14:16 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-03T23:14:16.373949Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.242117ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-03T23:14:16.394927Z","caller":"traceutil/trace.go:171","msg":"trace[1390875968] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; }","duration":"189.286197ms","start":"2024-07-03T23:14:16.205635Z","end":"2024-07-03T23:14:16.394921Z","steps":["trace[1390875968] 'agreement among raft nodes before linearized reading'  (duration: 168.246533ms)"],"step_count":1}
	2024/07/03 23:14:16 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-03T23:14:16.664314Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"bbf1bb039b0d3451","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-03T23:14:16.664586Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c61891c2e847e46e"}
	{"level":"info","ts":"2024-07-03T23:14:16.664641Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c61891c2e847e46e"}
	{"level":"info","ts":"2024-07-03T23:14:16.664672Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c61891c2e847e46e"}
	{"level":"info","ts":"2024-07-03T23:14:16.664901Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e"}
	{"level":"info","ts":"2024-07-03T23:14:16.665022Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e"}
	{"level":"info","ts":"2024-07-03T23:14:16.665105Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e"}
	{"level":"info","ts":"2024-07-03T23:14:16.66514Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c61891c2e847e46e"}
	{"level":"info","ts":"2024-07-03T23:14:16.665155Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:14:16.665165Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:14:16.66518Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:14:16.665214Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:14:16.66524Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:14:16.665322Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:14:16.665337Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:14:16.66845Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.172:2380"}
	{"level":"info","ts":"2024-07-03T23:14:16.668612Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.172:2380"}
	{"level":"info","ts":"2024-07-03T23:14:16.668642Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-856893","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.172:2380"],"advertise-client-urls":["https://192.168.39.172:2379"]}
	
	
	==> etcd [6565008c19a06a217526afd54d631c504d22707bac4b76078dae4c661fd6dba5] <==
	{"level":"warn","ts":"2024-07-03T23:17:24.084549Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:17:24.127146Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:17:24.226823Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:17:24.32611Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:17:24.364632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:17:24.380558Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:17:24.382192Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:17:24.427405Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:17:24.527287Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:17:24.62636Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bbf1bb039b0d3451","from":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-03T23:17:25.697556Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"81b95bbe226332d2","rtt":"0s","error":"dial tcp 192.168.39.186:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-03T23:17:25.697611Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"81b95bbe226332d2","rtt":"0s","error":"dial tcp 192.168.39.186:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-03T23:17:26.561267Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.186:2380/version","remote-member-id":"81b95bbe226332d2","error":"Get \"https://192.168.39.186:2380/version\": dial tcp 192.168.39.186:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-03T23:17:26.561426Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"81b95bbe226332d2","error":"Get \"https://192.168.39.186:2380/version\": dial tcp 192.168.39.186:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-03T23:17:30.564052Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.186:2380/version","remote-member-id":"81b95bbe226332d2","error":"Get \"https://192.168.39.186:2380/version\": dial tcp 192.168.39.186:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-03T23:17:30.564203Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"81b95bbe226332d2","error":"Get \"https://192.168.39.186:2380/version\": dial tcp 192.168.39.186:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-03T23:17:30.697908Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"81b95bbe226332d2","rtt":"0s","error":"dial tcp 192.168.39.186:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-03T23:17:30.698044Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"81b95bbe226332d2","rtt":"0s","error":"dial tcp 192.168.39.186:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-03T23:17:34.602163Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:17:34.616189Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:17:34.616292Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:17:34.617595Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"bbf1bb039b0d3451","to":"81b95bbe226332d2","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-03T23:17:34.617781Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:17:34.64127Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"bbf1bb039b0d3451","to":"81b95bbe226332d2","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-03T23:17:34.641331Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2"}
	
	
	==> kernel <==
	 23:18:28 up 13 min,  0 users,  load average: 0.44, 0.40, 0.24
	Linux ha-856893 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [173dd7f93a7021f318f479bfa427c3e34c0b7f575d2ce5b0c02ab23c1356d221] <==
	I0703 23:15:50.758361       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0703 23:15:50.758446       1 main.go:107] hostIP = 192.168.39.172
	podIP = 192.168.39.172
	I0703 23:15:50.758607       1 main.go:116] setting mtu 1500 for CNI 
	I0703 23:15:50.758647       1 main.go:146] kindnetd IP family: "ipv4"
	I0703 23:15:50.758681       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0703 23:15:56.674585       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0703 23:15:59.746340       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0703 23:16:10.748273       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0703 23:16:15.106252       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0703 23:16:18.178269       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
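
The panic above is the tail end of a simple bounded-retry loop: kindnetd keeps re-listing nodes through the in-cluster service IP (10.96.0.1:443), and once its retry budget is exhausted it crashes, after which the kubelet restarts it with back-off (the CrashLoopBackOff events in the kubelet log further down). A minimal sketch of that pattern using client-go is shown below; the retry count and delay are assumptions for illustration, and this is not kindnetd's actual source.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config resolves to the kubernetes Service IP seen in the log (10.96.0.1:443).
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const maxRetries = 5                // assumed retry budget
	const retryDelay = 3 * time.Second  // assumed pause between attempts

	for attempt := 0; attempt < maxRetries; attempt++ {
		nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err == nil {
			fmt.Printf("got %d nodes\n", len(nodes.Items))
			return
		}
		fmt.Printf("Failed to get nodes, retrying after error: %v\n", err)
		time.Sleep(retryDelay)
	}
	// Exhausting the retries is fatal, matching the behaviour in the log above.
	panic("Reached maximum retries obtaining node list")
}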
	
	
	==> kindnet [1be8f74847b6ca4e08bf065013d7bbb19b70bd700378541102b8759dbc247721] <==
	I0703 23:17:53.698325       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	I0703 23:18:03.714999       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0703 23:18:03.715206       1 main.go:227] handling current node
	I0703 23:18:03.715328       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0703 23:18:03.715391       1 main.go:250] Node ha-856893-m02 has CIDR [10.244.1.0/24] 
	I0703 23:18:03.715600       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0703 23:18:03.715677       1 main.go:250] Node ha-856893-m03 has CIDR [10.244.2.0/24] 
	I0703 23:18:03.715868       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I0703 23:18:03.715928       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	I0703 23:18:13.750485       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0703 23:18:13.750577       1 main.go:227] handling current node
	I0703 23:18:13.750601       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0703 23:18:13.750610       1 main.go:250] Node ha-856893-m02 has CIDR [10.244.1.0/24] 
	I0703 23:18:13.750948       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0703 23:18:13.751001       1 main.go:250] Node ha-856893-m03 has CIDR [10.244.2.0/24] 
	I0703 23:18:13.751085       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I0703 23:18:13.751095       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	I0703 23:18:23.765122       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0703 23:18:23.765167       1 main.go:227] handling current node
	I0703 23:18:23.765180       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0703 23:18:23.765186       1 main.go:250] Node ha-856893-m02 has CIDR [10.244.1.0/24] 
	I0703 23:18:23.765327       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I0703 23:18:23.765352       1 main.go:250] Node ha-856893-m03 has CIDR [10.244.2.0/24] 
	I0703 23:18:23.765407       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I0703 23:18:23.765412       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [87914b1cd68757fa6aaf994adb433fc0bb354e69bbe1fa2f82078ba32780c470] <==
	I0703 23:16:34.685943       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0703 23:16:34.686394       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0703 23:16:34.686477       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0703 23:16:34.765542       1 shared_informer.go:320] Caches are synced for configmaps
	I0703 23:16:34.766947       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0703 23:16:34.767637       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0703 23:16:34.767917       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0703 23:16:34.769120       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0703 23:16:34.769586       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0703 23:16:34.774311       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0703 23:16:34.782693       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.186]
	I0703 23:16:34.786827       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0703 23:16:34.786904       1 aggregator.go:165] initial CRD sync complete...
	I0703 23:16:34.786929       1 autoregister_controller.go:141] Starting autoregister controller
	I0703 23:16:34.786935       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0703 23:16:34.786940       1 cache.go:39] Caches are synced for autoregister controller
	I0703 23:16:34.797503       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0703 23:16:34.803042       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0703 23:16:34.803084       1 policy_source.go:224] refreshing policies
	I0703 23:16:34.867710       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0703 23:16:34.885546       1 controller.go:615] quota admission added evaluator for: endpoints
	I0703 23:16:34.900698       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0703 23:16:34.906811       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0703 23:16:35.671330       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0703 23:16:36.239641       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.157 192.168.39.172 192.168.39.186]
	
	
	==> kube-apiserver [96edf619f58bbc96c41a9ffef8462ac9205d3b92cdfc23aeaa0c8d46758b7a55] <==
	I0703 23:15:55.437420       1 options.go:221] external host was not specified, using 192.168.39.172
	I0703 23:15:55.438459       1 server.go:148] Version: v1.30.2
	I0703 23:15:55.438545       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:15:55.783848       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0703 23:15:55.797216       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0703 23:15:55.797264       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0703 23:15:55.797477       1 instance.go:299] Using reconciler: lease
	I0703 23:15:55.797919       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0703 23:16:15.779101       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0703 23:16:15.780795       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0703 23:16:15.798882       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [2d4d662ed3e9a4b08a5956a3a96261534cd121ffa15a7cab0ea8a6848e762a48] <==
	I0703 23:16:47.966245       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0703 23:16:47.967494       1 shared_informer.go:320] Caches are synced for taint
	I0703 23:16:47.967644       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0703 23:16:47.967845       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-856893-m04"
	I0703 23:16:47.967918       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-856893"
	I0703 23:16:47.967944       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-856893-m02"
	I0703 23:16:47.968002       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-856893-m03"
	I0703 23:16:47.968895       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0703 23:16:48.014444       1 shared_informer.go:320] Caches are synced for resource quota
	I0703 23:16:48.063171       1 shared_informer.go:320] Caches are synced for resource quota
	I0703 23:16:48.493694       1 shared_informer.go:320] Caches are synced for garbage collector
	I0703 23:16:48.528121       1 shared_informer.go:320] Caches are synced for garbage collector
	I0703 23:16:48.528233       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0703 23:16:53.016483       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-7msgq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-7msgq\": the object has been modified; please apply your changes to the latest version and try again"
	I0703 23:16:53.016606       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"38fd1361-ae90-4ed8-bc6d-9e7f39485370", APIVersion:"v1", ResourceVersion:"286", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-7msgq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-7msgq": the object has been modified; please apply your changes to the latest version and try again
	I0703 23:16:53.045909       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.922403ms"
	I0703 23:16:53.046121       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.516µs"
	I0703 23:17:06.773509       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.44777ms"
	I0703 23:17:06.775028       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.862µs"
	I0703 23:17:25.843620       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.741245ms"
	I0703 23:17:25.844098       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.749µs"
	I0703 23:17:30.924983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.709µs"
	I0703 23:17:51.092120       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.71126ms"
	I0703 23:17:51.094088       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.391µs"
	I0703 23:18:19.409640       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-856893-m04"
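
The "object has been modified; please apply your changes to the latest version" events above are ordinary optimistic-concurrency conflicts on the kube-dns EndpointSlice: two writers raced on the same resourceVersion and the losing update is retried against the latest copy. Client code typically handles the same situation with client-go's RetryOnConflict helper; the sketch below is a generic illustration (the ConfigMap name and the kubeconfig path are made-up examples), not the controller-manager's own code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	// Load kubeconfig from the default location; the path is an assumption for this example.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Re-read and re-apply the mutation whenever the update loses a resourceVersion race,
	// which is the same condition reported by the controller-manager log above.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		cm, getErr := client.CoreV1().ConfigMaps("kube-system").Get(context.Background(), "example-config", metav1.GetOptions{})
		if getErr != nil {
			return getErr
		}
		if cm.Data == nil {
			cm.Data = map[string]string{}
		}
		cm.Data["touched"] = "true"
		_, updateErr := client.CoreV1().ConfigMaps("kube-system").Update(context.Background(), cm, metav1.UpdateOptions{})
		return updateErr
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("update applied after resolving any conflicts")
}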
	
	
	==> kube-controller-manager [9da7c56e33b648295a6b9d0c247d42a7702afe66d13bc1b29feac239268ac3cd] <==
	I0703 23:15:56.070479       1 serving.go:380] Generated self-signed cert in-memory
	I0703 23:15:56.367195       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0703 23:15:56.367287       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:15:56.369044       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0703 23:15:56.369240       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0703 23:15:56.369262       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0703 23:15:56.369281       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0703 23:16:16.810059       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.172:8443/healthz\": dial tcp 192.168.39.172:8443: connect: connection refused"
	
	
	==> kube-proxy [992e4d3007ac016a3cd12a9c3eb6b83b01372ce8e63c6417d7165881667e6514] <==
	I0703 23:15:56.419245       1 server_linux.go:69] "Using iptables proxy"
	E0703 23:15:56.866965       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-856893\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0703 23:15:59.938830       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-856893\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0703 23:16:03.010438       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-856893\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0703 23:16:09.155261       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-856893\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0703 23:16:18.371135       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-856893\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0703 23:16:37.234065       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.172"]
	I0703 23:16:37.281783       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0703 23:16:37.282091       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0703 23:16:37.282200       1 server_linux.go:165] "Using iptables Proxier"
	I0703 23:16:37.286170       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0703 23:16:37.286435       1 server.go:872] "Version info" version="v1.30.2"
	I0703 23:16:37.288971       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:16:37.291980       1 config.go:192] "Starting service config controller"
	I0703 23:16:37.292030       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0703 23:16:37.292179       1 config.go:101] "Starting endpoint slice config controller"
	I0703 23:16:37.292235       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0703 23:16:37.294266       1 config.go:319] "Starting node config controller"
	I0703 23:16:37.294356       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0703 23:16:37.393168       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0703 23:16:37.393273       1 shared_informer.go:320] Caches are synced for service config
	I0703 23:16:37.394654       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599] <==
	E0703 23:13:04.387248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-856893&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:07.458143       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1744": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:07.458257       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1744": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:07.458304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1746": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:07.458430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1746": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:07.458158       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-856893&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:07.458614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-856893&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:14.051225       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1746": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:14.051283       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1746": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:14.051371       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-856893&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:14.051400       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-856893&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:14.051464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1744": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:14.051481       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1744": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:23.266268       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-856893&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:23.267260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-856893&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:23.267188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1746": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:23.267799       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1746": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:26.339320       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1744": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:26.339470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1744": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:44.770394       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-856893&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:44.770466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-856893&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:47.843140       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1744": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:47.843454       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1744": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:50.916009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1746": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:50.916291       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1746": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [747282699f82e41871309af2d2b108b8c1fe8fc99d8fd905bf18d76b80d43f2a] <==
	W0703 23:16:26.343713       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.172:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	E0703 23:16:26.343959       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.172:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	W0703 23:16:26.570960       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.172:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	E0703 23:16:26.571044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.172:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	W0703 23:16:26.798389       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.172:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	E0703 23:16:26.798456       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.172:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	W0703 23:16:31.348405       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.172:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	E0703 23:16:31.348471       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.172:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	W0703 23:16:31.721619       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.172:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	E0703 23:16:31.721684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.172:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	W0703 23:16:32.106383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.172:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	E0703 23:16:32.106464       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.172:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	W0703 23:16:32.387135       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.172:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	E0703 23:16:32.387296       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.172:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	W0703 23:16:32.473071       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.172:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	E0703 23:16:32.473167       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.172:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	W0703 23:16:32.629405       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.172:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	E0703 23:16:32.629507       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.172:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	W0703 23:16:34.708202       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0703 23:16:34.708256       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0703 23:16:34.708340       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0703 23:16:34.708371       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0703 23:16:34.708455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0703 23:16:34.708482       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0703 23:16:35.015836       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0] <==
	W0703 23:14:09.917124       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0703 23:14:09.917225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0703 23:14:09.918153       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0703 23:14:09.918228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0703 23:14:10.388475       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0703 23:14:10.388617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0703 23:14:10.781368       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0703 23:14:10.781446       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0703 23:14:11.115685       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0703 23:14:11.115788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0703 23:14:11.134607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0703 23:14:11.134655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0703 23:14:11.226494       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0703 23:14:11.226612       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0703 23:14:11.400631       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0703 23:14:11.400782       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0703 23:14:11.769304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0703 23:14:11.769531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0703 23:14:11.930228       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0703 23:14:11.930441       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0703 23:14:12.215475       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0703 23:14:12.215603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0703 23:14:15.995916       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0703 23:14:15.995946       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0703 23:14:16.367428       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 03 23:16:36 ha-856893 kubelet[1363]: W0703 23:16:36.802110    1363 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)coredns&resourceVersion=1806": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 03 23:16:36 ha-856893 kubelet[1363]: E0703 23:16:36.802530    1363 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)coredns&resourceVersion=1806": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 03 23:16:36 ha-856893 kubelet[1363]: I0703 23:16:36.802222    1363 status_manager.go:853] "Failed to get status for pod" podUID="91a46ca4520a8b4010e5767e6b78e3c9" pod="kube-system/kube-vip-ha-856893" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-856893\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 03 23:16:42 ha-856893 kubelet[1363]: I0703 23:16:42.734428    1363 scope.go:117] "RemoveContainer" containerID="e4fc2ed85817cd75513ff0f1df3a149088591f33d36e5db87cb10583e72ffe24"
	Jul 03 23:16:42 ha-856893 kubelet[1363]: E0703 23:16:42.735090    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8)\"" pod="kube-system/storage-provisioner" podUID="91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8"
	Jul 03 23:16:46 ha-856893 kubelet[1363]: I0703 23:16:46.733978    1363 scope.go:117] "RemoveContainer" containerID="173dd7f93a7021f318f479bfa427c3e34c0b7f575d2ce5b0c02ab23c1356d221"
	Jul 03 23:16:46 ha-856893 kubelet[1363]: E0703 23:16:46.734325    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-h7ntk_kube-system(18e6d992-2713-4399-a160-5f9196981f26)\"" pod="kube-system/kindnet-h7ntk" podUID="18e6d992-2713-4399-a160-5f9196981f26"
	Jul 03 23:16:56 ha-856893 kubelet[1363]: I0703 23:16:56.734469    1363 scope.go:117] "RemoveContainer" containerID="e4fc2ed85817cd75513ff0f1df3a149088591f33d36e5db87cb10583e72ffe24"
	Jul 03 23:16:57 ha-856893 kubelet[1363]: I0703 23:16:57.742153    1363 scope.go:117] "RemoveContainer" containerID="173dd7f93a7021f318f479bfa427c3e34c0b7f575d2ce5b0c02ab23c1356d221"
	Jul 03 23:16:57 ha-856893 kubelet[1363]: E0703 23:16:57.742646    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-h7ntk_kube-system(18e6d992-2713-4399-a160-5f9196981f26)\"" pod="kube-system/kindnet-h7ntk" podUID="18e6d992-2713-4399-a160-5f9196981f26"
	Jul 03 23:17:11 ha-856893 kubelet[1363]: I0703 23:17:11.448019    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-hh5rx" podStartSLOduration=525.828397226 podStartE2EDuration="8m48.447960314s" podCreationTimestamp="2024-07-03 23:08:23 +0000 UTC" firstStartedPulling="2024-07-03 23:08:24.51916307 +0000 UTC m=+176.932471051" lastFinishedPulling="2024-07-03 23:08:27.138726159 +0000 UTC m=+179.552034139" observedRunningTime="2024-07-03 23:08:27.5075502 +0000 UTC m=+179.920858172" watchObservedRunningTime="2024-07-03 23:17:11.447960314 +0000 UTC m=+703.861268302"
	Jul 03 23:17:12 ha-856893 kubelet[1363]: I0703 23:17:12.734265    1363 scope.go:117] "RemoveContainer" containerID="173dd7f93a7021f318f479bfa427c3e34c0b7f575d2ce5b0c02ab23c1356d221"
	Jul 03 23:17:27 ha-856893 kubelet[1363]: E0703 23:17:27.750426    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:17:27 ha-856893 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:17:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:17:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:17:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 23:17:28 ha-856893 kubelet[1363]: I0703 23:17:28.734517    1363 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-856893" podUID="0c4a20fd-99f2-4d6a-a332-2a79e4431b88"
	Jul 03 23:17:28 ha-856893 kubelet[1363]: I0703 23:17:28.750943    1363 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-856893"
	Jul 03 23:17:37 ha-856893 kubelet[1363]: I0703 23:17:37.755560    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-856893" podStartSLOduration=9.755479054 podStartE2EDuration="9.755479054s" podCreationTimestamp="2024-07-03 23:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-03 23:17:37.755197984 +0000 UTC m=+730.168505972" watchObservedRunningTime="2024-07-03 23:17:37.755479054 +0000 UTC m=+730.168787043"
	Jul 03 23:18:27 ha-856893 kubelet[1363]: E0703 23:18:27.769815    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:18:27 ha-856893 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:18:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:18:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:18:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 23:18:26.771245   34738 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18998-9396/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-856893 -n ha-856893
helpers_test.go:261: (dbg) Run:  kubectl --context ha-856893 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (375.65s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 stop -v=7 --alsologtostderr
E0703 23:18:57.357930   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-856893 stop -v=7 --alsologtostderr: exit status 82 (2m0.468824625s)

                                                
                                                
-- stdout --
	* Stopping node "ha-856893-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 23:18:47.069645   35179 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:18:47.069894   35179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:18:47.069904   35179 out.go:304] Setting ErrFile to fd 2...
	I0703 23:18:47.069907   35179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:18:47.070077   35179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 23:18:47.070293   35179 out.go:298] Setting JSON to false
	I0703 23:18:47.070375   35179 mustload.go:65] Loading cluster: ha-856893
	I0703 23:18:47.070895   35179 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:18:47.070992   35179 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:18:47.071215   35179 mustload.go:65] Loading cluster: ha-856893
	I0703 23:18:47.071348   35179 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:18:47.071376   35179 stop.go:39] StopHost: ha-856893-m04
	I0703 23:18:47.071785   35179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:18:47.071850   35179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:18:47.086804   35179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40857
	I0703 23:18:47.087275   35179 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:18:47.087827   35179 main.go:141] libmachine: Using API Version  1
	I0703 23:18:47.087858   35179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:18:47.088225   35179 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:18:47.090279   35179 out.go:177] * Stopping node "ha-856893-m04"  ...
	I0703 23:18:47.091984   35179 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0703 23:18:47.092024   35179 main.go:141] libmachine: (ha-856893-m04) Calling .DriverName
	I0703 23:18:47.092281   35179 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0703 23:18:47.092308   35179 main.go:141] libmachine: (ha-856893-m04) Calling .GetSSHHostname
	I0703 23:18:47.095206   35179 main.go:141] libmachine: (ha-856893-m04) DBG | domain ha-856893-m04 has defined MAC address 52:54:00:e6:5d:92 in network mk-ha-856893
	I0703 23:18:47.095577   35179 main.go:141] libmachine: (ha-856893-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:5d:92", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:18:13 +0000 UTC Type:0 Mac:52:54:00:e6:5d:92 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-856893-m04 Clientid:01:52:54:00:e6:5d:92}
	I0703 23:18:47.095611   35179 main.go:141] libmachine: (ha-856893-m04) DBG | domain ha-856893-m04 has defined IP address 192.168.39.195 and MAC address 52:54:00:e6:5d:92 in network mk-ha-856893
	I0703 23:18:47.095753   35179 main.go:141] libmachine: (ha-856893-m04) Calling .GetSSHPort
	I0703 23:18:47.095923   35179 main.go:141] libmachine: (ha-856893-m04) Calling .GetSSHKeyPath
	I0703 23:18:47.096207   35179 main.go:141] libmachine: (ha-856893-m04) Calling .GetSSHUsername
	I0703 23:18:47.096353   35179 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893-m04/id_rsa Username:docker}
	I0703 23:18:47.179070   35179 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0703 23:18:47.233059   35179 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0703 23:18:47.285981   35179 main.go:141] libmachine: Stopping "ha-856893-m04"...
	I0703 23:18:47.286010   35179 main.go:141] libmachine: (ha-856893-m04) Calling .GetState
	I0703 23:18:47.287678   35179 main.go:141] libmachine: (ha-856893-m04) Calling .Stop
	I0703 23:18:47.291468   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 0/120
	I0703 23:18:48.293241   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 1/120
	I0703 23:18:49.294756   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 2/120
	I0703 23:18:50.296205   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 3/120
	I0703 23:18:51.298262   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 4/120
	I0703 23:18:52.300294   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 5/120
	I0703 23:18:53.302223   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 6/120
	I0703 23:18:54.303385   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 7/120
	I0703 23:18:55.304805   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 8/120
	I0703 23:18:56.306279   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 9/120
	I0703 23:18:57.308634   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 10/120
	I0703 23:18:58.311045   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 11/120
	I0703 23:18:59.312399   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 12/120
	I0703 23:19:00.314294   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 13/120
	I0703 23:19:01.316548   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 14/120
	I0703 23:19:02.318086   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 15/120
	I0703 23:19:03.319408   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 16/120
	I0703 23:19:04.320864   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 17/120
	I0703 23:19:05.322640   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 18/120
	I0703 23:19:06.324708   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 19/120
	I0703 23:19:07.326828   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 20/120
	I0703 23:19:08.328336   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 21/120
	I0703 23:19:09.329720   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 22/120
	I0703 23:19:10.331005   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 23/120
	I0703 23:19:11.332938   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 24/120
	I0703 23:19:12.334900   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 25/120
	I0703 23:19:13.336723   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 26/120
	I0703 23:19:14.338221   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 27/120
	I0703 23:19:15.339445   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 28/120
	I0703 23:19:16.340735   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 29/120
	I0703 23:19:17.342046   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 30/120
	I0703 23:19:18.343386   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 31/120
	I0703 23:19:19.344736   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 32/120
	I0703 23:19:20.346120   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 33/120
	I0703 23:19:21.347793   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 34/120
	I0703 23:19:22.349733   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 35/120
	I0703 23:19:23.351166   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 36/120
	I0703 23:19:24.352492   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 37/120
	I0703 23:19:25.353811   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 38/120
	I0703 23:19:26.355112   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 39/120
	I0703 23:19:27.357044   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 40/120
	I0703 23:19:28.358554   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 41/120
	I0703 23:19:29.359987   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 42/120
	I0703 23:19:30.361384   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 43/120
	I0703 23:19:31.362789   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 44/120
	I0703 23:19:32.364856   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 45/120
	I0703 23:19:33.366166   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 46/120
	I0703 23:19:34.367710   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 47/120
	I0703 23:19:35.368867   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 48/120
	I0703 23:19:36.370352   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 49/120
	I0703 23:19:37.372607   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 50/120
	I0703 23:19:38.374450   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 51/120
	I0703 23:19:39.375727   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 52/120
	I0703 23:19:40.377080   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 53/120
	I0703 23:19:41.378522   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 54/120
	I0703 23:19:42.380518   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 55/120
	I0703 23:19:43.382325   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 56/120
	I0703 23:19:44.384008   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 57/120
	I0703 23:19:45.385339   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 58/120
	I0703 23:19:46.386892   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 59/120
	I0703 23:19:47.388841   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 60/120
	I0703 23:19:48.390224   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 61/120
	I0703 23:19:49.391430   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 62/120
	I0703 23:19:50.392924   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 63/120
	I0703 23:19:51.394932   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 64/120
	I0703 23:19:52.396769   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 65/120
	I0703 23:19:53.398288   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 66/120
	I0703 23:19:54.400372   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 67/120
	I0703 23:19:55.402353   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 68/120
	I0703 23:19:56.404942   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 69/120
	I0703 23:19:57.407060   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 70/120
	I0703 23:19:58.408487   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 71/120
	I0703 23:19:59.410326   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 72/120
	I0703 23:20:00.411790   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 73/120
	I0703 23:20:01.413024   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 74/120
	I0703 23:20:02.415018   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 75/120
	I0703 23:20:03.416453   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 76/120
	I0703 23:20:04.417827   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 77/120
	I0703 23:20:05.419260   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 78/120
	I0703 23:20:06.420714   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 79/120
	I0703 23:20:07.422896   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 80/120
	I0703 23:20:08.424399   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 81/120
	I0703 23:20:09.425677   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 82/120
	I0703 23:20:10.427133   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 83/120
	I0703 23:20:11.428703   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 84/120
	I0703 23:20:12.430738   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 85/120
	I0703 23:20:13.432248   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 86/120
	I0703 23:20:14.434359   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 87/120
	I0703 23:20:15.435865   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 88/120
	I0703 23:20:16.437183   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 89/120
	I0703 23:20:17.439500   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 90/120
	I0703 23:20:18.441046   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 91/120
	I0703 23:20:19.442500   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 92/120
	I0703 23:20:20.444037   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 93/120
	I0703 23:20:21.446385   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 94/120
	I0703 23:20:22.448482   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 95/120
	I0703 23:20:23.450391   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 96/120
	I0703 23:20:24.451696   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 97/120
	I0703 23:20:25.453068   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 98/120
	I0703 23:20:26.454346   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 99/120
	I0703 23:20:27.456561   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 100/120
	I0703 23:20:28.458329   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 101/120
	I0703 23:20:29.459646   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 102/120
	I0703 23:20:30.460834   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 103/120
	I0703 23:20:31.462169   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 104/120
	I0703 23:20:32.463929   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 105/120
	I0703 23:20:33.465460   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 106/120
	I0703 23:20:34.466972   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 107/120
	I0703 23:20:35.468461   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 108/120
	I0703 23:20:36.470470   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 109/120
	I0703 23:20:37.472173   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 110/120
	I0703 23:20:38.474358   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 111/120
	I0703 23:20:39.475570   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 112/120
	I0703 23:20:40.477262   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 113/120
	I0703 23:20:41.478557   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 114/120
	I0703 23:20:42.480053   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 115/120
	I0703 23:20:43.482524   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 116/120
	I0703 23:20:44.484311   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 117/120
	I0703 23:20:45.486461   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 118/120
	I0703 23:20:46.487712   35179 main.go:141] libmachine: (ha-856893-m04) Waiting for machine to stop 119/120
	I0703 23:20:47.488770   35179 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0703 23:20:47.488847   35179 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0703 23:20:47.490882   35179 out.go:177] 
	W0703 23:20:47.492346   35179 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0703 23:20:47.492364   35179 out.go:239] * 
	* 
	W0703 23:20:47.494632   35179 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0703 23:20:47.495887   35179 out.go:177] 

                                                
                                                
** /stderr **
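The stderr above shows the stop path polling the node's state once per second for 120 attempts and never seeing it leave "Running", which is what triggers the GUEST_STOP_TIMEOUT exit. A minimal Go sketch of that wait-and-give-up pattern, illustrative only and not minikube's actual implementation (the getState callback, attempt count, and interval are placeholders for the example):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForStop polls getState until the machine leaves the "Running" state
	// or the attempt budget is exhausted; getState stands in for a driver call
	// such as libmachine's GetState.
	func waitForStop(getState func() (string, error), attempts int, interval time.Duration) error {
		for i := 0; i < attempts; i++ {
			if st, err := getState(); err == nil && st != "Running" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(interval)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// A guest that never stops reproduces the timeout seen above; a short
		// interval keeps the demo fast.
		neverStops := func() (string, error) { return "Running", nil }
		if err := waitForStop(neverStops, 120, 10*time.Millisecond); err != nil {
			fmt.Println("stop err:", err)
		}
	}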
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-856893 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Done: out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr: (18.994858608s)
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr": 
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr": 
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr": 
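When a node refuses to stop like this, the usual next step outside the test harness is to ask libvirt directly whether the guest is wedged. The commands below are a manual diagnostic sketch, assuming the default qemu:///system connection and the domain name reported in the log (ha-856893-m04); the test itself does not run them:

	# list all libvirt domains and their states
	virsh list --all
	# query the stuck node's state directly
	virsh domstate ha-856893-m04
	# ask the guest to shut down cleanly; if it stays running, force it off
	virsh shutdown ha-856893-m04
	virsh destroy ha-856893-m04

virsh destroy only powers the domain off and does not remove its disk, so a later "minikube start" or "minikube delete -p ha-856893" can still reclaim the machine.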
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-856893 -n ha-856893
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-856893 logs -n 25: (1.860395356s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-856893 ssh -n ha-856893-m02 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m03_ha-856893-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m03:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04:/home/docker/cp-test_ha-856893-m03_ha-856893-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893-m04 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m03_ha-856893-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-856893 cp testdata/cp-test.txt                                               | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile600531838/001/cp-test_ha-856893-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893:/home/docker/cp-test_ha-856893-m04_ha-856893.txt                      |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893 sudo cat                                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m04_ha-856893.txt                                |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m02:/home/docker/cp-test_ha-856893-m04_ha-856893-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893-m02 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m04_ha-856893-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m03:/home/docker/cp-test_ha-856893-m04_ha-856893-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n                                                                | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | ha-856893-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-856893 ssh -n ha-856893-m03 sudo cat                                         | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC | 03 Jul 24 23:09 UTC |
	|         | /home/docker/cp-test_ha-856893-m04_ha-856893-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-856893 node stop m02 -v=7                                                    | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:09 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-856893 node start m02 -v=7                                                   | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:12 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-856893 -v=7                                                          | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:12 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-856893 -v=7                                                               | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:12 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-856893 --wait=true -v=7                                                   | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:14 UTC | 03 Jul 24 23:18 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-856893                                                               | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:18 UTC |                     |
	| node    | ha-856893 node delete m03 -v=7                                                  | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:18 UTC | 03 Jul 24 23:18 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-856893 stop -v=7                                                             | ha-856893 | jenkins | v1.33.1 | 03 Jul 24 23:18 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 23:14:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 23:14:15.505195   32963 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:14:15.505421   32963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:14:15.505434   32963 out.go:304] Setting ErrFile to fd 2...
	I0703 23:14:15.505438   32963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:14:15.505622   32963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 23:14:15.506160   32963 out.go:298] Setting JSON to false
	I0703 23:14:15.507032   32963 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3395,"bootTime":1720045060,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:14:15.507088   32963 start.go:139] virtualization: kvm guest
	I0703 23:14:15.509480   32963 out.go:177] * [ha-856893] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 23:14:15.510960   32963 notify.go:220] Checking for updates...
	I0703 23:14:15.510984   32963 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 23:14:15.512418   32963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:14:15.513803   32963 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:14:15.515044   32963 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:14:15.516254   32963 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 23:14:15.517426   32963 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 23:14:15.518892   32963 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:14:15.518982   32963 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:14:15.519370   32963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:14:15.519418   32963 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:14:15.534605   32963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34409
	I0703 23:14:15.535077   32963 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:14:15.535690   32963 main.go:141] libmachine: Using API Version  1
	I0703 23:14:15.535712   32963 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:14:15.536160   32963 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:14:15.536353   32963 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:14:15.571249   32963 out.go:177] * Using the kvm2 driver based on existing profile
	I0703 23:14:15.572635   32963 start.go:297] selected driver: kvm2
	I0703 23:14:15.572648   32963 start.go:901] validating driver "kvm2" against &{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.195 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:14:15.572787   32963 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 23:14:15.573100   32963 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:14:15.573166   32963 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 23:14:15.589250   32963 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 23:14:15.589927   32963 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 23:14:15.589988   32963 cni.go:84] Creating CNI manager for ""
	I0703 23:14:15.589999   32963 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0703 23:14:15.590055   32963 start.go:340] cluster config:
	{Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.195 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-ti
ller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:14:15.590162   32963 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:14:15.591898   32963 out.go:177] * Starting "ha-856893" primary control-plane node in "ha-856893" cluster
	I0703 23:14:15.593080   32963 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:14:15.593114   32963 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0703 23:14:15.593120   32963 cache.go:56] Caching tarball of preloaded images
	I0703 23:14:15.593190   32963 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:14:15.593200   32963 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 23:14:15.593322   32963 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/config.json ...
	I0703 23:14:15.593515   32963 start.go:360] acquireMachinesLock for ha-856893: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 23:14:15.593553   32963 start.go:364] duration metric: took 22.508µs to acquireMachinesLock for "ha-856893"
	I0703 23:14:15.593566   32963 start.go:96] Skipping create...Using existing machine configuration
	I0703 23:14:15.593579   32963 fix.go:54] fixHost starting: 
	I0703 23:14:15.593852   32963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:14:15.593879   32963 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:14:15.608280   32963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38373
	I0703 23:14:15.608707   32963 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:14:15.609207   32963 main.go:141] libmachine: Using API Version  1
	I0703 23:14:15.609225   32963 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:14:15.609519   32963 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:14:15.609676   32963 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:14:15.609831   32963 main.go:141] libmachine: (ha-856893) Calling .GetState
	I0703 23:14:15.611553   32963 fix.go:112] recreateIfNeeded on ha-856893: state=Running err=<nil>
	W0703 23:14:15.611576   32963 fix.go:138] unexpected machine state, will restart: <nil>
	I0703 23:14:15.614314   32963 out.go:177] * Updating the running kvm2 "ha-856893" VM ...
	I0703 23:14:15.615766   32963 machine.go:94] provisionDockerMachine start ...
	I0703 23:14:15.615791   32963 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:14:15.616099   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:14:15.618632   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.619043   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:14:15.619076   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.619251   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:14:15.619449   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:14:15.619627   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:14:15.619809   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:14:15.619994   32963 main.go:141] libmachine: Using SSH client type: native
	I0703 23:14:15.620209   32963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:14:15.620225   32963 main.go:141] libmachine: About to run SSH command:
	hostname
	I0703 23:14:15.733758   32963 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-856893
	
	I0703 23:14:15.733784   32963 main.go:141] libmachine: (ha-856893) Calling .GetMachineName
	I0703 23:14:15.734043   32963 buildroot.go:166] provisioning hostname "ha-856893"
	I0703 23:14:15.734067   32963 main.go:141] libmachine: (ha-856893) Calling .GetMachineName
	I0703 23:14:15.734233   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:14:15.736657   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.736959   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:14:15.736984   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.737089   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:14:15.737252   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:14:15.737428   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:14:15.737582   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:14:15.737727   32963 main.go:141] libmachine: Using SSH client type: native
	I0703 23:14:15.737883   32963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:14:15.737894   32963 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-856893 && echo "ha-856893" | sudo tee /etc/hostname
	I0703 23:14:15.864753   32963 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-856893
	
	I0703 23:14:15.864783   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:14:15.867338   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.867809   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:14:15.867842   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.868001   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:14:15.868182   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:14:15.868354   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:14:15.868514   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:14:15.868666   32963 main.go:141] libmachine: Using SSH client type: native
	I0703 23:14:15.868836   32963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:14:15.868858   32963 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-856893' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-856893/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-856893' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 23:14:15.977662   32963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:14:15.977689   32963 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0703 23:14:15.977720   32963 buildroot.go:174] setting up certificates
	I0703 23:14:15.977730   32963 provision.go:84] configureAuth start
	I0703 23:14:15.977737   32963 main.go:141] libmachine: (ha-856893) Calling .GetMachineName
	I0703 23:14:15.977994   32963 main.go:141] libmachine: (ha-856893) Calling .GetIP
	I0703 23:14:15.980883   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.981226   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:14:15.981255   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.981411   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:14:15.983677   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.984003   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:14:15.984037   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:15.984185   32963 provision.go:143] copyHostCerts
	I0703 23:14:15.984220   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:14:15.984281   32963 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0703 23:14:15.984297   32963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:14:15.984395   32963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0703 23:14:15.984483   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:14:15.984508   32963 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0703 23:14:15.984514   32963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:14:15.984550   32963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0703 23:14:15.984609   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:14:15.984631   32963 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0703 23:14:15.984639   32963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:14:15.984676   32963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0703 23:14:15.984740   32963 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.ha-856893 san=[127.0.0.1 192.168.39.172 ha-856893 localhost minikube]
	I0703 23:14:16.058974   32963 provision.go:177] copyRemoteCerts
	I0703 23:14:16.059025   32963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:14:16.059044   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:14:16.061936   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:16.062334   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:14:16.062362   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:16.062538   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:14:16.062759   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:14:16.062972   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:14:16.063133   32963 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:14:16.147890   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0703 23:14:16.147964   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0703 23:14:16.176035   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0703 23:14:16.176091   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0703 23:14:16.204703   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0703 23:14:16.204759   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0703 23:14:16.237017   32963 provision.go:87] duration metric: took 259.276547ms to configureAuth
	I0703 23:14:16.237049   32963 buildroot.go:189] setting minikube options for container-runtime
	I0703 23:14:16.237323   32963 config.go:182] Loaded profile config "ha-856893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:14:16.237402   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:14:16.240332   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:16.240759   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:14:16.240780   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:14:16.240978   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:14:16.241153   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:14:16.241301   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:14:16.241425   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:14:16.241588   32963 main.go:141] libmachine: Using SSH client type: native
	I0703 23:14:16.241752   32963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:14:16.241776   32963 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0703 23:15:47.221094   32963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0703 23:15:47.221121   32963 machine.go:97] duration metric: took 1m31.605338399s to provisionDockerMachine
	I0703 23:15:47.221135   32963 start.go:293] postStartSetup for "ha-856893" (driver="kvm2")
	I0703 23:15:47.221147   32963 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 23:15:47.221166   32963 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:15:47.221445   32963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 23:15:47.221468   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:15:47.224528   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.224968   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:15:47.224997   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.225143   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:15:47.225327   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:15:47.225455   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:15:47.225594   32963 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:15:47.312057   32963 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 23:15:47.316344   32963 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 23:15:47.316371   32963 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0703 23:15:47.316443   32963 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0703 23:15:47.316521   32963 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0703 23:15:47.316532   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /etc/ssl/certs/165742.pem
	I0703 23:15:47.316628   32963 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0703 23:15:47.328708   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:15:47.355915   32963 start.go:296] duration metric: took 134.766345ms for postStartSetup
	I0703 23:15:47.355954   32963 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:15:47.356294   32963 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0703 23:15:47.356342   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:15:47.358969   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.359525   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:15:47.359563   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.359717   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:15:47.359919   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:15:47.360109   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:15:47.360262   32963 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	W0703 23:15:47.443227   32963 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0703 23:15:47.443260   32963 fix.go:56] duration metric: took 1m31.849677841s for fixHost
	I0703 23:15:47.443284   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:15:47.446292   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.446743   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:15:47.446772   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.446939   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:15:47.447152   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:15:47.447322   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:15:47.447463   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:15:47.447620   32963 main.go:141] libmachine: Using SSH client type: native
	I0703 23:15:47.447817   32963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0703 23:15:47.447830   32963 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0703 23:15:47.556879   32963 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720048547.528498017
	
	I0703 23:15:47.556904   32963 fix.go:216] guest clock: 1720048547.528498017
	I0703 23:15:47.556910   32963 fix.go:229] Guest: 2024-07-03 23:15:47.528498017 +0000 UTC Remote: 2024-07-03 23:15:47.443267292 +0000 UTC m=+91.974175532 (delta=85.230725ms)
	I0703 23:15:47.556939   32963 fix.go:200] guest clock delta is within tolerance: 85.230725ms
	I0703 23:15:47.556944   32963 start.go:83] releasing machines lock for "ha-856893", held for 1m31.963382331s
	I0703 23:15:47.556970   32963 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:15:47.557225   32963 main.go:141] libmachine: (ha-856893) Calling .GetIP
	I0703 23:15:47.559537   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.559898   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:15:47.559930   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.560044   32963 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:15:47.560578   32963 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:15:47.560755   32963 main.go:141] libmachine: (ha-856893) Calling .DriverName
	I0703 23:15:47.560847   32963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 23:15:47.560887   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:15:47.561030   32963 ssh_runner.go:195] Run: cat /version.json
	I0703 23:15:47.561053   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHHostname
	I0703 23:15:47.563290   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.563566   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:15:47.563588   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.563802   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:15:47.563817   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.563986   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:15:47.564128   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:15:47.564137   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:15:47.564162   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:47.564267   32963 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:15:47.564325   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHPort
	I0703 23:15:47.564460   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHKeyPath
	I0703 23:15:47.564606   32963 main.go:141] libmachine: (ha-856893) Calling .GetSSHUsername
	I0703 23:15:47.564737   32963 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/ha-856893/id_rsa Username:docker}
	I0703 23:15:47.673980   32963 ssh_runner.go:195] Run: systemctl --version
	I0703 23:15:47.681930   32963 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0703 23:15:47.848558   32963 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0703 23:15:47.856647   32963 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 23:15:47.856715   32963 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 23:15:47.867396   32963 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0703 23:15:47.867424   32963 start.go:494] detecting cgroup driver to use...
	I0703 23:15:47.867491   32963 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 23:15:47.886645   32963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 23:15:47.901863   32963 docker.go:217] disabling cri-docker service (if available) ...
	I0703 23:15:47.901937   32963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0703 23:15:47.917569   32963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0703 23:15:47.932869   32963 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0703 23:15:48.094073   32963 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0703 23:15:48.245795   32963 docker.go:233] disabling docker service ...
	I0703 23:15:48.245857   32963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0703 23:15:48.265791   32963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0703 23:15:48.281242   32963 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0703 23:15:48.443062   32963 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0703 23:15:48.603249   32963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0703 23:15:48.621318   32963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 23:15:48.641720   32963 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0703 23:15:48.641783   32963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:15:48.653742   32963 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0703 23:15:48.653810   32963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:15:48.665440   32963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:15:48.677089   32963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:15:48.689170   32963 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 23:15:48.701318   32963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:15:48.713020   32963 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:15:48.725564   32963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:15:48.737500   32963 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 23:15:48.748546   32963 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0703 23:15:48.759493   32963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:15:48.907542   32963 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0703 23:15:49.193834   32963 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0703 23:15:49.193905   32963 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0703 23:15:49.200062   32963 start.go:562] Will wait 60s for crictl version
	I0703 23:15:49.200122   32963 ssh_runner.go:195] Run: which crictl
	I0703 23:15:49.207527   32963 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 23:15:49.251943   32963 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0703 23:15:49.252024   32963 ssh_runner.go:195] Run: crio --version
	I0703 23:15:49.285970   32963 ssh_runner.go:195] Run: crio --version
	I0703 23:15:49.318481   32963 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0703 23:15:49.320058   32963 main.go:141] libmachine: (ha-856893) Calling .GetIP
	I0703 23:15:49.322667   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:49.322996   32963 main.go:141] libmachine: (ha-856893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:43:23", ip: ""} in network mk-ha-856893: {Iface:virbr1 ExpiryTime:2024-07-04 00:05:03 +0000 UTC Type:0 Mac:52:54:00:f8:43:23 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-856893 Clientid:01:52:54:00:f8:43:23}
	I0703 23:15:49.323020   32963 main.go:141] libmachine: (ha-856893) DBG | domain ha-856893 has defined IP address 192.168.39.172 and MAC address 52:54:00:f8:43:23 in network mk-ha-856893
	I0703 23:15:49.323260   32963 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0703 23:15:49.328649   32963 kubeadm.go:877] updating cluster {Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.195 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0703 23:15:49.328834   32963 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:15:49.328901   32963 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:15:49.375491   32963 crio.go:514] all images are preloaded for cri-o runtime.
	I0703 23:15:49.375518   32963 crio.go:433] Images already preloaded, skipping extraction
	I0703 23:15:49.375572   32963 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:15:49.417464   32963 crio.go:514] all images are preloaded for cri-o runtime.
	I0703 23:15:49.417488   32963 cache_images.go:84] Images are preloaded, skipping loading
	I0703 23:15:49.417499   32963 kubeadm.go:928] updating node { 192.168.39.172 8443 v1.30.2 crio true true} ...
	I0703 23:15:49.417623   32963 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-856893 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0703 23:15:49.417715   32963 ssh_runner.go:195] Run: crio config
	I0703 23:15:49.468201   32963 cni.go:84] Creating CNI manager for ""
	I0703 23:15:49.468228   32963 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0703 23:15:49.468239   32963 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0703 23:15:49.468271   32963 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-856893 NodeName:ha-856893 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0703 23:15:49.468440   32963 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-856893"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0703 23:15:49.468460   32963 kube-vip.go:115] generating kube-vip config ...
	I0703 23:15:49.468520   32963 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0703 23:15:49.480861   32963 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0703 23:15:49.480988   32963 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0703 23:15:49.481057   32963 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0703 23:15:49.491415   32963 binaries.go:44] Found k8s binaries, skipping transfer
	I0703 23:15:49.491482   32963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0703 23:15:49.501642   32963 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0703 23:15:49.519059   32963 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 23:15:49.537068   32963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0703 23:15:49.554569   32963 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0703 23:15:49.571468   32963 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0703 23:15:49.576348   32963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:15:49.729118   32963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:15:49.744081   32963 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893 for IP: 192.168.39.172
	I0703 23:15:49.744100   32963 certs.go:194] generating shared ca certs ...
	I0703 23:15:49.744139   32963 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:15:49.744296   32963 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0703 23:15:49.744349   32963 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0703 23:15:49.744359   32963 certs.go:256] generating profile certs ...
	I0703 23:15:49.744470   32963 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/client.key
	I0703 23:15:49.744513   32963 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.40fbd344
	I0703 23:15:49.744532   32963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.40fbd344 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.172 192.168.39.157 192.168.39.186 192.168.39.254]
	I0703 23:15:49.956081   32963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.40fbd344 ...
	I0703 23:15:49.956111   32963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.40fbd344: {Name:mk41a659d1acf59169903bff6a6d6448b514fd9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:15:49.956319   32963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.40fbd344 ...
	I0703 23:15:49.956334   32963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.40fbd344: {Name:mk177201bb8bead0456f9a899371f0d4d70690e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:15:49.956428   32963 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt.40fbd344 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt
	I0703 23:15:49.956598   32963 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key.40fbd344 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key
	I0703 23:15:49.956768   32963 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key
	I0703 23:15:49.956787   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0703 23:15:49.956805   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0703 23:15:49.956823   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0703 23:15:49.956840   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0703 23:15:49.956856   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0703 23:15:49.956870   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0703 23:15:49.956888   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0703 23:15:49.956905   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0703 23:15:49.956969   32963 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0703 23:15:49.957008   32963 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0703 23:15:49.957022   32963 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0703 23:15:49.957058   32963 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0703 23:15:49.957088   32963 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0703 23:15:49.957120   32963 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0703 23:15:49.957174   32963 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:15:49.957208   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:15:49.957226   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem -> /usr/share/ca-certificates/16574.pem
	I0703 23:15:49.957244   32963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /usr/share/ca-certificates/165742.pem
	I0703 23:15:49.957820   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 23:15:49.985686   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 23:15:50.011951   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 23:15:50.043394   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0703 23:15:50.106401   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0703 23:15:50.171093   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0703 23:15:50.210424   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 23:15:50.265483   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/ha-856893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0703 23:15:50.295605   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 23:15:50.349553   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0703 23:15:50.402425   32963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0703 23:15:50.445013   32963 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0703 23:15:50.473358   32963 ssh_runner.go:195] Run: openssl version
	I0703 23:15:50.486131   32963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 23:15:50.513267   32963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:15:50.524170   32963 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:15:50.524235   32963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:15:50.532553   32963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0703 23:15:50.543275   32963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0703 23:15:50.555917   32963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0703 23:15:50.560719   32963 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0703 23:15:50.560765   32963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0703 23:15:50.566633   32963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0703 23:15:50.576894   32963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0703 23:15:50.598154   32963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0703 23:15:50.602803   32963 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0703 23:15:50.602844   32963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0703 23:15:50.608467   32963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0703 23:15:50.618539   32963 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 23:15:50.623271   32963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0703 23:15:50.629535   32963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0703 23:15:50.635627   32963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0703 23:15:50.641352   32963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0703 23:15:50.647135   32963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0703 23:15:50.653249   32963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0703 23:15:50.659566   32963 kubeadm.go:391] StartCluster: {Name:ha-856893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-856893 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.195 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:15:50.659678   32963 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0703 23:15:50.659746   32963 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0703 23:15:50.706641   32963 cri.go:89] found id: "173dd7f93a7021f318f479bfa427c3e34c0b7f575d2ce5b0c02ab23c1356d221"
	I0703 23:15:50.706660   32963 cri.go:89] found id: "072278e64e9ff93bc5bf83fdd6dc644e6b9a08398aa287f47dec28d0ba3a75dc"
	I0703 23:15:50.706664   32963 cri.go:89] found id: "9fb7cca0e0f0d80a9a145b4cc7d5e4e90af46d651bc0725c6186be8ec737120f"
	I0703 23:15:50.706667   32963 cri.go:89] found id: "a67095c5f0151deec8b4babb63aa353888c6c2f268e462ea236de00624bce508"
	I0703 23:15:50.706670   32963 cri.go:89] found id: "21272d5241be2eae198709be303566744f455806f0ebffba408cf58e6707cefd"
	I0703 23:15:50.706673   32963 cri.go:89] found id: "856a70d4253722d9b95f44209d4c629ef26d3e9c2b15bd4b1b4543050f9d1cf0"
	I0703 23:15:50.706675   32963 cri.go:89] found id: "f4741a1d62d7eb84ae4d4c1ce086bf13f1d54396ebf9a27932c6e784027fc371"
	I0703 23:15:50.706678   32963 cri.go:89] found id: "4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54"
	I0703 23:15:50.706680   32963 cri.go:89] found id: "ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691"
	I0703 23:15:50.706685   32963 cri.go:89] found id: "aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599"
	I0703 23:15:50.706687   32963 cri.go:89] found id: "4c81f0becbc3b6b33b1077d73eb4737ab22fbf870d53cf57b8cbfb88c2b7389e"
	I0703 23:15:50.706689   32963 cri.go:89] found id: "227a9a4176778afb3428cff8333cb0265f741d600f6fab7c86b069a27619893e"
	I0703 23:15:50.706692   32963 cri.go:89] found id: "8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0"
	I0703 23:15:50.706695   32963 cri.go:89] found id: "4c379ddaf9a499d07429805170dc12cb5a1dc67dcb956fe90f1fa68d12530112"
	I0703 23:15:50.706700   32963 cri.go:89] found id: "194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb"
	I0703 23:15:50.706704   32963 cri.go:89] found id: ""
	I0703 23:15:50.706751   32963 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.153322954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720048867153298715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41b7a93e-4608-48ef-b905-b3b9acb37602 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.154210621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4085832c-547c-432f-aec4-c9aa399ea6a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.154289930Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4085832c-547c-432f-aec4-c9aa399ea6a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.154839392Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1be8f74847b6ca4e08bf065013d7bbb19b70bd700378541102b8759dbc247721,PodSandboxId:a0b2a60d87f5b26fc7a79efde9ce4bdbf0230a85cea51f609d062f37f6b683ba,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720048632745019598,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:627192b28b0d135dd2f950c36f2c2e7be3ab401e90de043d0be58c7dd89d6f9d,PodSandboxId:05ac5d24180a3ac3668b07762edc66a8e7c6b1560e3ff63160ef72ada022ac64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720048616744488429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d4d662ed3e9a4b08a5956a3a96261534cd121ffa15a7cab0ea8a6848e762a48,PodSandboxId:0e28d529014bf11447aaefef72a5779d28bde554f4e49ffbe6eaa0d7a86b4b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720048595746702593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87914b1cd68757fa6aaf994adb433fc0bb354e69bbe1fa2f82078ba32780c470,PodSandboxId:4c44635e56bb3bbfe03c94be304f02bd3d9dfbc69cdaa61651674003dd8cac06,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720048592746381823,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abb9e5fc1177e6c97e829fb1bbd7fae408d6546c2a370e48c13ff7de49de0d5,PodSandboxId:3cea8a5014a038c6f9bc66df77c85b08f8bb4b7be55b8e83a63494b3cab53969,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720048588469543652,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernetes.container.hash: c94081f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d081acbf6adacad64ff76eb885b8d94687b89ad150784e2082b529d5c1dbb68,PodSandboxId:0d1c5fedc17c79c5e067b89a6bec81a3a88639aedacca352b116e02a7d134277,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720048569260459952,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91a46ca4520a8b4010e5767e6b78e3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:992e4d3007ac016a3cd12a9c3eb6b83b01372ce8e63c6417d7165881667e6514,PodSandboxId:41bc64df47de6d125aacb9f38fcd072379a3bdd7596e70d22cb67597e6123b82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720048555168524527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:e4fc2ed85817cd75513ff0f1df3a149088591f33d36e5db87cb10583e72ffe24,PodSandboxId:05ac5d24180a3ac3668b07762edc66a8e7c6b1560e3ff63160ef72ada022ac64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720048554914201814,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f7a62bd
d1c0dc302a512bdf5f0b5e614f79e27d3f9ab960b64caa22c065bf44,PodSandboxId:620e31e0276103fc78f7e00252c0431863538298863bd888ee31ffae3ed7284c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720048554877088966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da7c56e33b648295a6b9d0c247d42a7702afe66d13bc1b29feac239268ac3cd,PodSandboxId:0e28d529014bf11447aaefef72a5779d28bde554f4e49ffbe6eaa0d7a86b4b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720048554776389005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6565008c19a06a217526afd54d631c504d22707bac4b76078dae4c661fd6dba5,PodSandboxId:6192fa5bbb48f1a2c9eb72d53756ff6349905fdc9179ba2994180f9786881427,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720048554639140110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747282699f82e41871309af2d2b108b8c1fe8fc99d8fd905bf18d76b80d43f2a,PodSandboxId:f4d9c612a69e512280e76149e0796be1d77e02f348b8f05988cb291c3ba4b66e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720048554771834414,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96edf619f58bbc96c41a9ffef8462ac9205d3b92cdfc23aeaa0c8d46758b7a55,PodSandboxId:4c44635e56bb3bbfe03c94be304f02bd3d9dfbc69cdaa61651674003dd8cac06,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720048554682075988,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173dd7f93a7021f318f479bfa427c3e34c0b7f575d2ce5b0c02ab23c1356d221,PodSandboxId:a0b2a60d87f5b26fc7a79efde9ce4bdbf0230a85cea51f609d062f37f6b683ba,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720048550431469233,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072278e64e9ff93bc5bf83fdd6dc644e6b9a08398aa287f47dec28d0ba3a75dc,PodSandboxId:f346b1aac5151feacba1181147ee71021e95e27a89bd0738e3051c1324c2c8cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720048550318921914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5f2f09a864e4e5d46640be63d6a9d8f2d281cd3cce2631529f1fa6c5c19ead,PodSandboxId:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720048107151295802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernete
s.container.hash: c94081f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54,PodSandboxId:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720047973272193548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691,PodSandboxId:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720047973246603530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599,PodSandboxId:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b
34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720047942480299502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0,PodSandboxId:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a4
7425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720047921587819180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,PodSandboxId:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1720047921542561527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4085832c-547c-432f-aec4-c9aa399ea6a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.207693291Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f085804-b5a4-4647-8dfa-67d87d327cda name=/runtime.v1.RuntimeService/Version
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.207856620Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f085804-b5a4-4647-8dfa-67d87d327cda name=/runtime.v1.RuntimeService/Version
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.209262474Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74033fa8-8f0c-4af3-bd2e-522ef1ab0d26 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.209816630Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720048867209717674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74033fa8-8f0c-4af3-bd2e-522ef1ab0d26 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.210487163Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54aa232e-54ff-441c-abdb-869942d56efd name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.210567132Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54aa232e-54ff-441c-abdb-869942d56efd name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.211157538Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1be8f74847b6ca4e08bf065013d7bbb19b70bd700378541102b8759dbc247721,PodSandboxId:a0b2a60d87f5b26fc7a79efde9ce4bdbf0230a85cea51f609d062f37f6b683ba,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720048632745019598,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:627192b28b0d135dd2f950c36f2c2e7be3ab401e90de043d0be58c7dd89d6f9d,PodSandboxId:05ac5d24180a3ac3668b07762edc66a8e7c6b1560e3ff63160ef72ada022ac64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720048616744488429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d4d662ed3e9a4b08a5956a3a96261534cd121ffa15a7cab0ea8a6848e762a48,PodSandboxId:0e28d529014bf11447aaefef72a5779d28bde554f4e49ffbe6eaa0d7a86b4b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720048595746702593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87914b1cd68757fa6aaf994adb433fc0bb354e69bbe1fa2f82078ba32780c470,PodSandboxId:4c44635e56bb3bbfe03c94be304f02bd3d9dfbc69cdaa61651674003dd8cac06,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720048592746381823,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abb9e5fc1177e6c97e829fb1bbd7fae408d6546c2a370e48c13ff7de49de0d5,PodSandboxId:3cea8a5014a038c6f9bc66df77c85b08f8bb4b7be55b8e83a63494b3cab53969,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720048588469543652,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernetes.container.hash: c94081f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d081acbf6adacad64ff76eb885b8d94687b89ad150784e2082b529d5c1dbb68,PodSandboxId:0d1c5fedc17c79c5e067b89a6bec81a3a88639aedacca352b116e02a7d134277,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720048569260459952,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91a46ca4520a8b4010e5767e6b78e3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:992e4d3007ac016a3cd12a9c3eb6b83b01372ce8e63c6417d7165881667e6514,PodSandboxId:41bc64df47de6d125aacb9f38fcd072379a3bdd7596e70d22cb67597e6123b82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720048555168524527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:e4fc2ed85817cd75513ff0f1df3a149088591f33d36e5db87cb10583e72ffe24,PodSandboxId:05ac5d24180a3ac3668b07762edc66a8e7c6b1560e3ff63160ef72ada022ac64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720048554914201814,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f7a62bd
d1c0dc302a512bdf5f0b5e614f79e27d3f9ab960b64caa22c065bf44,PodSandboxId:620e31e0276103fc78f7e00252c0431863538298863bd888ee31ffae3ed7284c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720048554877088966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da7c56e33b648295a6b9d0c247d42a7702afe66d13bc1b29feac239268ac3cd,PodSandboxId:0e28d529014bf11447aaefef72a5779d28bde554f4e49ffbe6eaa0d7a86b4b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720048554776389005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6565008c19a06a217526afd54d631c504d22707bac4b76078dae4c661fd6dba5,PodSandboxId:6192fa5bbb48f1a2c9eb72d53756ff6349905fdc9179ba2994180f9786881427,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720048554639140110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747282699f82e41871309af2d2b108b8c1fe8fc99d8fd905bf18d76b80d43f2a,PodSandboxId:f4d9c612a69e512280e76149e0796be1d77e02f348b8f05988cb291c3ba4b66e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720048554771834414,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96edf619f58bbc96c41a9ffef8462ac9205d3b92cdfc23aeaa0c8d46758b7a55,PodSandboxId:4c44635e56bb3bbfe03c94be304f02bd3d9dfbc69cdaa61651674003dd8cac06,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720048554682075988,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173dd7f93a7021f318f479bfa427c3e34c0b7f575d2ce5b0c02ab23c1356d221,PodSandboxId:a0b2a60d87f5b26fc7a79efde9ce4bdbf0230a85cea51f609d062f37f6b683ba,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720048550431469233,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072278e64e9ff93bc5bf83fdd6dc644e6b9a08398aa287f47dec28d0ba3a75dc,PodSandboxId:f346b1aac5151feacba1181147ee71021e95e27a89bd0738e3051c1324c2c8cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720048550318921914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5f2f09a864e4e5d46640be63d6a9d8f2d281cd3cce2631529f1fa6c5c19ead,PodSandboxId:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720048107151295802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernete
s.container.hash: c94081f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54,PodSandboxId:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720047973272193548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691,PodSandboxId:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720047973246603530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599,PodSandboxId:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b
34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720047942480299502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0,PodSandboxId:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a4
7425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720047921587819180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,PodSandboxId:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1720047921542561527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54aa232e-54ff-441c-abdb-869942d56efd name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.259183594Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8cbc5264-19c1-43a9-9e6d-49c45f1a1fef name=/runtime.v1.RuntimeService/Version
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.259292954Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8cbc5264-19c1-43a9-9e6d-49c45f1a1fef name=/runtime.v1.RuntimeService/Version
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.260454000Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a986916f-2978-4a90-9c77-cb0799a7d5ab name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.260979108Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720048867260952983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a986916f-2978-4a90-9c77-cb0799a7d5ab name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.261650769Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb6f5c0a-7920-4b4f-a3d1-4c647a6d7af8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.261715324Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb6f5c0a-7920-4b4f-a3d1-4c647a6d7af8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.262339277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1be8f74847b6ca4e08bf065013d7bbb19b70bd700378541102b8759dbc247721,PodSandboxId:a0b2a60d87f5b26fc7a79efde9ce4bdbf0230a85cea51f609d062f37f6b683ba,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720048632745019598,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:627192b28b0d135dd2f950c36f2c2e7be3ab401e90de043d0be58c7dd89d6f9d,PodSandboxId:05ac5d24180a3ac3668b07762edc66a8e7c6b1560e3ff63160ef72ada022ac64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720048616744488429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d4d662ed3e9a4b08a5956a3a96261534cd121ffa15a7cab0ea8a6848e762a48,PodSandboxId:0e28d529014bf11447aaefef72a5779d28bde554f4e49ffbe6eaa0d7a86b4b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720048595746702593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87914b1cd68757fa6aaf994adb433fc0bb354e69bbe1fa2f82078ba32780c470,PodSandboxId:4c44635e56bb3bbfe03c94be304f02bd3d9dfbc69cdaa61651674003dd8cac06,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720048592746381823,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abb9e5fc1177e6c97e829fb1bbd7fae408d6546c2a370e48c13ff7de49de0d5,PodSandboxId:3cea8a5014a038c6f9bc66df77c85b08f8bb4b7be55b8e83a63494b3cab53969,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720048588469543652,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernetes.container.hash: c94081f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d081acbf6adacad64ff76eb885b8d94687b89ad150784e2082b529d5c1dbb68,PodSandboxId:0d1c5fedc17c79c5e067b89a6bec81a3a88639aedacca352b116e02a7d134277,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720048569260459952,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91a46ca4520a8b4010e5767e6b78e3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:992e4d3007ac016a3cd12a9c3eb6b83b01372ce8e63c6417d7165881667e6514,PodSandboxId:41bc64df47de6d125aacb9f38fcd072379a3bdd7596e70d22cb67597e6123b82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720048555168524527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:e4fc2ed85817cd75513ff0f1df3a149088591f33d36e5db87cb10583e72ffe24,PodSandboxId:05ac5d24180a3ac3668b07762edc66a8e7c6b1560e3ff63160ef72ada022ac64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720048554914201814,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f7a62bd
d1c0dc302a512bdf5f0b5e614f79e27d3f9ab960b64caa22c065bf44,PodSandboxId:620e31e0276103fc78f7e00252c0431863538298863bd888ee31ffae3ed7284c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720048554877088966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da7c56e33b648295a6b9d0c247d42a7702afe66d13bc1b29feac239268ac3cd,PodSandboxId:0e28d529014bf11447aaefef72a5779d28bde554f4e49ffbe6eaa0d7a86b4b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720048554776389005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6565008c19a06a217526afd54d631c504d22707bac4b76078dae4c661fd6dba5,PodSandboxId:6192fa5bbb48f1a2c9eb72d53756ff6349905fdc9179ba2994180f9786881427,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720048554639140110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747282699f82e41871309af2d2b108b8c1fe8fc99d8fd905bf18d76b80d43f2a,PodSandboxId:f4d9c612a69e512280e76149e0796be1d77e02f348b8f05988cb291c3ba4b66e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720048554771834414,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96edf619f58bbc96c41a9ffef8462ac9205d3b92cdfc23aeaa0c8d46758b7a55,PodSandboxId:4c44635e56bb3bbfe03c94be304f02bd3d9dfbc69cdaa61651674003dd8cac06,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720048554682075988,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173dd7f93a7021f318f479bfa427c3e34c0b7f575d2ce5b0c02ab23c1356d221,PodSandboxId:a0b2a60d87f5b26fc7a79efde9ce4bdbf0230a85cea51f609d062f37f6b683ba,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720048550431469233,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072278e64e9ff93bc5bf83fdd6dc644e6b9a08398aa287f47dec28d0ba3a75dc,PodSandboxId:f346b1aac5151feacba1181147ee71021e95e27a89bd0738e3051c1324c2c8cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720048550318921914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5f2f09a864e4e5d46640be63d6a9d8f2d281cd3cce2631529f1fa6c5c19ead,PodSandboxId:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720048107151295802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernete
s.container.hash: c94081f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54,PodSandboxId:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720047973272193548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691,PodSandboxId:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720047973246603530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599,PodSandboxId:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b
34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720047942480299502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0,PodSandboxId:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a4
7425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720047921587819180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,PodSandboxId:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1720047921542561527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb6f5c0a-7920-4b4f-a3d1-4c647a6d7af8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.315318773Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e239074b-faee-43aa-a2d2-21cd85e2fde5 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.315398484Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e239074b-faee-43aa-a2d2-21cd85e2fde5 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.316607975Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa4e9e26-2f26-4717-884f-86d69f6ee74c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.317271593Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720048867317244709,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa4e9e26-2f26-4717-884f-86d69f6ee74c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.318225920Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d284663-617f-42aa-b0c6-4840dac96c3f name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.318296456Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d284663-617f-42aa-b0c6-4840dac96c3f name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:21:07 ha-856893 crio[3898]: time="2024-07-03 23:21:07.318722287Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1be8f74847b6ca4e08bf065013d7bbb19b70bd700378541102b8759dbc247721,PodSandboxId:a0b2a60d87f5b26fc7a79efde9ce4bdbf0230a85cea51f609d062f37f6b683ba,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720048632745019598,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:627192b28b0d135dd2f950c36f2c2e7be3ab401e90de043d0be58c7dd89d6f9d,PodSandboxId:05ac5d24180a3ac3668b07762edc66a8e7c6b1560e3ff63160ef72ada022ac64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720048616744488429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d4d662ed3e9a4b08a5956a3a96261534cd121ffa15a7cab0ea8a6848e762a48,PodSandboxId:0e28d529014bf11447aaefef72a5779d28bde554f4e49ffbe6eaa0d7a86b4b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720048595746702593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87914b1cd68757fa6aaf994adb433fc0bb354e69bbe1fa2f82078ba32780c470,PodSandboxId:4c44635e56bb3bbfe03c94be304f02bd3d9dfbc69cdaa61651674003dd8cac06,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720048592746381823,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abb9e5fc1177e6c97e829fb1bbd7fae408d6546c2a370e48c13ff7de49de0d5,PodSandboxId:3cea8a5014a038c6f9bc66df77c85b08f8bb4b7be55b8e83a63494b3cab53969,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720048588469543652,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernetes.container.hash: c94081f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d081acbf6adacad64ff76eb885b8d94687b89ad150784e2082b529d5c1dbb68,PodSandboxId:0d1c5fedc17c79c5e067b89a6bec81a3a88639aedacca352b116e02a7d134277,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720048569260459952,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91a46ca4520a8b4010e5767e6b78e3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:992e4d3007ac016a3cd12a9c3eb6b83b01372ce8e63c6417d7165881667e6514,PodSandboxId:41bc64df47de6d125aacb9f38fcd072379a3bdd7596e70d22cb67597e6123b82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720048555168524527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:e4fc2ed85817cd75513ff0f1df3a149088591f33d36e5db87cb10583e72ffe24,PodSandboxId:05ac5d24180a3ac3668b07762edc66a8e7c6b1560e3ff63160ef72ada022ac64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720048554914201814,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c6a4a7-b5fb-4f6f-b65f-f3c2e4ece3b8,},Annotations:map[string]string{io.kubernetes.container.hash: eb436c14,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f7a62bd
d1c0dc302a512bdf5f0b5e614f79e27d3f9ab960b64caa22c065bf44,PodSandboxId:620e31e0276103fc78f7e00252c0431863538298863bd888ee31ffae3ed7284c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720048554877088966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da7c56e33b648295a6b9d0c247d42a7702afe66d13bc1b29feac239268ac3cd,PodSandboxId:0e28d529014bf11447aaefef72a5779d28bde554f4e49ffbe6eaa0d7a86b4b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720048554776389005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f238ffd8748e557f239482399bf89dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6565008c19a06a217526afd54d631c504d22707bac4b76078dae4c661fd6dba5,PodSandboxId:6192fa5bbb48f1a2c9eb72d53756ff6349905fdc9179ba2994180f9786881427,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720048554639140110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747282699f82e41871309af2d2b108b8c1fe8fc99d8fd905bf18d76b80d43f2a,PodSandboxId:f4d9c612a69e512280e76149e0796be1d77e02f348b8f05988cb291c3ba4b66e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720048554771834414,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96edf619f58bbc96c41a9ffef8462ac9205d3b92cdfc23aeaa0c8d46758b7a55,PodSandboxId:4c44635e56bb3bbfe03c94be304f02bd3d9dfbc69cdaa61651674003dd8cac06,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720048554682075988,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fee9f6b7b1f394539107bfaf70ec2c,},Annotations:map[string]string{io.kubernetes.container.hash: a0e763da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173dd7f93a7021f318f479bfa427c3e34c0b7f575d2ce5b0c02ab23c1356d221,PodSandboxId:a0b2a60d87f5b26fc7a79efde9ce4bdbf0230a85cea51f609d062f37f6b683ba,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720048550431469233,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h7ntk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18e6d992-2713-4399-a160-5f9196981f26,},Annotations:map[string]string{io.kubernetes.container.hash: 6d6c98bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072278e64e9ff93bc5bf83fdd6dc644e6b9a08398aa287f47dec28d0ba3a75dc,PodSandboxId:f346b1aac5151feacba1181147ee71021e95e27a89bd0738e3051c1324c2c8cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720048550318921914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5f2f09a864e4e5d46640be63d6a9d8f2d281cd3cce2631529f1fa6c5c19ead,PodSandboxId:2add57c6feb6d512f3af09ba6db55431d507392fc5941a67d046de3f5fb16947,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720048107151295802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hh5rx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e907d89-dcf0-4e2d-bf2d-812d38932e86,},Annotations:map[string]string{io.kubernete
s.container.hash: c94081f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54,PodSandboxId:52adb03e9908b6aedd8e970016768db80c238abd0e91c3401aa2a20a7b01842e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720047973272193548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n5tdf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efbbc3c-e2d5-4f13-8672-cf7524f72e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 903ec5b,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691,PodSandboxId:75824b8079291200eb2fb0fe03930e27dec63b2faabdb210910f0fb5c5fce5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720047973246603530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-pwqfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d22edf-e718-4755-b211-c8279481005e,},Annotations:map[string]string{io.kubernetes.container.hash: 175c19c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599,PodSandboxId:17315e93de095c706012aca1c08e044f3e9b408633e0cfb0e7a5117c73d1017b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b
34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720047942480299502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52zqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc16d2-e9f6-487f-a974-0fa21e4163b5,},Annotations:map[string]string{io.kubernetes.container.hash: f14d61b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0,PodSandboxId:a50d015125505af75e46e47e5841cdc1fb38d1c9294e475412ad37859f3db02b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a4
7425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720047921587819180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ac71ae3fd52dff19d913e1a274c990,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb,PodSandboxId:bbcc0c1ac639035aba223e09dde6b1ea8333747b6a348a6598ba51cac974b94c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1720047921542561527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-856893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7891a98db30710828591ae5169d05ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c40225db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d284663-617f-42aa-b0c6-4840dac96c3f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1be8f74847b6c       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      3 minutes ago       Running             kindnet-cni               3                   a0b2a60d87f5b       kindnet-h7ntk
	627192b28b0d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   05ac5d24180a3       storage-provisioner
	2d4d662ed3e9a       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      4 minutes ago       Running             kube-controller-manager   2                   0e28d529014bf       kube-controller-manager-ha-856893
	87914b1cd6875       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      4 minutes ago       Running             kube-apiserver            3                   4c44635e56bb3       kube-apiserver-ha-856893
	0abb9e5fc1177       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   3cea8a5014a03       busybox-fc5497c4f-hh5rx
	7d081acbf6ada       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   0d1c5fedc17c7       kube-vip-ha-856893
	992e4d3007ac0       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      5 minutes ago       Running             kube-proxy                1                   41bc64df47de6       kube-proxy-52zqj
	e4fc2ed85817c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   05ac5d24180a3       storage-provisioner
	8f7a62bdd1c0d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   620e31e027610       coredns-7db6d8ff4d-pwqfl
	9da7c56e33b64       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      5 minutes ago       Exited              kube-controller-manager   1                   0e28d529014bf       kube-controller-manager-ha-856893
	747282699f82e       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      5 minutes ago       Running             kube-scheduler            1                   f4d9c612a69e5       kube-scheduler-ha-856893
	96edf619f58bb       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      5 minutes ago       Exited              kube-apiserver            2                   4c44635e56bb3       kube-apiserver-ha-856893
	6565008c19a06       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   6192fa5bbb48f       etcd-ha-856893
	173dd7f93a702       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      5 minutes ago       Exited              kindnet-cni               2                   a0b2a60d87f5b       kindnet-h7ntk
	072278e64e9ff       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   f346b1aac5151       coredns-7db6d8ff4d-n5tdf
	2d5f2f09a864e       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   12 minutes ago      Exited              busybox                   0                   2add57c6feb6d       busybox-fc5497c4f-hh5rx
	4b327b3ea68a5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago      Exited              coredns                   0                   52adb03e9908b       coredns-7db6d8ff4d-n5tdf
	ebac8426f222e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago      Exited              coredns                   0                   75824b8079291       coredns-7db6d8ff4d-pwqfl
	aea86e5699e84       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      15 minutes ago      Exited              kube-proxy                0                   17315e93de095       kube-proxy-52zqj
	8ed8443e8784d       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      15 minutes ago      Exited              kube-scheduler            0                   a50d015125505       kube-scheduler-ha-856893
	194253df10dfc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      15 minutes ago      Exited              etcd                      0                   bbcc0c1ac6390       etcd-ha-856893
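	The table above is the CRI-side view of ha-856893: the Exited attempt-0/1 rows are the containers from before the node restart, and the Running rows are their replacements. A minimal sketch for reproducing this listing by hand, assuming the profile is still named ha-856893 and the VM is reachable through minikube ssh:
	
	  minikube ssh -p ha-856893 -- sudo crictl ps -a
	  minikube ssh -p ha-856893 -- sudo crictl inspect 1be8f74847b6c
	
	crictl ps -a includes exited containers, which is what lets the old and new attempts of each pod be compared side by side.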
	
	
	==> coredns [072278e64e9ff93bc5bf83fdd6dc644e6b9a08398aa287f47dec28d0ba3a75dc] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1108679960]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Jul-2024 23:16:05.034) (total time: 10001ms):
	Trace[1108679960]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (23:16:15.035)
	Trace[1108679960]: [10.001078406s] [10.001078406s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1359473294]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Jul-2024 23:16:05.230) (total time: 10001ms):
	Trace[1359473294]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (23:16:15.231)
	Trace[1359473294]: [10.001638601s] [10.001638601s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:51412->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:51412->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [4b327b3ea68a52e30d966b0deae5defedef40536fd436403b9819aa15158bc54] <==
	[INFO] 10.244.1.2:43589 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174137s
	[INFO] 10.244.1.2:49376 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106729s
	[INFO] 10.244.1.2:51691 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000271033s
	[INFO] 10.244.2.2:40310 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117383s
	[INFO] 10.244.2.2:38408 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011442s
	[INFO] 10.244.2.2:53461 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080741s
	[INFO] 10.244.0.4:60751 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020875s
	[INFO] 10.244.0.4:42746 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083559s
	[INFO] 10.244.1.2:46618 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00026488s
	[INFO] 10.244.1.2:46816 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095128s
	[INFO] 10.244.2.2:35755 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141347s
	[INFO] 10.244.2.2:37226 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000441904s
	[INFO] 10.244.2.2:56990 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000123934s
	[INFO] 10.244.0.4:33260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228783s
	[INFO] 10.244.0.4:40825 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089557s
	[INFO] 10.244.0.4:36029 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000284159s
	[INFO] 10.244.0.4:38025 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069908s
	[INFO] 10.244.1.2:33505 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000516657s
	[INFO] 10.244.1.2:51760 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106766s
	[INFO] 10.244.1.2:48924 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111713s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8f7a62bdd1c0dc302a512bdf5f0b5e614f79e27d3f9ab960b64caa22c065bf44] <==
	Trace[893896800]: [10.001675331s] [10.001675331s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:42006->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:42006->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41976->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[832467613]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Jul-2024 23:16:06.594) (total time: 11584ms):
	Trace[832467613]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41976->10.96.0.1:443: read: connection reset by peer 11584ms (23:16:18.178)
	Trace[832467613]: [11.5841334s] [11.5841334s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41976->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42000->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42000->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ebac8426f222e4bea56c2375a72cbf93a3a71429ca80db7d4f5f118a2af02691] <==
	[INFO] 10.244.1.2:38357 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235864s
	[INFO] 10.244.1.2:52654 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000207162s
	[INFO] 10.244.2.2:38149 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003994489s
	[INFO] 10.244.2.2:37323 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162805s
	[INFO] 10.244.2.2:37370 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000170597s
	[INFO] 10.244.0.4:39154 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140397s
	[INFO] 10.244.0.4:39807 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002148429s
	[INFO] 10.244.0.4:52421 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189952s
	[INFO] 10.244.0.4:32927 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001716905s
	[INFO] 10.244.0.4:37077 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064503s
	[INFO] 10.244.1.2:53622 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138056s
	[INFO] 10.244.1.2:56863 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001413025s
	[INFO] 10.244.1.2:33669 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000289179s
	[INFO] 10.244.2.2:46390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141967s
	[INFO] 10.244.0.4:47937 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126136s
	[INFO] 10.244.0.4:40258 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058689s
	[INFO] 10.244.1.2:34579 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112137s
	[INFO] 10.244.1.2:43318 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087441s
	[INFO] 10.244.2.2:44839 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000154015s
	[INFO] 10.244.1.2:49628 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158345s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
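	The coredns logs above all show the same signature: list/watch requests to the in-cluster API address 10.96.0.1:443 fail with TLS handshake timeouts, "no route to host", and "connection refused" while kube-apiserver is being restarted, and the older instances end with GOAWAY followed by SIGTERM. A hedged sketch for pulling these logs again, assuming the cluster is still up and the pod and container names are unchanged:
	
	  kubectl -n kube-system logs coredns-7db6d8ff4d-pwqfl --previous
	  minikube ssh -p ha-856893 -- sudo crictl logs 072278e64e9ff
	
	kubectl logs --previous returns the output of the prior (exited) container instance, which corresponds to the Exited coredns attempts in the status table above.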
	
	
	==> describe nodes <==
	Name:               ha-856893
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-856893
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=ha-856893
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_03T23_05_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:05:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-856893
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:21:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:16:40 +0000   Wed, 03 Jul 2024 23:05:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:16:40 +0000   Wed, 03 Jul 2024 23:05:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:16:40 +0000   Wed, 03 Jul 2024 23:05:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:16:40 +0000   Wed, 03 Jul 2024 23:06:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    ha-856893
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a26831b612bd459ca285f71afd0636da
	  System UUID:                a26831b6-12bd-459c-a285-f71afd0636da
	  Boot ID:                    60d1e076-9358-4d45-bf73-662df78ab1a7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hh5rx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-n5tdf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-pwqfl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-856893                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-h7ntk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-856893             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-856893    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-52zqj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-856893             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-856893                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m30s                  kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-856893 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-856893 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-856893 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     15m                    kubelet          Node ha-856893 status is now: NodeHasSufficientPID
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m                    kubelet          Node ha-856893 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m                    kubelet          Node ha-856893 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           15m                    node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	  Normal   NodeReady                14m                    kubelet          Node ha-856893 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	  Warning  ContainerGCFailed        5m40s (x2 over 6m40s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m22s                  node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	  Normal   RegisteredNode           4m20s                  node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
	  Normal   RegisteredNode           3m12s                  node-controller  Node ha-856893 event: Registered Node ha-856893 in Controller
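	The node detail above is the API server's view of ha-856893; the ContainerGCFailed warnings against /var/run/crio/crio.sock coincide with the CRI-O restart visible as crio[3898] in the journal earlier in this section. Assuming the cluster is still reachable with the same kubeconfig, the same output can be regenerated with:
	
	  kubectl describe node ha-856893
	  kubectl get node ha-856893 -o yaml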
	
	
	Name:               ha-856893-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-856893-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=ha-856893
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_03T23_06_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:06:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-856893-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:21:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:19:29 +0000   Wed, 03 Jul 2024 23:19:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:19:29 +0000   Wed, 03 Jul 2024 23:19:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:19:29 +0000   Wed, 03 Jul 2024 23:19:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:19:29 +0000   Wed, 03 Jul 2024 23:19:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.157
	  Hostname:    ha-856893-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 109978f2ea4c4f42a5d187826750c850
	  System UUID:                109978f2-ea4c-4f42-a5d1-87826750c850
	  Boot ID:                    5864d854-b931-4a7e-9c19-740d9ee37c4c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-n7rvj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-856893-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-rwqsq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-856893-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-856893-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-gkwrn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-856893-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-856893-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 4m25s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-856893-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-856893-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-856893-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           14m                    node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  NodeNotReady             10m                    node-controller  Node ha-856893-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    4m54s (x8 over 4m54s)  kubelet          Node ha-856893-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m54s (x8 over 4m54s)  kubelet          Node ha-856893-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m54s (x7 over 4m54s)  kubelet          Node ha-856893-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-856893-m02 event: Registered Node ha-856893-m02 in Controller
	  Normal  NodeNotReady             101s                   node-controller  Node ha-856893-m02 status is now: NodeNotReady
	
	
	Name:               ha-856893-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-856893-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=ha-856893
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_03T23_09_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:09:04 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-856893-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:18:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 03 Jul 2024 23:18:19 +0000   Wed, 03 Jul 2024 23:19:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 03 Jul 2024 23:18:19 +0000   Wed, 03 Jul 2024 23:19:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 03 Jul 2024 23:18:19 +0000   Wed, 03 Jul 2024 23:19:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 03 Jul 2024 23:18:19 +0000   Wed, 03 Jul 2024 23:19:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    ha-856893-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3705f72ac66415f90e310971654b6b5
	  System UUID:                f3705f72-ac66-415f-90e3-10971654b6b5
	  Boot ID:                    2ac337d1-652f-40d7-872b-674efdefff16
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jkptf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-5kksq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-brfsv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-856893-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-856893-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-856893-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal   NodeReady                11m                    kubelet          Node ha-856893-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m22s                  node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal   RegisteredNode           4m20s                  node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal   RegisteredNode           3m12s                  node-controller  Node ha-856893-m04 event: Registered Node ha-856893-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-856893-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-856893-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-856893-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-856893-m04 has been rebooted, boot id: 2ac337d1-652f-40d7-872b-674efdefff16
	  Normal   NodeReady                2m48s                  kubelet          Node ha-856893-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s (x2 over 3m42s)   node-controller  Node ha-856893-m04 status is now: NodeNotReady
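	The three node descriptions above tell the story of the restart: ha-856893 is Ready again, ha-856893-m02 has just gone NodeNotReady, and ha-856893-m04 carries node.kubernetes.io/unreachable taints with every condition Unknown because its kubelet stopped posting status. A minimal client-go sketch for pulling the same Ready/taint summary out of the cluster (the kubeconfig location is an assumption; any reachable context works):
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumed kubeconfig location; the CI run keeps its own profile directory.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			ready := "Unknown"
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					ready = string(c.Status)
				}
			}
			// Taints such as node.kubernetes.io/unreachable explain why workloads drain away.
			fmt.Printf("%-16s Ready=%-8s taints=%v\n", n.Name, ready, n.Spec.Taints)
		}
	}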
	
	
	==> dmesg <==
	[  +6.908066] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.058276] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065122] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.220079] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.126395] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.300940] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.506884] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.061467] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.368826] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +4.919640] kauditd_printk_skb: 102 callbacks suppressed
	[  +2.254448] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +6.249182] kauditd_printk_skb: 23 callbacks suppressed
	[Jul 3 23:06] kauditd_printk_skb: 38 callbacks suppressed
	[  +9.915119] kauditd_printk_skb: 24 callbacks suppressed
	[Jul 3 23:12] kauditd_printk_skb: 1 callbacks suppressed
	[Jul 3 23:15] systemd-fstab-generator[3817]: Ignoring "noauto" option for root device
	[  +0.164114] systemd-fstab-generator[3829]: Ignoring "noauto" option for root device
	[  +0.192920] systemd-fstab-generator[3843]: Ignoring "noauto" option for root device
	[  +0.142672] systemd-fstab-generator[3856]: Ignoring "noauto" option for root device
	[  +0.319393] systemd-fstab-generator[3884]: Ignoring "noauto" option for root device
	[  +0.822708] systemd-fstab-generator[3984]: Ignoring "noauto" option for root device
	[  +4.721060] kauditd_printk_skb: 142 callbacks suppressed
	[Jul 3 23:16] kauditd_printk_skb: 65 callbacks suppressed
	[ +10.072630] kauditd_printk_skb: 1 callbacks suppressed
	[ +19.376413] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [194253df10dfc5a8f67cd38343e633ac442a1e451aecc494bca0cce654a98ecb] <==
	{"level":"warn","ts":"2024-07-03T23:14:16.373818Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.178602542s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-03T23:14:16.394654Z","caller":"traceutil/trace.go:171","msg":"trace[98229140] range","detail":"{range_begin:/registry/ingress/; range_end:/registry/ingress0; }","duration":"7.199443125s","start":"2024-07-03T23:14:09.195205Z","end":"2024-07-03T23:14:16.394649Z","steps":["trace[98229140] 'agreement among raft nodes before linearized reading'  (duration: 7.17860713s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:14:16.394681Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-03T23:14:09.195199Z","time spent":"7.199470157s","remote":"127.0.0.1:33614","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" limit:10000 "}
	2024/07/03 23:14:16 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-03T23:14:16.373949Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.242117ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-03T23:14:16.394927Z","caller":"traceutil/trace.go:171","msg":"trace[1390875968] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; }","duration":"189.286197ms","start":"2024-07-03T23:14:16.205635Z","end":"2024-07-03T23:14:16.394921Z","steps":["trace[1390875968] 'agreement among raft nodes before linearized reading'  (duration: 168.246533ms)"],"step_count":1}
	2024/07/03 23:14:16 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-03T23:14:16.664314Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"bbf1bb039b0d3451","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-03T23:14:16.664586Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c61891c2e847e46e"}
	{"level":"info","ts":"2024-07-03T23:14:16.664641Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c61891c2e847e46e"}
	{"level":"info","ts":"2024-07-03T23:14:16.664672Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c61891c2e847e46e"}
	{"level":"info","ts":"2024-07-03T23:14:16.664901Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e"}
	{"level":"info","ts":"2024-07-03T23:14:16.665022Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e"}
	{"level":"info","ts":"2024-07-03T23:14:16.665105Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"c61891c2e847e46e"}
	{"level":"info","ts":"2024-07-03T23:14:16.66514Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c61891c2e847e46e"}
	{"level":"info","ts":"2024-07-03T23:14:16.665155Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:14:16.665165Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:14:16.66518Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:14:16.665214Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:14:16.66524Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:14:16.665322Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:14:16.665337Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:14:16.66845Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.172:2380"}
	{"level":"info","ts":"2024-07-03T23:14:16.668612Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.172:2380"}
	{"level":"info","ts":"2024-07-03T23:14:16.668642Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-856893","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.172:2380"],"advertise-client-urls":["https://192.168.39.172:2379"]}
	
	
	==> etcd [6565008c19a06a217526afd54d631c504d22707bac4b76078dae4c661fd6dba5] <==
	{"level":"warn","ts":"2024-07-03T23:17:30.698044Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"81b95bbe226332d2","rtt":"0s","error":"dial tcp 192.168.39.186:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-03T23:17:34.602163Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:17:34.616189Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:17:34.616292Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:17:34.617595Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"bbf1bb039b0d3451","to":"81b95bbe226332d2","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-03T23:17:34.617781Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:17:34.64127Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"bbf1bb039b0d3451","to":"81b95bbe226332d2","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-03T23:17:34.641331Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:18:33.250318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 switched to configuration voters=(13542811178640421969 14274319285257495662)"}
	{"level":"info","ts":"2024-07-03T23:18:33.253067Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"a5f5c7bb54d744d4","local-member-id":"bbf1bb039b0d3451","removed-remote-peer-id":"81b95bbe226332d2","removed-remote-peer-urls":["https://192.168.39.186:2380"]}
	{"level":"info","ts":"2024-07-03T23:18:33.253167Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"81b95bbe226332d2"}
	{"level":"warn","ts":"2024-07-03T23:18:33.253623Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:18:33.253684Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"81b95bbe226332d2"}
	{"level":"warn","ts":"2024-07-03T23:18:33.254034Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:18:33.254115Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:18:33.254204Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2"}
	{"level":"warn","ts":"2024-07-03T23:18:33.254425Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2","error":"context canceled"}
	{"level":"warn","ts":"2024-07-03T23:18:33.254584Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"81b95bbe226332d2","error":"failed to read 81b95bbe226332d2 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-03T23:18:33.254638Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2"}
	{"level":"warn","ts":"2024-07-03T23:18:33.254878Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2","error":"context canceled"}
	{"level":"info","ts":"2024-07-03T23:18:33.25494Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bbf1bb039b0d3451","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:18:33.255069Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"81b95bbe226332d2"}
	{"level":"info","ts":"2024-07-03T23:18:33.255114Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"bbf1bb039b0d3451","removed-remote-peer-id":"81b95bbe226332d2"}
	{"level":"warn","ts":"2024-07-03T23:18:33.266645Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"bbf1bb039b0d3451","remote-peer-id-stream-handler":"bbf1bb039b0d3451","remote-peer-id-from":"81b95bbe226332d2"}
	{"level":"warn","ts":"2024-07-03T23:18:33.276703Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"bbf1bb039b0d3451","remote-peer-id-stream-handler":"bbf1bb039b0d3451","remote-peer-id-from":"81b95bbe226332d2"}
	
	
	==> kernel <==
	 23:21:08 up 16 min,  0 users,  load average: 0.37, 0.40, 0.27
	Linux ha-856893 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [173dd7f93a7021f318f479bfa427c3e34c0b7f575d2ce5b0c02ab23c1356d221] <==
	I0703 23:15:50.758361       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0703 23:15:50.758446       1 main.go:107] hostIP = 192.168.39.172
	podIP = 192.168.39.172
	I0703 23:15:50.758607       1 main.go:116] setting mtu 1500 for CNI 
	I0703 23:15:50.758647       1 main.go:146] kindnetd IP family: "ipv4"
	I0703 23:15:50.758681       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0703 23:15:56.674585       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0703 23:15:59.746340       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0703 23:16:10.748273       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0703 23:16:15.106252       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0703 23:16:18.178269       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
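	The crashed kindnetd above retries the node list a few times and then panics so the pod gets restarted. This is not kindnetd's actual source, only a minimal sketch of the same retry-then-panic pattern; the retry count and delay are assumptions, and it must run inside a pod for the in-cluster config to resolve:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig() // kindnet runs as a DaemonSet pod, so it uses the in-cluster service account
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		const maxRetries = 5 // assumption; the real limit lives in kindnetd's main.go
		var lastErr error
		for i := 0; i < maxRetries; i++ {
			_, lastErr = cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
			if lastErr == nil {
				fmt.Println("node list succeeded")
				return
			}
			fmt.Printf("Failed to get nodes, retrying after error: %v\n", lastErr)
			time.Sleep(3 * time.Second)
		}
		// Panicking lets the kubelet restart the container under its restart policy.
		panic(fmt.Sprintf("Reached maximum retries obtaining node list: %v", lastErr))
	}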
	
	
	==> kindnet [1be8f74847b6ca4e08bf065013d7bbb19b70bd700378541102b8759dbc247721] <==
	I0703 23:20:23.962895       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	I0703 23:20:33.973976       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0703 23:20:33.974257       1 main.go:227] handling current node
	I0703 23:20:33.974528       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0703 23:20:33.974945       1 main.go:250] Node ha-856893-m02 has CIDR [10.244.1.0/24] 
	I0703 23:20:33.975136       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I0703 23:20:33.975170       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	I0703 23:20:43.984569       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0703 23:20:43.984627       1 main.go:227] handling current node
	I0703 23:20:43.984642       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0703 23:20:43.984651       1 main.go:250] Node ha-856893-m02 has CIDR [10.244.1.0/24] 
	I0703 23:20:43.984956       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I0703 23:20:43.984985       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	I0703 23:20:53.997955       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0703 23:20:53.997994       1 main.go:227] handling current node
	I0703 23:20:53.998005       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0703 23:20:53.998010       1 main.go:250] Node ha-856893-m02 has CIDR [10.244.1.0/24] 
	I0703 23:20:53.998123       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I0703 23:20:53.998146       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	I0703 23:21:04.003520       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0703 23:21:04.003559       1 main.go:227] handling current node
	I0703 23:21:04.003570       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0703 23:21:04.003580       1 main.go:250] Node ha-856893-m02 has CIDR [10.244.1.0/24] 
	I0703 23:21:04.003684       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I0703 23:21:04.003704       1 main.go:250] Node ha-856893-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [87914b1cd68757fa6aaf994adb433fc0bb354e69bbe1fa2f82078ba32780c470] <==
	I0703 23:16:34.685943       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0703 23:16:34.686394       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0703 23:16:34.686477       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0703 23:16:34.765542       1 shared_informer.go:320] Caches are synced for configmaps
	I0703 23:16:34.766947       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0703 23:16:34.767637       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0703 23:16:34.767917       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0703 23:16:34.769120       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0703 23:16:34.769586       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0703 23:16:34.774311       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0703 23:16:34.782693       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.186]
	I0703 23:16:34.786827       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0703 23:16:34.786904       1 aggregator.go:165] initial CRD sync complete...
	I0703 23:16:34.786929       1 autoregister_controller.go:141] Starting autoregister controller
	I0703 23:16:34.786935       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0703 23:16:34.786940       1 cache.go:39] Caches are synced for autoregister controller
	I0703 23:16:34.797503       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0703 23:16:34.803042       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0703 23:16:34.803084       1 policy_source.go:224] refreshing policies
	I0703 23:16:34.867710       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0703 23:16:34.885546       1 controller.go:615] quota admission added evaluator for: endpoints
	I0703 23:16:34.900698       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0703 23:16:34.906811       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0703 23:16:35.671330       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0703 23:16:36.239641       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.157 192.168.39.172 192.168.39.186]
	
	
	==> kube-apiserver [96edf619f58bbc96c41a9ffef8462ac9205d3b92cdfc23aeaa0c8d46758b7a55] <==
	I0703 23:15:55.437420       1 options.go:221] external host was not specified, using 192.168.39.172
	I0703 23:15:55.438459       1 server.go:148] Version: v1.30.2
	I0703 23:15:55.438545       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:15:55.783848       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0703 23:15:55.797216       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0703 23:15:55.797264       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0703 23:15:55.797477       1 instance.go:299] Using reconciler: lease
	I0703 23:15:55.797919       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0703 23:16:15.779101       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0703 23:16:15.780795       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0703 23:16:15.798882       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
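	This apiserver instance gave up after its etcd client could not complete the handshake with 127.0.0.1:2379 inside the deadline. A trivial sketch for checking whether the local etcd client port is even accepting connections (TCP reachability only, not an etcd health check):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Probe the loopback etcd client port the apiserver dials on a control-plane node.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second)
		if err != nil {
			fmt.Println("etcd client port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("etcd client port is accepting TCP connections")
	}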
	
	
	==> kube-controller-manager [2d4d662ed3e9a4b08a5956a3a96261534cd121ffa15a7cab0ea8a6848e762a48] <==
	I0703 23:19:21.014054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.631256ms"
	I0703 23:19:21.016980       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="114.762µs"
	I0703 23:19:26.237175       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.972485ms"
	I0703 23:19:26.237418       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.032µs"
	E0703 23:19:27.900835       1 gc_controller.go:153] "Failed to get node" err="node \"ha-856893-m03\" not found" logger="pod-garbage-collector-controller" node="ha-856893-m03"
	E0703 23:19:27.900932       1 gc_controller.go:153] "Failed to get node" err="node \"ha-856893-m03\" not found" logger="pod-garbage-collector-controller" node="ha-856893-m03"
	E0703 23:19:27.900978       1 gc_controller.go:153] "Failed to get node" err="node \"ha-856893-m03\" not found" logger="pod-garbage-collector-controller" node="ha-856893-m03"
	E0703 23:19:27.901002       1 gc_controller.go:153] "Failed to get node" err="node \"ha-856893-m03\" not found" logger="pod-garbage-collector-controller" node="ha-856893-m03"
	E0703 23:19:27.901025       1 gc_controller.go:153] "Failed to get node" err="node \"ha-856893-m03\" not found" logger="pod-garbage-collector-controller" node="ha-856893-m03"
	I0703 23:19:27.920084       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-856893-m03"
	I0703 23:19:27.957548       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-856893-m03"
	I0703 23:19:27.958040       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-856893-m03"
	I0703 23:19:27.999913       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-856893-m03"
	I0703 23:19:28.000023       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-856893-m03"
	I0703 23:19:28.026009       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-856893-m03"
	I0703 23:19:28.026047       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-856893-m03"
	I0703 23:19:28.057917       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-856893-m03"
	I0703 23:19:28.058046       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-856893-m03"
	I0703 23:19:28.085997       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-856893-m03"
	I0703 23:19:28.086106       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-stq26"
	I0703 23:19:28.123617       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-stq26"
	I0703 23:19:28.123726       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-vtd2b"
	I0703 23:19:28.157056       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-vtd2b"
	I0703 23:19:33.697150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.8243ms"
	I0703 23:19:33.697802       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.838µs"
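	The pod garbage collector above is force-deleting pods that were still bound to the removed node ha-856893-m03. A small sketch that lists whatever pods remain scheduled to a removed node, essentially the set PodGC is cleaning up; the kubeconfig location is an assumption:
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig location
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Field selector keeps only pods still bound to the node that was removed from the cluster.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{
			FieldSelector: "spec.nodeName=ha-856893-m03",
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}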
	
	
	==> kube-controller-manager [9da7c56e33b648295a6b9d0c247d42a7702afe66d13bc1b29feac239268ac3cd] <==
	I0703 23:15:56.070479       1 serving.go:380] Generated self-signed cert in-memory
	I0703 23:15:56.367195       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0703 23:15:56.367287       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:15:56.369044       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0703 23:15:56.369240       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0703 23:15:56.369262       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0703 23:15:56.369281       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0703 23:16:16.810059       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.172:8443/healthz\": dial tcp 192.168.39.172:8443: connect: connection refused"
	
	
	==> kube-proxy [992e4d3007ac016a3cd12a9c3eb6b83b01372ce8e63c6417d7165881667e6514] <==
	I0703 23:15:56.419245       1 server_linux.go:69] "Using iptables proxy"
	E0703 23:15:56.866965       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-856893\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0703 23:15:59.938830       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-856893\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0703 23:16:03.010438       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-856893\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0703 23:16:09.155261       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-856893\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0703 23:16:18.371135       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-856893\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0703 23:16:37.234065       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.172"]
	I0703 23:16:37.281783       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0703 23:16:37.282091       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0703 23:16:37.282200       1 server_linux.go:165] "Using iptables Proxier"
	I0703 23:16:37.286170       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0703 23:16:37.286435       1 server.go:872] "Version info" version="v1.30.2"
	I0703 23:16:37.288971       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:16:37.291980       1 config.go:192] "Starting service config controller"
	I0703 23:16:37.292030       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0703 23:16:37.292179       1 config.go:101] "Starting endpoint slice config controller"
	I0703 23:16:37.292235       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0703 23:16:37.294266       1 config.go:319] "Starting node config controller"
	I0703 23:16:37.294356       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0703 23:16:37.393168       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0703 23:16:37.393273       1 shared_informer.go:320] Caches are synced for service config
	I0703 23:16:37.394654       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [aea86e5699e84e6bf04e2fb26a3d61b909e256629a39ff37a39be04ac08dc599] <==
	E0703 23:13:04.387248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-856893&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:07.458143       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1744": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:07.458257       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1744": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:07.458304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1746": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:07.458430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1746": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:07.458158       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-856893&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:07.458614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-856893&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:14.051225       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1746": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:14.051283       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1746": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:14.051371       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-856893&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:14.051400       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-856893&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:14.051464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1744": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:14.051481       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1744": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:23.266268       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-856893&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:23.267260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-856893&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:23.267188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1746": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:23.267799       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1746": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:26.339320       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1744": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:26.339470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1744": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:44.770394       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-856893&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:44.770466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-856893&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:47.843140       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1744": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:47.843454       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1744": dial tcp 192.168.39.254:8443: connect: no route to host
	W0703 23:13:50.916009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1746": dial tcp 192.168.39.254:8443: connect: no route to host
	E0703 23:13:50.916291       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1746": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [747282699f82e41871309af2d2b108b8c1fe8fc99d8fd905bf18d76b80d43f2a] <==
	W0703 23:16:26.798389       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.172:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	E0703 23:16:26.798456       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.172:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	W0703 23:16:31.348405       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.172:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	E0703 23:16:31.348471       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.172:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	W0703 23:16:31.721619       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.172:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	E0703 23:16:31.721684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.172:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	W0703 23:16:32.106383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.172:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	E0703 23:16:32.106464       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.172:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	W0703 23:16:32.387135       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.172:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	E0703 23:16:32.387296       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.172:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	W0703 23:16:32.473071       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.172:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	E0703 23:16:32.473167       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.172:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	W0703 23:16:32.629405       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.172:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	E0703 23:16:32.629507       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.172:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.172:8443: connect: connection refused
	W0703 23:16:34.708202       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0703 23:16:34.708256       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0703 23:16:34.708340       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0703 23:16:34.708371       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0703 23:16:34.708455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0703 23:16:34.708482       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0703 23:16:35.015836       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0703 23:18:29.883449       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-jkptf\": pod busybox-fc5497c4f-jkptf is already assigned to node \"ha-856893-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-jkptf" node="ha-856893-m04"
	E0703 23:18:29.883720       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1a47f5d7-7c7f-41d8-bfa6-b4a6fc775ce0(default/busybox-fc5497c4f-jkptf) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-jkptf"
	E0703 23:18:29.883811       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-jkptf\": pod busybox-fc5497c4f-jkptf is already assigned to node \"ha-856893-m04\"" pod="default/busybox-fc5497c4f-jkptf"
	I0703 23:18:29.883866       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-jkptf" node="ha-856893-m04"
	
	
	==> kube-scheduler [8ed8443e8784d7b7cd6aa847208005eef048cebcdcd17bdbcb21ad48bfe77df0] <==
	W0703 23:14:09.917124       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0703 23:14:09.917225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0703 23:14:09.918153       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0703 23:14:09.918228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0703 23:14:10.388475       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0703 23:14:10.388617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0703 23:14:10.781368       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0703 23:14:10.781446       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0703 23:14:11.115685       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0703 23:14:11.115788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0703 23:14:11.134607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0703 23:14:11.134655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0703 23:14:11.226494       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0703 23:14:11.226612       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0703 23:14:11.400631       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0703 23:14:11.400782       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0703 23:14:11.769304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0703 23:14:11.769531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0703 23:14:11.930228       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0703 23:14:11.930441       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0703 23:14:12.215475       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0703 23:14:12.215603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0703 23:14:15.995916       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0703 23:14:15.995946       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0703 23:14:16.367428       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 03 23:17:11 ha-856893 kubelet[1363]: I0703 23:17:11.448019    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-hh5rx" podStartSLOduration=525.828397226 podStartE2EDuration="8m48.447960314s" podCreationTimestamp="2024-07-03 23:08:23 +0000 UTC" firstStartedPulling="2024-07-03 23:08:24.51916307 +0000 UTC m=+176.932471051" lastFinishedPulling="2024-07-03 23:08:27.138726159 +0000 UTC m=+179.552034139" observedRunningTime="2024-07-03 23:08:27.5075502 +0000 UTC m=+179.920858172" watchObservedRunningTime="2024-07-03 23:17:11.447960314 +0000 UTC m=+703.861268302"
	Jul 03 23:17:12 ha-856893 kubelet[1363]: I0703 23:17:12.734265    1363 scope.go:117] "RemoveContainer" containerID="173dd7f93a7021f318f479bfa427c3e34c0b7f575d2ce5b0c02ab23c1356d221"
	Jul 03 23:17:27 ha-856893 kubelet[1363]: E0703 23:17:27.750426    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:17:27 ha-856893 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:17:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:17:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:17:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 23:17:28 ha-856893 kubelet[1363]: I0703 23:17:28.734517    1363 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-856893" podUID="0c4a20fd-99f2-4d6a-a332-2a79e4431b88"
	Jul 03 23:17:28 ha-856893 kubelet[1363]: I0703 23:17:28.750943    1363 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-856893"
	Jul 03 23:17:37 ha-856893 kubelet[1363]: I0703 23:17:37.755560    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-856893" podStartSLOduration=9.755479054 podStartE2EDuration="9.755479054s" podCreationTimestamp="2024-07-03 23:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-03 23:17:37.755197984 +0000 UTC m=+730.168505972" watchObservedRunningTime="2024-07-03 23:17:37.755479054 +0000 UTC m=+730.168787043"
	Jul 03 23:18:27 ha-856893 kubelet[1363]: E0703 23:18:27.769815    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:18:27 ha-856893 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:18:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:18:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:18:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 23:19:27 ha-856893 kubelet[1363]: E0703 23:19:27.750493    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:19:27 ha-856893 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:19:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:19:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:19:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 23:20:27 ha-856893 kubelet[1363]: E0703 23:20:27.751931    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:20:27 ha-856893 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:20:27 ha-856893 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:20:27 ha-856893 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:20:27 ha-856893 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 23:21:06.831308   35794 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18998-9396/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-856893 -n ha-856893
helpers_test.go:261: (dbg) Run:  kubectl --context ha-856893 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.99s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (305.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-184661
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-184661
E0703 23:36:17.055213   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-184661: exit status 82 (2m2.001873475s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-184661-m03"  ...
	* Stopping node "multinode-184661-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-184661" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-184661 --wait=true -v=8 --alsologtostderr
E0703 23:38:57.357867   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-184661 --wait=true -v=8 --alsologtostderr: (3m1.46542412s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-184661
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-184661 -n multinode-184661
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-184661 logs -n 25: (1.562494012s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-184661 ssh -n                                                                 | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-184661 cp multinode-184661-m02:/home/docker/cp-test.txt                       | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2216583350/001/cp-test_multinode-184661-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n                                                                 | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-184661 cp multinode-184661-m02:/home/docker/cp-test.txt                       | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661:/home/docker/cp-test_multinode-184661-m02_multinode-184661.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n                                                                 | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n multinode-184661 sudo cat                                       | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | /home/docker/cp-test_multinode-184661-m02_multinode-184661.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-184661 cp multinode-184661-m02:/home/docker/cp-test.txt                       | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m03:/home/docker/cp-test_multinode-184661-m02_multinode-184661-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n                                                                 | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n multinode-184661-m03 sudo cat                                   | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | /home/docker/cp-test_multinode-184661-m02_multinode-184661-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-184661 cp testdata/cp-test.txt                                                | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n                                                                 | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-184661 cp multinode-184661-m03:/home/docker/cp-test.txt                       | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2216583350/001/cp-test_multinode-184661-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n                                                                 | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-184661 cp multinode-184661-m03:/home/docker/cp-test.txt                       | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661:/home/docker/cp-test_multinode-184661-m03_multinode-184661.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n                                                                 | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n multinode-184661 sudo cat                                       | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | /home/docker/cp-test_multinode-184661-m03_multinode-184661.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-184661 cp multinode-184661-m03:/home/docker/cp-test.txt                       | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m02:/home/docker/cp-test_multinode-184661-m03_multinode-184661-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n                                                                 | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n multinode-184661-m02 sudo cat                                   | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | /home/docker/cp-test_multinode-184661-m03_multinode-184661-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-184661 node stop m03                                                          | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:35 UTC |
	| node    | multinode-184661 node start                                                             | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:35 UTC | 03 Jul 24 23:35 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-184661                                                                | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:35 UTC |                     |
	| stop    | -p multinode-184661                                                                     | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:35 UTC |                     |
	| start   | -p multinode-184661                                                                     | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:37 UTC | 03 Jul 24 23:40 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-184661                                                                | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:40 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 23:37:32
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 23:37:32.564597   45138 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:37:32.564765   45138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:37:32.564776   45138 out.go:304] Setting ErrFile to fd 2...
	I0703 23:37:32.564783   45138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:37:32.564975   45138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 23:37:32.565557   45138 out.go:298] Setting JSON to false
	I0703 23:37:32.566502   45138 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4793,"bootTime":1720045060,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:37:32.566578   45138 start.go:139] virtualization: kvm guest
	I0703 23:37:32.568902   45138 out.go:177] * [multinode-184661] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 23:37:32.570497   45138 notify.go:220] Checking for updates...
	I0703 23:37:32.570519   45138 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 23:37:32.572030   45138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:37:32.573423   45138 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:37:32.574765   45138 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:37:32.576061   45138 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 23:37:32.577207   45138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 23:37:32.578687   45138 config.go:182] Loaded profile config "multinode-184661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:37:32.578821   45138 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:37:32.579300   45138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:37:32.579385   45138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:37:32.595960   45138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40333
	I0703 23:37:32.596395   45138 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:37:32.596932   45138 main.go:141] libmachine: Using API Version  1
	I0703 23:37:32.596953   45138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:37:32.597330   45138 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:37:32.597516   45138 main.go:141] libmachine: (multinode-184661) Calling .DriverName
	I0703 23:37:32.633998   45138 out.go:177] * Using the kvm2 driver based on existing profile
	I0703 23:37:32.635045   45138 start.go:297] selected driver: kvm2
	I0703 23:37:32.635069   45138 start.go:901] validating driver "kvm2" against &{Name:multinode-184661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-184661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.44 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:37:32.635241   45138 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 23:37:32.635668   45138 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:37:32.635752   45138 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 23:37:32.651587   45138 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 23:37:32.652565   45138 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 23:37:32.652658   45138 cni.go:84] Creating CNI manager for ""
	I0703 23:37:32.652675   45138 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0703 23:37:32.652752   45138 start.go:340] cluster config:
	{Name:multinode-184661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-184661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.44 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:37:32.652939   45138 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:37:32.655313   45138 out.go:177] * Starting "multinode-184661" primary control-plane node in "multinode-184661" cluster
	I0703 23:37:32.656500   45138 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:37:32.656539   45138 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0703 23:37:32.656549   45138 cache.go:56] Caching tarball of preloaded images
	I0703 23:37:32.656644   45138 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:37:32.656660   45138 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 23:37:32.656780   45138 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/config.json ...
	I0703 23:37:32.656986   45138 start.go:360] acquireMachinesLock for multinode-184661: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 23:37:32.657029   45138 start.go:364] duration metric: took 24.01µs to acquireMachinesLock for "multinode-184661"
	I0703 23:37:32.657047   45138 start.go:96] Skipping create...Using existing machine configuration
	I0703 23:37:32.657064   45138 fix.go:54] fixHost starting: 
	I0703 23:37:32.657317   45138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:37:32.657351   45138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:37:32.671899   45138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34057
	I0703 23:37:32.672307   45138 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:37:32.672740   45138 main.go:141] libmachine: Using API Version  1
	I0703 23:37:32.672760   45138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:37:32.673017   45138 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:37:32.673178   45138 main.go:141] libmachine: (multinode-184661) Calling .DriverName
	I0703 23:37:32.673321   45138 main.go:141] libmachine: (multinode-184661) Calling .GetState
	I0703 23:37:32.674764   45138 fix.go:112] recreateIfNeeded on multinode-184661: state=Running err=<nil>
	W0703 23:37:32.674794   45138 fix.go:138] unexpected machine state, will restart: <nil>
	I0703 23:37:32.677323   45138 out.go:177] * Updating the running kvm2 "multinode-184661" VM ...
	I0703 23:37:32.678737   45138 machine.go:94] provisionDockerMachine start ...
	I0703 23:37:32.678768   45138 main.go:141] libmachine: (multinode-184661) Calling .DriverName
	I0703 23:37:32.678986   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:37:32.681466   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:32.681924   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:37:32.681948   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:32.682211   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHPort
	I0703 23:37:32.682388   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:37:32.682549   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:37:32.682668   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHUsername
	I0703 23:37:32.682804   45138 main.go:141] libmachine: Using SSH client type: native
	I0703 23:37:32.683029   45138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0703 23:37:32.683042   45138 main.go:141] libmachine: About to run SSH command:
	hostname
	I0703 23:37:32.797416   45138 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-184661
	
	I0703 23:37:32.797455   45138 main.go:141] libmachine: (multinode-184661) Calling .GetMachineName
	I0703 23:37:32.797709   45138 buildroot.go:166] provisioning hostname "multinode-184661"
	I0703 23:37:32.797739   45138 main.go:141] libmachine: (multinode-184661) Calling .GetMachineName
	I0703 23:37:32.797917   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:37:32.800595   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:32.801036   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:37:32.801065   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:32.801238   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHPort
	I0703 23:37:32.801431   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:37:32.801600   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:37:32.801729   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHUsername
	I0703 23:37:32.801922   45138 main.go:141] libmachine: Using SSH client type: native
	I0703 23:37:32.802117   45138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0703 23:37:32.802131   45138 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-184661 && echo "multinode-184661" | sudo tee /etc/hostname
	I0703 23:37:32.927700   45138 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-184661
	
	I0703 23:37:32.927743   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:37:32.930800   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:32.931270   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:37:32.931317   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:32.931459   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHPort
	I0703 23:37:32.931651   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:37:32.931842   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:37:32.931984   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHUsername
	I0703 23:37:32.932144   45138 main.go:141] libmachine: Using SSH client type: native
	I0703 23:37:32.932314   45138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0703 23:37:32.932328   45138 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-184661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-184661/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-184661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 23:37:33.049416   45138 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:37:33.049452   45138 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0703 23:37:33.049484   45138 buildroot.go:174] setting up certificates
	I0703 23:37:33.049495   45138 provision.go:84] configureAuth start
	I0703 23:37:33.049510   45138 main.go:141] libmachine: (multinode-184661) Calling .GetMachineName
	I0703 23:37:33.049757   45138 main.go:141] libmachine: (multinode-184661) Calling .GetIP
	I0703 23:37:33.052587   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:33.052928   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:37:33.052957   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:33.053061   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:37:33.055008   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:33.055321   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:37:33.055353   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:33.055555   45138 provision.go:143] copyHostCerts
	I0703 23:37:33.055583   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:37:33.055639   45138 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0703 23:37:33.055652   45138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:37:33.055737   45138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0703 23:37:33.055847   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:37:33.055869   45138 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0703 23:37:33.055893   45138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:37:33.055942   45138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0703 23:37:33.056010   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:37:33.056026   45138 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0703 23:37:33.056033   45138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:37:33.056056   45138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0703 23:37:33.056145   45138 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.multinode-184661 san=[127.0.0.1 192.168.39.57 localhost minikube multinode-184661]
	I0703 23:37:33.200798   45138 provision.go:177] copyRemoteCerts
	I0703 23:37:33.200852   45138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:37:33.200873   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:37:33.203311   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:33.203679   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:37:33.203721   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:33.203846   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHPort
	I0703 23:37:33.204033   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:37:33.204189   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHUsername
	I0703 23:37:33.204386   45138 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/multinode-184661/id_rsa Username:docker}
	I0703 23:37:33.291213   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0703 23:37:33.291302   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0703 23:37:33.319234   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0703 23:37:33.319305   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0703 23:37:33.346178   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0703 23:37:33.346264   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0703 23:37:33.374256   45138 provision.go:87] duration metric: took 324.746808ms to configureAuth
	I0703 23:37:33.374285   45138 buildroot.go:189] setting minikube options for container-runtime
	I0703 23:37:33.374502   45138 config.go:182] Loaded profile config "multinode-184661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:37:33.374563   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:37:33.377364   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:33.377768   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:37:33.377813   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:33.377986   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHPort
	I0703 23:37:33.378198   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:37:33.378362   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:37:33.378506   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHUsername
	I0703 23:37:33.378648   45138 main.go:141] libmachine: Using SSH client type: native
	I0703 23:37:33.378806   45138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0703 23:37:33.378821   45138 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0703 23:39:04.085884   45138 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0703 23:39:04.085909   45138 machine.go:97] duration metric: took 1m31.407157506s to provisionDockerMachine
	I0703 23:39:04.085927   45138 start.go:293] postStartSetup for "multinode-184661" (driver="kvm2")
	I0703 23:39:04.085939   45138 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 23:39:04.085961   45138 main.go:141] libmachine: (multinode-184661) Calling .DriverName
	I0703 23:39:04.086295   45138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 23:39:04.086327   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:39:04.089431   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.089899   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:39:04.089925   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.090104   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHPort
	I0703 23:39:04.090337   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:39:04.090502   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHUsername
	I0703 23:39:04.090645   45138 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/multinode-184661/id_rsa Username:docker}
	I0703 23:39:04.180892   45138 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 23:39:04.185571   45138 command_runner.go:130] > NAME=Buildroot
	I0703 23:39:04.185595   45138 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0703 23:39:04.185600   45138 command_runner.go:130] > ID=buildroot
	I0703 23:39:04.185607   45138 command_runner.go:130] > VERSION_ID=2023.02.9
	I0703 23:39:04.185615   45138 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0703 23:39:04.185674   45138 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 23:39:04.185700   45138 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0703 23:39:04.185760   45138 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0703 23:39:04.185835   45138 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0703 23:39:04.185846   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /etc/ssl/certs/165742.pem
	I0703 23:39:04.185922   45138 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0703 23:39:04.196354   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:39:04.223891   45138 start.go:296] duration metric: took 137.935599ms for postStartSetup
	I0703 23:39:04.223940   45138 fix.go:56] duration metric: took 1m31.566881589s for fixHost
	I0703 23:39:04.223961   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:39:04.226621   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.227121   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:39:04.227165   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.227392   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHPort
	I0703 23:39:04.227611   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:39:04.227794   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:39:04.227944   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHUsername
	I0703 23:39:04.228104   45138 main.go:141] libmachine: Using SSH client type: native
	I0703 23:39:04.228307   45138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0703 23:39:04.228323   45138 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0703 23:39:04.341012   45138 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720049944.319080588
	
	I0703 23:39:04.341029   45138 fix.go:216] guest clock: 1720049944.319080588
	I0703 23:39:04.341036   45138 fix.go:229] Guest: 2024-07-03 23:39:04.319080588 +0000 UTC Remote: 2024-07-03 23:39:04.223944588 +0000 UTC m=+91.695090994 (delta=95.136ms)
	I0703 23:39:04.341061   45138 fix.go:200] guest clock delta is within tolerance: 95.136ms
	I0703 23:39:04.341067   45138 start.go:83] releasing machines lock for "multinode-184661", held for 1m31.684027144s
	I0703 23:39:04.341094   45138 main.go:141] libmachine: (multinode-184661) Calling .DriverName
	I0703 23:39:04.341373   45138 main.go:141] libmachine: (multinode-184661) Calling .GetIP
	I0703 23:39:04.343745   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.344117   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:39:04.344137   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.344347   45138 main.go:141] libmachine: (multinode-184661) Calling .DriverName
	I0703 23:39:04.344827   45138 main.go:141] libmachine: (multinode-184661) Calling .DriverName
	I0703 23:39:04.345002   45138 main.go:141] libmachine: (multinode-184661) Calling .DriverName
	I0703 23:39:04.345098   45138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 23:39:04.345126   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:39:04.345176   45138 ssh_runner.go:195] Run: cat /version.json
	I0703 23:39:04.345199   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:39:04.347741   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.348075   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:39:04.348104   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.348122   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.348252   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHPort
	I0703 23:39:04.348430   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:39:04.348615   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:39:04.348636   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.348638   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHUsername
	I0703 23:39:04.348796   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHPort
	I0703 23:39:04.348795   45138 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/multinode-184661/id_rsa Username:docker}
	I0703 23:39:04.348967   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:39:04.349100   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHUsername
	I0703 23:39:04.349272   45138 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/multinode-184661/id_rsa Username:docker}
	I0703 23:39:04.457815   45138 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0703 23:39:04.457865   45138 command_runner.go:130] > {"iso_version": "v1.33.1-1719929171-19175", "kicbase_version": "v0.0.44-1719600828-19153", "minikube_version": "v1.33.1", "commit": "0ba4fd2d2d09aa0a2e53d6947bc1076c219d88c0"}
	I0703 23:39:04.458017   45138 ssh_runner.go:195] Run: systemctl --version
	I0703 23:39:04.464261   45138 command_runner.go:130] > systemd 252 (252)
	I0703 23:39:04.464303   45138 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0703 23:39:04.464659   45138 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0703 23:39:04.639192   45138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0703 23:39:04.645690   45138 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0703 23:39:04.645834   45138 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 23:39:04.645894   45138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 23:39:04.656022   45138 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0703 23:39:04.656052   45138 start.go:494] detecting cgroup driver to use...
	I0703 23:39:04.656122   45138 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 23:39:04.673655   45138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 23:39:04.689504   45138 docker.go:217] disabling cri-docker service (if available) ...
	I0703 23:39:04.689561   45138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0703 23:39:04.705030   45138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0703 23:39:04.720290   45138 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0703 23:39:04.872473   45138 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0703 23:39:05.017221   45138 docker.go:233] disabling docker service ...
	I0703 23:39:05.017287   45138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0703 23:39:05.034393   45138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0703 23:39:05.074892   45138 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0703 23:39:05.221361   45138 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0703 23:39:05.366363   45138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0703 23:39:05.381629   45138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 23:39:05.403117   45138 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0703 23:39:05.403663   45138 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0703 23:39:05.403726   45138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:39:05.415804   45138 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0703 23:39:05.415870   45138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:39:05.427027   45138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:39:05.438396   45138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:39:05.449771   45138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 23:39:05.461346   45138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:39:05.472698   45138 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:39:05.485862   45138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:39:05.497292   45138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 23:39:05.507707   45138 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0703 23:39:05.508021   45138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0703 23:39:05.518637   45138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:39:05.663416   45138 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0703 23:39:07.192662   45138 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.529206076s)
	I0703 23:39:07.192699   45138 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0703 23:39:07.192752   45138 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0703 23:39:07.198235   45138 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0703 23:39:07.198268   45138 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0703 23:39:07.198278   45138 command_runner.go:130] > Device: 0,22	Inode: 1337        Links: 1
	I0703 23:39:07.198288   45138 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0703 23:39:07.198296   45138 command_runner.go:130] > Access: 2024-07-03 23:39:07.048780287 +0000
	I0703 23:39:07.198316   45138 command_runner.go:130] > Modify: 2024-07-03 23:39:07.048780287 +0000
	I0703 23:39:07.198328   45138 command_runner.go:130] > Change: 2024-07-03 23:39:07.048780287 +0000
	I0703 23:39:07.198333   45138 command_runner.go:130] >  Birth: -
	I0703 23:39:07.198370   45138 start.go:562] Will wait 60s for crictl version
	I0703 23:39:07.198426   45138 ssh_runner.go:195] Run: which crictl
	I0703 23:39:07.203041   45138 command_runner.go:130] > /usr/bin/crictl
	I0703 23:39:07.203206   45138 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 23:39:07.239314   45138 command_runner.go:130] > Version:  0.1.0
	I0703 23:39:07.239346   45138 command_runner.go:130] > RuntimeName:  cri-o
	I0703 23:39:07.239354   45138 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0703 23:39:07.239452   45138 command_runner.go:130] > RuntimeApiVersion:  v1
	I0703 23:39:07.240857   45138 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0703 23:39:07.240934   45138 ssh_runner.go:195] Run: crio --version
	I0703 23:39:07.270903   45138 command_runner.go:130] > crio version 1.29.1
	I0703 23:39:07.270930   45138 command_runner.go:130] > Version:        1.29.1
	I0703 23:39:07.270939   45138 command_runner.go:130] > GitCommit:      unknown
	I0703 23:39:07.270945   45138 command_runner.go:130] > GitCommitDate:  unknown
	I0703 23:39:07.270952   45138 command_runner.go:130] > GitTreeState:   clean
	I0703 23:39:07.270961   45138 command_runner.go:130] > BuildDate:      2024-07-02T19:36:05Z
	I0703 23:39:07.270968   45138 command_runner.go:130] > GoVersion:      go1.21.6
	I0703 23:39:07.270974   45138 command_runner.go:130] > Compiler:       gc
	I0703 23:39:07.270982   45138 command_runner.go:130] > Platform:       linux/amd64
	I0703 23:39:07.270989   45138 command_runner.go:130] > Linkmode:       dynamic
	I0703 23:39:07.270996   45138 command_runner.go:130] > BuildTags:      
	I0703 23:39:07.271003   45138 command_runner.go:130] >   containers_image_ostree_stub
	I0703 23:39:07.271011   45138 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0703 23:39:07.271018   45138 command_runner.go:130] >   btrfs_noversion
	I0703 23:39:07.271025   45138 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0703 23:39:07.271036   45138 command_runner.go:130] >   libdm_no_deferred_remove
	I0703 23:39:07.271043   45138 command_runner.go:130] >   seccomp
	I0703 23:39:07.271050   45138 command_runner.go:130] > LDFlags:          unknown
	I0703 23:39:07.271059   45138 command_runner.go:130] > SeccompEnabled:   true
	I0703 23:39:07.271067   45138 command_runner.go:130] > AppArmorEnabled:  false
	I0703 23:39:07.272378   45138 ssh_runner.go:195] Run: crio --version
	I0703 23:39:07.315509   45138 command_runner.go:130] > crio version 1.29.1
	I0703 23:39:07.315538   45138 command_runner.go:130] > Version:        1.29.1
	I0703 23:39:07.315546   45138 command_runner.go:130] > GitCommit:      unknown
	I0703 23:39:07.315553   45138 command_runner.go:130] > GitCommitDate:  unknown
	I0703 23:39:07.315558   45138 command_runner.go:130] > GitTreeState:   clean
	I0703 23:39:07.315570   45138 command_runner.go:130] > BuildDate:      2024-07-02T19:36:05Z
	I0703 23:39:07.315577   45138 command_runner.go:130] > GoVersion:      go1.21.6
	I0703 23:39:07.315582   45138 command_runner.go:130] > Compiler:       gc
	I0703 23:39:07.315589   45138 command_runner.go:130] > Platform:       linux/amd64
	I0703 23:39:07.315595   45138 command_runner.go:130] > Linkmode:       dynamic
	I0703 23:39:07.315603   45138 command_runner.go:130] > BuildTags:      
	I0703 23:39:07.315610   45138 command_runner.go:130] >   containers_image_ostree_stub
	I0703 23:39:07.315617   45138 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0703 23:39:07.315627   45138 command_runner.go:130] >   btrfs_noversion
	I0703 23:39:07.315634   45138 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0703 23:39:07.315641   45138 command_runner.go:130] >   libdm_no_deferred_remove
	I0703 23:39:07.315648   45138 command_runner.go:130] >   seccomp
	I0703 23:39:07.315656   45138 command_runner.go:130] > LDFlags:          unknown
	I0703 23:39:07.315662   45138 command_runner.go:130] > SeccompEnabled:   true
	I0703 23:39:07.315669   45138 command_runner.go:130] > AppArmorEnabled:  false
	I0703 23:39:07.317981   45138 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0703 23:39:07.319291   45138 main.go:141] libmachine: (multinode-184661) Calling .GetIP
	I0703 23:39:07.322040   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:07.322441   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:39:07.322469   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:07.322797   45138 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0703 23:39:07.328055   45138 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0703 23:39:07.328422   45138 kubeadm.go:877] updating cluster {Name:multinode-184661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-184661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.44 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0703 23:39:07.328563   45138 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:39:07.328617   45138 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:39:07.364603   45138 command_runner.go:130] > {
	I0703 23:39:07.364630   45138 command_runner.go:130] >   "images": [
	I0703 23:39:07.364636   45138 command_runner.go:130] >     {
	I0703 23:39:07.364649   45138 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0703 23:39:07.364656   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.364667   45138 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0703 23:39:07.364673   45138 command_runner.go:130] >       ],
	I0703 23:39:07.364679   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.364709   45138 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0703 23:39:07.364725   45138 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0703 23:39:07.364731   45138 command_runner.go:130] >       ],
	I0703 23:39:07.364738   45138 command_runner.go:130] >       "size": "65908273",
	I0703 23:39:07.364745   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.364752   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.364764   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.364770   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.364776   45138 command_runner.go:130] >     },
	I0703 23:39:07.364781   45138 command_runner.go:130] >     {
	I0703 23:39:07.364791   45138 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0703 23:39:07.364797   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.364805   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0703 23:39:07.364811   45138 command_runner.go:130] >       ],
	I0703 23:39:07.364818   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.364830   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0703 23:39:07.364841   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0703 23:39:07.364848   45138 command_runner.go:130] >       ],
	I0703 23:39:07.364856   45138 command_runner.go:130] >       "size": "1363676",
	I0703 23:39:07.364865   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.364877   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.364887   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.364895   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.364903   45138 command_runner.go:130] >     },
	I0703 23:39:07.364916   45138 command_runner.go:130] >     {
	I0703 23:39:07.364930   45138 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0703 23:39:07.364939   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.364952   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0703 23:39:07.364962   45138 command_runner.go:130] >       ],
	I0703 23:39:07.364971   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.364989   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0703 23:39:07.365004   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0703 23:39:07.365013   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365020   45138 command_runner.go:130] >       "size": "31470524",
	I0703 23:39:07.365028   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.365035   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.365043   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.365049   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.365056   45138 command_runner.go:130] >     },
	I0703 23:39:07.365061   45138 command_runner.go:130] >     {
	I0703 23:39:07.365072   45138 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0703 23:39:07.365080   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.365089   45138 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0703 23:39:07.365098   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365103   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.365116   45138 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0703 23:39:07.365137   45138 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0703 23:39:07.365145   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365151   45138 command_runner.go:130] >       "size": "61245718",
	I0703 23:39:07.365159   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.365169   45138 command_runner.go:130] >       "username": "nonroot",
	I0703 23:39:07.365174   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.365179   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.365184   45138 command_runner.go:130] >     },
	I0703 23:39:07.365191   45138 command_runner.go:130] >     {
	I0703 23:39:07.365216   45138 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0703 23:39:07.365224   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.365231   45138 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0703 23:39:07.365238   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365243   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.365262   45138 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0703 23:39:07.365274   45138 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0703 23:39:07.365281   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365288   45138 command_runner.go:130] >       "size": "150779692",
	I0703 23:39:07.365295   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.365304   45138 command_runner.go:130] >         "value": "0"
	I0703 23:39:07.365313   45138 command_runner.go:130] >       },
	I0703 23:39:07.365321   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.365329   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.365337   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.365344   45138 command_runner.go:130] >     },
	I0703 23:39:07.365349   45138 command_runner.go:130] >     {
	I0703 23:39:07.365366   45138 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0703 23:39:07.365376   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.365388   45138 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0703 23:39:07.365397   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365406   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.365420   45138 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0703 23:39:07.365434   45138 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0703 23:39:07.365443   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365452   45138 command_runner.go:130] >       "size": "117609954",
	I0703 23:39:07.365461   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.365469   45138 command_runner.go:130] >         "value": "0"
	I0703 23:39:07.365477   45138 command_runner.go:130] >       },
	I0703 23:39:07.365485   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.365494   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.365503   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.365511   45138 command_runner.go:130] >     },
	I0703 23:39:07.365519   45138 command_runner.go:130] >     {
	I0703 23:39:07.365530   45138 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0703 23:39:07.365539   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.365547   45138 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0703 23:39:07.365556   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365569   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.365584   45138 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0703 23:39:07.365598   45138 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0703 23:39:07.365618   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365626   45138 command_runner.go:130] >       "size": "112194888",
	I0703 23:39:07.365630   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.365638   45138 command_runner.go:130] >         "value": "0"
	I0703 23:39:07.365644   45138 command_runner.go:130] >       },
	I0703 23:39:07.365652   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.365661   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.365670   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.365679   45138 command_runner.go:130] >     },
	I0703 23:39:07.365687   45138 command_runner.go:130] >     {
	I0703 23:39:07.365698   45138 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0703 23:39:07.365706   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.365715   45138 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0703 23:39:07.365723   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365731   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.365768   45138 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0703 23:39:07.365783   45138 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0703 23:39:07.365790   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365796   45138 command_runner.go:130] >       "size": "85953433",
	I0703 23:39:07.365804   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.365809   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.365815   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.365821   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.365825   45138 command_runner.go:130] >     },
	I0703 23:39:07.365829   45138 command_runner.go:130] >     {
	I0703 23:39:07.365839   45138 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0703 23:39:07.365844   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.365851   45138 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0703 23:39:07.365856   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365861   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.365870   45138 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0703 23:39:07.365879   45138 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0703 23:39:07.365884   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365889   45138 command_runner.go:130] >       "size": "63051080",
	I0703 23:39:07.365894   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.365900   45138 command_runner.go:130] >         "value": "0"
	I0703 23:39:07.365912   45138 command_runner.go:130] >       },
	I0703 23:39:07.365921   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.365929   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.365936   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.365944   45138 command_runner.go:130] >     },
	I0703 23:39:07.365949   45138 command_runner.go:130] >     {
	I0703 23:39:07.365960   45138 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0703 23:39:07.365968   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.365977   45138 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0703 23:39:07.365985   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365991   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.366003   45138 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0703 23:39:07.366017   45138 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0703 23:39:07.366025   45138 command_runner.go:130] >       ],
	I0703 23:39:07.366032   45138 command_runner.go:130] >       "size": "750414",
	I0703 23:39:07.366041   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.366047   45138 command_runner.go:130] >         "value": "65535"
	I0703 23:39:07.366055   45138 command_runner.go:130] >       },
	I0703 23:39:07.366060   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.366069   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.366078   45138 command_runner.go:130] >       "pinned": true
	I0703 23:39:07.366096   45138 command_runner.go:130] >     }
	I0703 23:39:07.366103   45138 command_runner.go:130] >   ]
	I0703 23:39:07.366106   45138 command_runner.go:130] > }
	I0703 23:39:07.366703   45138 crio.go:514] all images are preloaded for cri-o runtime.
	I0703 23:39:07.366720   45138 crio.go:433] Images already preloaded, skipping extraction
	I0703 23:39:07.366771   45138 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:39:07.401716   45138 command_runner.go:130] > {
	I0703 23:39:07.401742   45138 command_runner.go:130] >   "images": [
	I0703 23:39:07.401748   45138 command_runner.go:130] >     {
	I0703 23:39:07.401759   45138 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0703 23:39:07.401767   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.401776   45138 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0703 23:39:07.401781   45138 command_runner.go:130] >       ],
	I0703 23:39:07.401786   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.401798   45138 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0703 23:39:07.401811   45138 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0703 23:39:07.401820   45138 command_runner.go:130] >       ],
	I0703 23:39:07.401826   45138 command_runner.go:130] >       "size": "65908273",
	I0703 23:39:07.401834   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.401842   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.401860   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.401869   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.401876   45138 command_runner.go:130] >     },
	I0703 23:39:07.401881   45138 command_runner.go:130] >     {
	I0703 23:39:07.401894   45138 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0703 23:39:07.401903   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.401909   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0703 23:39:07.401915   45138 command_runner.go:130] >       ],
	I0703 23:39:07.401919   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.401928   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0703 23:39:07.401937   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0703 23:39:07.401942   45138 command_runner.go:130] >       ],
	I0703 23:39:07.401946   45138 command_runner.go:130] >       "size": "1363676",
	I0703 23:39:07.401952   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.401960   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.401966   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.401970   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.401976   45138 command_runner.go:130] >     },
	I0703 23:39:07.401979   45138 command_runner.go:130] >     {
	I0703 23:39:07.401987   45138 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0703 23:39:07.401991   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.401997   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0703 23:39:07.402012   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402018   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.402037   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0703 23:39:07.402047   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0703 23:39:07.402051   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402055   45138 command_runner.go:130] >       "size": "31470524",
	I0703 23:39:07.402059   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.402065   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.402069   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.402073   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.402077   45138 command_runner.go:130] >     },
	I0703 23:39:07.402080   45138 command_runner.go:130] >     {
	I0703 23:39:07.402086   45138 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0703 23:39:07.402093   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.402098   45138 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0703 23:39:07.402103   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402107   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.402116   45138 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0703 23:39:07.402131   45138 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0703 23:39:07.402137   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402141   45138 command_runner.go:130] >       "size": "61245718",
	I0703 23:39:07.402147   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.402153   45138 command_runner.go:130] >       "username": "nonroot",
	I0703 23:39:07.402159   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.402163   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.402168   45138 command_runner.go:130] >     },
	I0703 23:39:07.402172   45138 command_runner.go:130] >     {
	I0703 23:39:07.402179   45138 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0703 23:39:07.402185   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.402190   45138 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0703 23:39:07.402193   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402197   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.402206   45138 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0703 23:39:07.402215   45138 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0703 23:39:07.402221   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402225   45138 command_runner.go:130] >       "size": "150779692",
	I0703 23:39:07.402301   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.402632   45138 command_runner.go:130] >         "value": "0"
	I0703 23:39:07.402647   45138 command_runner.go:130] >       },
	I0703 23:39:07.402651   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.402656   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.402660   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.402663   45138 command_runner.go:130] >     },
	I0703 23:39:07.402666   45138 command_runner.go:130] >     {
	I0703 23:39:07.402677   45138 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0703 23:39:07.402683   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.402691   45138 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0703 23:39:07.402696   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402712   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.402727   45138 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0703 23:39:07.402748   45138 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0703 23:39:07.402756   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402760   45138 command_runner.go:130] >       "size": "117609954",
	I0703 23:39:07.402767   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.402772   45138 command_runner.go:130] >         "value": "0"
	I0703 23:39:07.402781   45138 command_runner.go:130] >       },
	I0703 23:39:07.402788   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.402803   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.402810   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.402819   45138 command_runner.go:130] >     },
	I0703 23:39:07.402825   45138 command_runner.go:130] >     {
	I0703 23:39:07.402836   45138 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0703 23:39:07.402851   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.402863   45138 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0703 23:39:07.402869   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402879   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.402897   45138 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0703 23:39:07.402913   45138 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0703 23:39:07.402925   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402940   45138 command_runner.go:130] >       "size": "112194888",
	I0703 23:39:07.402948   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.402954   45138 command_runner.go:130] >         "value": "0"
	I0703 23:39:07.402981   45138 command_runner.go:130] >       },
	I0703 23:39:07.402991   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.402998   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.403013   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.403022   45138 command_runner.go:130] >     },
	I0703 23:39:07.403028   45138 command_runner.go:130] >     {
	I0703 23:39:07.403041   45138 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0703 23:39:07.403050   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.403064   45138 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0703 23:39:07.403073   45138 command_runner.go:130] >       ],
	I0703 23:39:07.403082   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.403247   45138 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0703 23:39:07.403300   45138 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0703 23:39:07.403309   45138 command_runner.go:130] >       ],
	I0703 23:39:07.403319   45138 command_runner.go:130] >       "size": "85953433",
	I0703 23:39:07.403329   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.403346   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.403354   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.403366   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.403373   45138 command_runner.go:130] >     },
	I0703 23:39:07.403386   45138 command_runner.go:130] >     {
	I0703 23:39:07.403396   45138 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0703 23:39:07.403413   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.403430   45138 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0703 23:39:07.403436   45138 command_runner.go:130] >       ],
	I0703 23:39:07.403443   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.403461   45138 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0703 23:39:07.403476   45138 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0703 23:39:07.403485   45138 command_runner.go:130] >       ],
	I0703 23:39:07.403496   45138 command_runner.go:130] >       "size": "63051080",
	I0703 23:39:07.403502   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.403509   45138 command_runner.go:130] >         "value": "0"
	I0703 23:39:07.403517   45138 command_runner.go:130] >       },
	I0703 23:39:07.403524   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.403534   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.403545   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.403564   45138 command_runner.go:130] >     },
	I0703 23:39:07.403573   45138 command_runner.go:130] >     {
	I0703 23:39:07.403589   45138 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0703 23:39:07.403624   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.403640   45138 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0703 23:39:07.403647   45138 command_runner.go:130] >       ],
	I0703 23:39:07.403655   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.403671   45138 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0703 23:39:07.403695   45138 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0703 23:39:07.403702   45138 command_runner.go:130] >       ],
	I0703 23:39:07.403709   45138 command_runner.go:130] >       "size": "750414",
	I0703 23:39:07.403721   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.403736   45138 command_runner.go:130] >         "value": "65535"
	I0703 23:39:07.403746   45138 command_runner.go:130] >       },
	I0703 23:39:07.403754   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.403764   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.403775   45138 command_runner.go:130] >       "pinned": true
	I0703 23:39:07.403784   45138 command_runner.go:130] >     }
	I0703 23:39:07.403795   45138 command_runner.go:130] >   ]
	I0703 23:39:07.403801   45138 command_runner.go:130] > }
	I0703 23:39:07.404217   45138 crio.go:514] all images are preloaded for cri-o runtime.
	I0703 23:39:07.404234   45138 cache_images.go:84] Images are preloaded, skipping loading
	I0703 23:39:07.404241   45138 kubeadm.go:928] updating node { 192.168.39.57 8443 v1.30.2 crio true true} ...
	I0703 23:39:07.404351   45138 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-184661 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-184661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0703 23:39:07.404413   45138 ssh_runner.go:195] Run: crio config
	I0703 23:39:07.448562   45138 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0703 23:39:07.448590   45138 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0703 23:39:07.448597   45138 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0703 23:39:07.448600   45138 command_runner.go:130] > #
	I0703 23:39:07.448608   45138 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0703 23:39:07.448613   45138 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0703 23:39:07.448619   45138 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0703 23:39:07.448631   45138 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0703 23:39:07.448635   45138 command_runner.go:130] > # reload'.
	I0703 23:39:07.448640   45138 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0703 23:39:07.448646   45138 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0703 23:39:07.448652   45138 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0703 23:39:07.448658   45138 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0703 23:39:07.448661   45138 command_runner.go:130] > [crio]
	I0703 23:39:07.448666   45138 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0703 23:39:07.448674   45138 command_runner.go:130] > # container images, in this directory.
	I0703 23:39:07.448776   45138 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0703 23:39:07.448806   45138 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0703 23:39:07.448931   45138 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0703 23:39:07.448955   45138 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I0703 23:39:07.449193   45138 command_runner.go:130] > # imagestore = ""
	I0703 23:39:07.449210   45138 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0703 23:39:07.449219   45138 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0703 23:39:07.449331   45138 command_runner.go:130] > storage_driver = "overlay"
	I0703 23:39:07.449348   45138 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0703 23:39:07.449357   45138 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0703 23:39:07.449364   45138 command_runner.go:130] > storage_option = [
	I0703 23:39:07.449541   45138 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0703 23:39:07.449589   45138 command_runner.go:130] > ]
	I0703 23:39:07.449603   45138 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0703 23:39:07.449616   45138 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0703 23:39:07.449889   45138 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0703 23:39:07.449904   45138 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0703 23:39:07.449915   45138 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0703 23:39:07.449923   45138 command_runner.go:130] > # always happen on a node reboot
	I0703 23:39:07.450205   45138 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0703 23:39:07.450225   45138 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0703 23:39:07.450235   45138 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0703 23:39:07.450243   45138 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0703 23:39:07.450372   45138 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0703 23:39:07.450387   45138 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0703 23:39:07.450399   45138 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0703 23:39:07.450720   45138 command_runner.go:130] > # internal_wipe = true
	I0703 23:39:07.450739   45138 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0703 23:39:07.450748   45138 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0703 23:39:07.451006   45138 command_runner.go:130] > # internal_repair = false
	I0703 23:39:07.451019   45138 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0703 23:39:07.451029   45138 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0703 23:39:07.451038   45138 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0703 23:39:07.451288   45138 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0703 23:39:07.451303   45138 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0703 23:39:07.451309   45138 command_runner.go:130] > [crio.api]
	I0703 23:39:07.451317   45138 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0703 23:39:07.451679   45138 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0703 23:39:07.451708   45138 command_runner.go:130] > # IP address on which the stream server will listen.
	I0703 23:39:07.451961   45138 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0703 23:39:07.451978   45138 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0703 23:39:07.451986   45138 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0703 23:39:07.452185   45138 command_runner.go:130] > # stream_port = "0"
	I0703 23:39:07.452199   45138 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0703 23:39:07.452414   45138 command_runner.go:130] > # stream_enable_tls = false
	I0703 23:39:07.452429   45138 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0703 23:39:07.452662   45138 command_runner.go:130] > # stream_idle_timeout = ""
	I0703 23:39:07.452677   45138 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0703 23:39:07.452686   45138 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0703 23:39:07.452692   45138 command_runner.go:130] > # minutes.
	I0703 23:39:07.452854   45138 command_runner.go:130] > # stream_tls_cert = ""
	I0703 23:39:07.452866   45138 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0703 23:39:07.452871   45138 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0703 23:39:07.453018   45138 command_runner.go:130] > # stream_tls_key = ""
	I0703 23:39:07.453033   45138 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0703 23:39:07.453042   45138 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0703 23:39:07.453066   45138 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0703 23:39:07.453226   45138 command_runner.go:130] > # stream_tls_ca = ""
	I0703 23:39:07.453242   45138 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0703 23:39:07.453383   45138 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0703 23:39:07.453399   45138 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0703 23:39:07.453689   45138 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
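For reference, both gRPC size overrides above are 16777216 bytes, i.e. 16 * 1024 * 1024 = 16 MiB, versus CRI-O's documented fallback of 80 * 1024 * 1024 (80 MiB). A minimal TOML restatement of just these two keys, using the values already shown in the dump:

	grpc_max_send_msg_size = 16777216   # 16 MiB = 16 * 1024 * 1024
	grpc_max_recv_msg_size = 16777216   # 16 MiB; unset or <= 0 would fall back to 80 MiB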
	I0703 23:39:07.453707   45138 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0703 23:39:07.453718   45138 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0703 23:39:07.453724   45138 command_runner.go:130] > [crio.runtime]
	I0703 23:39:07.453735   45138 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0703 23:39:07.453746   45138 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0703 23:39:07.453757   45138 command_runner.go:130] > # "nofile=1024:2048"
	I0703 23:39:07.453767   45138 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0703 23:39:07.453820   45138 command_runner.go:130] > # default_ulimits = [
	I0703 23:39:07.453975   45138 command_runner.go:130] > # ]
	I0703 23:39:07.453992   45138 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0703 23:39:07.454289   45138 command_runner.go:130] > # no_pivot = false
	I0703 23:39:07.454303   45138 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0703 23:39:07.454313   45138 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0703 23:39:07.454628   45138 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0703 23:39:07.454655   45138 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0703 23:39:07.454664   45138 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0703 23:39:07.454676   45138 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0703 23:39:07.454687   45138 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0703 23:39:07.454695   45138 command_runner.go:130] > # Cgroup setting for conmon
	I0703 23:39:07.454706   45138 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0703 23:39:07.454716   45138 command_runner.go:130] > conmon_cgroup = "pod"
	I0703 23:39:07.454726   45138 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0703 23:39:07.454737   45138 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0703 23:39:07.454747   45138 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0703 23:39:07.454756   45138 command_runner.go:130] > conmon_env = [
	I0703 23:39:07.454763   45138 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0703 23:39:07.454770   45138 command_runner.go:130] > ]
	I0703 23:39:07.454778   45138 command_runner.go:130] > # Additional environment variables to set for all the
	I0703 23:39:07.454790   45138 command_runner.go:130] > # containers. These are overridden if set in the
	I0703 23:39:07.454801   45138 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0703 23:39:07.454809   45138 command_runner.go:130] > # default_env = [
	I0703 23:39:07.454814   45138 command_runner.go:130] > # ]
	I0703 23:39:07.454827   45138 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0703 23:39:07.454839   45138 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0703 23:39:07.454847   45138 command_runner.go:130] > # selinux = false
	I0703 23:39:07.454853   45138 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0703 23:39:07.454866   45138 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0703 23:39:07.454879   45138 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0703 23:39:07.454888   45138 command_runner.go:130] > # seccomp_profile = ""
	I0703 23:39:07.454896   45138 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0703 23:39:07.454908   45138 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0703 23:39:07.454920   45138 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0703 23:39:07.454930   45138 command_runner.go:130] > # which might increase security.
	I0703 23:39:07.454937   45138 command_runner.go:130] > # This option is currently deprecated,
	I0703 23:39:07.454948   45138 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0703 23:39:07.454959   45138 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0703 23:39:07.454970   45138 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0703 23:39:07.454984   45138 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0703 23:39:07.454994   45138 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0703 23:39:07.455006   45138 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0703 23:39:07.455013   45138 command_runner.go:130] > # This option supports live configuration reload.
	I0703 23:39:07.455028   45138 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0703 23:39:07.455040   45138 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0703 23:39:07.455049   45138 command_runner.go:130] > # the cgroup blockio controller.
	I0703 23:39:07.455057   45138 command_runner.go:130] > # blockio_config_file = ""
	I0703 23:39:07.455068   45138 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0703 23:39:07.455078   45138 command_runner.go:130] > # blockio parameters.
	I0703 23:39:07.455086   45138 command_runner.go:130] > # blockio_reload = false
	I0703 23:39:07.455099   45138 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0703 23:39:07.455105   45138 command_runner.go:130] > # irqbalance daemon.
	I0703 23:39:07.455111   45138 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0703 23:39:07.455123   45138 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I0703 23:39:07.455137   45138 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0703 23:39:07.455151   45138 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0703 23:39:07.455160   45138 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0703 23:39:07.455174   45138 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0703 23:39:07.455183   45138 command_runner.go:130] > # This option supports live configuration reload.
	I0703 23:39:07.455190   45138 command_runner.go:130] > # rdt_config_file = ""
	I0703 23:39:07.455199   45138 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0703 23:39:07.455210   45138 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0703 23:39:07.455247   45138 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0703 23:39:07.455258   45138 command_runner.go:130] > # separate_pull_cgroup = ""
	I0703 23:39:07.455268   45138 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0703 23:39:07.455278   45138 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0703 23:39:07.455288   45138 command_runner.go:130] > # will be added.
	I0703 23:39:07.455295   45138 command_runner.go:130] > # default_capabilities = [
	I0703 23:39:07.455302   45138 command_runner.go:130] > # 	"CHOWN",
	I0703 23:39:07.455309   45138 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0703 23:39:07.455317   45138 command_runner.go:130] > # 	"FSETID",
	I0703 23:39:07.455323   45138 command_runner.go:130] > # 	"FOWNER",
	I0703 23:39:07.455328   45138 command_runner.go:130] > # 	"SETGID",
	I0703 23:39:07.455333   45138 command_runner.go:130] > # 	"SETUID",
	I0703 23:39:07.455339   45138 command_runner.go:130] > # 	"SETPCAP",
	I0703 23:39:07.455343   45138 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0703 23:39:07.455349   45138 command_runner.go:130] > # 	"KILL",
	I0703 23:39:07.455357   45138 command_runner.go:130] > # ]
	I0703 23:39:07.455369   45138 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0703 23:39:07.455381   45138 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0703 23:39:07.455387   45138 command_runner.go:130] > # add_inheritable_capabilities = false
	I0703 23:39:07.455400   45138 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0703 23:39:07.455411   45138 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0703 23:39:07.455421   45138 command_runner.go:130] > default_sysctls = [
	I0703 23:39:07.455432   45138 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0703 23:39:07.455439   45138 command_runner.go:130] > ]
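Both lists above are plain TOML string arrays. A hypothetical override sketch follows; the NET_RAW capability and the extra sysctl are illustrative additions only and are not part of this run's configuration:

	default_capabilities = [
		"CHOWN",
		"DAC_OVERRIDE",
		"NET_BIND_SERVICE",
		"NET_RAW",                                # hypothetical addition for illustration
	]
	default_sysctls = [
		"net.ipv4.ip_unprivileged_port_start=0",
		"net.core.somaxconn=1024",                # hypothetical addition for illustration
	]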
	I0703 23:39:07.455447   45138 command_runner.go:130] > # List of devices on the host that a
	I0703 23:39:07.455459   45138 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0703 23:39:07.455465   45138 command_runner.go:130] > # allowed_devices = [
	I0703 23:39:07.455471   45138 command_runner.go:130] > # 	"/dev/fuse",
	I0703 23:39:07.455474   45138 command_runner.go:130] > # ]
	I0703 23:39:07.455479   45138 command_runner.go:130] > # List of additional devices, specified as
	I0703 23:39:07.455485   45138 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0703 23:39:07.455494   45138 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0703 23:39:07.455502   45138 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0703 23:39:07.455506   45138 command_runner.go:130] > # additional_devices = [
	I0703 23:39:07.455510   45138 command_runner.go:130] > # ]
	I0703 23:39:07.455515   45138 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0703 23:39:07.455521   45138 command_runner.go:130] > # cdi_spec_dirs = [
	I0703 23:39:07.455525   45138 command_runner.go:130] > # 	"/etc/cdi",
	I0703 23:39:07.455535   45138 command_runner.go:130] > # 	"/var/run/cdi",
	I0703 23:39:07.455543   45138 command_runner.go:130] > # ]
	I0703 23:39:07.455554   45138 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0703 23:39:07.455566   45138 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0703 23:39:07.455573   45138 command_runner.go:130] > # Defaults to false.
	I0703 23:39:07.455581   45138 command_runner.go:130] > # device_ownership_from_security_context = false
	I0703 23:39:07.455594   45138 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0703 23:39:07.455603   45138 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0703 23:39:07.455608   45138 command_runner.go:130] > # hooks_dir = [
	I0703 23:39:07.455616   45138 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0703 23:39:07.455624   45138 command_runner.go:130] > # ]
	I0703 23:39:07.455633   45138 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0703 23:39:07.455647   45138 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0703 23:39:07.455659   45138 command_runner.go:130] > # its default mounts from the following two files:
	I0703 23:39:07.455666   45138 command_runner.go:130] > #
	I0703 23:39:07.455676   45138 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0703 23:39:07.455689   45138 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0703 23:39:07.455701   45138 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0703 23:39:07.455710   45138 command_runner.go:130] > #
	I0703 23:39:07.455721   45138 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0703 23:39:07.455734   45138 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0703 23:39:07.455746   45138 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0703 23:39:07.455757   45138 command_runner.go:130] > #      only add mounts it finds in this file.
	I0703 23:39:07.455762   45138 command_runner.go:130] > #
	I0703 23:39:07.455771   45138 command_runner.go:130] > # default_mounts_file = ""
	I0703 23:39:07.455779   45138 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0703 23:39:07.455796   45138 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0703 23:39:07.455802   45138 command_runner.go:130] > pids_limit = 1024
	I0703 23:39:07.455814   45138 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0703 23:39:07.455827   45138 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0703 23:39:07.455838   45138 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0703 23:39:07.455853   45138 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0703 23:39:07.455858   45138 command_runner.go:130] > # log_size_max = -1
	I0703 23:39:07.455865   45138 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0703 23:39:07.455883   45138 command_runner.go:130] > # log_to_journald = false
	I0703 23:39:07.455896   45138 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0703 23:39:07.455915   45138 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0703 23:39:07.455926   45138 command_runner.go:130] > # Path to directory for container attach sockets.
	I0703 23:39:07.455936   45138 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0703 23:39:07.455943   45138 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0703 23:39:07.455951   45138 command_runner.go:130] > # bind_mount_prefix = ""
	I0703 23:39:07.455959   45138 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0703 23:39:07.455968   45138 command_runner.go:130] > # read_only = false
	I0703 23:39:07.455979   45138 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0703 23:39:07.455991   45138 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0703 23:39:07.456001   45138 command_runner.go:130] > # live configuration reload.
	I0703 23:39:07.456007   45138 command_runner.go:130] > # log_level = "info"
	I0703 23:39:07.456017   45138 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0703 23:39:07.456028   45138 command_runner.go:130] > # This option supports live configuration reload.
	I0703 23:39:07.456036   45138 command_runner.go:130] > # log_filter = ""
	I0703 23:39:07.456047   45138 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0703 23:39:07.456061   45138 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0703 23:39:07.456067   45138 command_runner.go:130] > # separated by comma.
	I0703 23:39:07.456082   45138 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0703 23:39:07.456091   45138 command_runner.go:130] > # uid_mappings = ""
	I0703 23:39:07.456101   45138 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0703 23:39:07.456113   45138 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0703 23:39:07.456122   45138 command_runner.go:130] > # separated by comma.
	I0703 23:39:07.456134   45138 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0703 23:39:07.456143   45138 command_runner.go:130] > # gid_mappings = ""
	I0703 23:39:07.456153   45138 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0703 23:39:07.456164   45138 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0703 23:39:07.456179   45138 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0703 23:39:07.456193   45138 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0703 23:39:07.456199   45138 command_runner.go:130] > # minimum_mappable_uid = -1
	I0703 23:39:07.456208   45138 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0703 23:39:07.456221   45138 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0703 23:39:07.456234   45138 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0703 23:39:07.456249   45138 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0703 23:39:07.456258   45138 command_runner.go:130] > # minimum_mappable_gid = -1
	I0703 23:39:07.456268   45138 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0703 23:39:07.456281   45138 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0703 23:39:07.456301   45138 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0703 23:39:07.456311   45138 command_runner.go:130] > # ctr_stop_timeout = 30
	I0703 23:39:07.456320   45138 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0703 23:39:07.456329   45138 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0703 23:39:07.456334   45138 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0703 23:39:07.456341   45138 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0703 23:39:07.456345   45138 command_runner.go:130] > drop_infra_ctr = false
	I0703 23:39:07.456355   45138 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0703 23:39:07.456366   45138 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0703 23:39:07.456380   45138 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0703 23:39:07.456388   45138 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0703 23:39:07.456397   45138 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0703 23:39:07.456409   45138 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0703 23:39:07.456421   45138 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0703 23:39:07.456510   45138 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0703 23:39:07.456522   45138 command_runner.go:130] > # shared_cpuset = ""
	I0703 23:39:07.456534   45138 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0703 23:39:07.456545   45138 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0703 23:39:07.456552   45138 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0703 23:39:07.456568   45138 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0703 23:39:07.456577   45138 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0703 23:39:07.456587   45138 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0703 23:39:07.456599   45138 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0703 23:39:07.456609   45138 command_runner.go:130] > # enable_criu_support = false
	I0703 23:39:07.456617   45138 command_runner.go:130] > # Enable/disable the generation of the container,
	I0703 23:39:07.456636   45138 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0703 23:39:07.456647   45138 command_runner.go:130] > # enable_pod_events = false
	I0703 23:39:07.456657   45138 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0703 23:39:07.456681   45138 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0703 23:39:07.456691   45138 command_runner.go:130] > # default_runtime = "runc"
	I0703 23:39:07.456699   45138 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0703 23:39:07.456709   45138 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0703 23:39:07.456725   45138 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0703 23:39:07.456737   45138 command_runner.go:130] > # creation as a file is not desired either.
	I0703 23:39:07.456770   45138 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0703 23:39:07.456788   45138 command_runner.go:130] > # the hostname is being managed dynamically.
	I0703 23:39:07.456799   45138 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0703 23:39:07.456806   45138 command_runner.go:130] > # ]
	I0703 23:39:07.456818   45138 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0703 23:39:07.456831   45138 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0703 23:39:07.456843   45138 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0703 23:39:07.456853   45138 command_runner.go:130] > # Each entry in the table should follow the format:
	I0703 23:39:07.456861   45138 command_runner.go:130] > #
	I0703 23:39:07.456868   45138 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0703 23:39:07.456878   45138 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0703 23:39:07.456936   45138 command_runner.go:130] > # runtime_type = "oci"
	I0703 23:39:07.456947   45138 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0703 23:39:07.456958   45138 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0703 23:39:07.456966   45138 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0703 23:39:07.456973   45138 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0703 23:39:07.456979   45138 command_runner.go:130] > # monitor_env = []
	I0703 23:39:07.456989   45138 command_runner.go:130] > # privileged_without_host_devices = false
	I0703 23:39:07.457000   45138 command_runner.go:130] > # allowed_annotations = []
	I0703 23:39:07.457011   45138 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0703 23:39:07.457019   45138 command_runner.go:130] > # Where:
	I0703 23:39:07.457030   45138 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0703 23:39:07.457042   45138 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0703 23:39:07.457054   45138 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0703 23:39:07.457063   45138 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0703 23:39:07.457071   45138 command_runner.go:130] > #   in $PATH.
	I0703 23:39:07.457093   45138 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0703 23:39:07.457106   45138 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0703 23:39:07.457127   45138 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0703 23:39:07.457136   45138 command_runner.go:130] > #   state.
	I0703 23:39:07.457143   45138 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0703 23:39:07.457154   45138 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0703 23:39:07.457173   45138 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0703 23:39:07.457185   45138 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0703 23:39:07.457197   45138 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0703 23:39:07.457210   45138 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0703 23:39:07.457221   45138 command_runner.go:130] > #   The currently recognized values are:
	I0703 23:39:07.457237   45138 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0703 23:39:07.457255   45138 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0703 23:39:07.457268   45138 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0703 23:39:07.457280   45138 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0703 23:39:07.457295   45138 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0703 23:39:07.457308   45138 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0703 23:39:07.457317   45138 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0703 23:39:07.457329   45138 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0703 23:39:07.457343   45138 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0703 23:39:07.457356   45138 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0703 23:39:07.457366   45138 command_runner.go:130] > #   deprecated option "conmon".
	I0703 23:39:07.457380   45138 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0703 23:39:07.457391   45138 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0703 23:39:07.457405   45138 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0703 23:39:07.457415   45138 command_runner.go:130] > #   should be moved to the container's cgroup
	I0703 23:39:07.457425   45138 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0703 23:39:07.457437   45138 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0703 23:39:07.457451   45138 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0703 23:39:07.457462   45138 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0703 23:39:07.457470   45138 command_runner.go:130] > #
	I0703 23:39:07.457477   45138 command_runner.go:130] > # Using the seccomp notifier feature:
	I0703 23:39:07.457484   45138 command_runner.go:130] > #
	I0703 23:39:07.457492   45138 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0703 23:39:07.457501   45138 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0703 23:39:07.457514   45138 command_runner.go:130] > #
	I0703 23:39:07.457528   45138 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0703 23:39:07.457541   45138 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0703 23:39:07.457549   45138 command_runner.go:130] > #
	I0703 23:39:07.457559   45138 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0703 23:39:07.457567   45138 command_runner.go:130] > # feature.
	I0703 23:39:07.457573   45138 command_runner.go:130] > #
	I0703 23:39:07.457580   45138 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0703 23:39:07.457590   45138 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0703 23:39:07.457603   45138 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0703 23:39:07.457618   45138 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0703 23:39:07.457631   45138 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0703 23:39:07.457644   45138 command_runner.go:130] > #
	I0703 23:39:07.457656   45138 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0703 23:39:07.457665   45138 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0703 23:39:07.457672   45138 command_runner.go:130] > #
	I0703 23:39:07.457681   45138 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0703 23:39:07.457693   45138 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0703 23:39:07.457701   45138 command_runner.go:130] > #
	I0703 23:39:07.457712   45138 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0703 23:39:07.457724   45138 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0703 23:39:07.457732   45138 command_runner.go:130] > # limitation.
	I0703 23:39:07.457740   45138 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0703 23:39:07.457748   45138 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0703 23:39:07.457755   45138 command_runner.go:130] > runtime_type = "oci"
	I0703 23:39:07.457761   45138 command_runner.go:130] > runtime_root = "/run/runc"
	I0703 23:39:07.457769   45138 command_runner.go:130] > runtime_config_path = ""
	I0703 23:39:07.457780   45138 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0703 23:39:07.457788   45138 command_runner.go:130] > monitor_cgroup = "pod"
	I0703 23:39:07.457797   45138 command_runner.go:130] > monitor_exec_cgroup = ""
	I0703 23:39:07.457806   45138 command_runner.go:130] > monitor_env = [
	I0703 23:39:07.457818   45138 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0703 23:39:07.457823   45138 command_runner.go:130] > ]
	I0703 23:39:07.457832   45138 command_runner.go:130] > privileged_without_host_devices = false
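Following the per-handler fields documented in the comment block above, a second [crio.runtime.runtimes.*] entry would take the same shape as the runc stanza. The crun handler below is purely hypothetical (only runc is defined in this run's config) and its paths are assumptions:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"             # assumed path, not present in this config
	runtime_type = "oci"
	runtime_root = "/run/crun"                 # assumed state directory
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",         # one of the recognized annotations listed above
	]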
	I0703 23:39:07.457839   45138 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0703 23:39:07.457849   45138 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0703 23:39:07.457862   45138 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0703 23:39:07.457877   45138 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0703 23:39:07.457892   45138 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0703 23:39:07.457904   45138 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0703 23:39:07.457920   45138 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0703 23:39:07.457940   45138 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0703 23:39:07.457953   45138 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0703 23:39:07.457966   45138 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0703 23:39:07.457979   45138 command_runner.go:130] > # Example:
	I0703 23:39:07.457989   45138 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0703 23:39:07.457999   45138 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0703 23:39:07.458011   45138 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0703 23:39:07.458029   45138 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0703 23:39:07.458038   45138 command_runner.go:130] > # cpuset = 0
	I0703 23:39:07.458047   45138 command_runner.go:130] > # cpushares = "0-1"
	I0703 23:39:07.458055   45138 command_runner.go:130] > # Where:
	I0703 23:39:07.458066   45138 command_runner.go:130] > # The workload name is workload-type.
	I0703 23:39:07.458083   45138 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0703 23:39:07.458095   45138 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0703 23:39:07.458107   45138 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0703 23:39:07.458122   45138 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0703 23:39:07.458134   45138 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
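Putting the commented example above together, a complete workload stanza pairs the activation annotation with per-resource defaults. The workload name and resource values below are illustrative only, and the exact value formats are an assumption (the comment block above quotes them differently):

	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"       # pods opt in by carrying this annotation key
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.workload-type.resources]
	cpuset = "0-1"        # illustrative Linux CPU list
	cpushares = 512       # illustrative default share count, overridable per container via the prefix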
	I0703 23:39:07.458146   45138 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0703 23:39:07.458157   45138 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0703 23:39:07.458187   45138 command_runner.go:130] > # Default value is set to true
	I0703 23:39:07.458195   45138 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0703 23:39:07.458202   45138 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0703 23:39:07.458209   45138 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0703 23:39:07.458219   45138 command_runner.go:130] > # Default value is set to 'false'
	I0703 23:39:07.458230   45138 command_runner.go:130] > # disable_hostport_mapping = false
	I0703 23:39:07.458244   45138 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0703 23:39:07.458251   45138 command_runner.go:130] > #
	I0703 23:39:07.458264   45138 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0703 23:39:07.458277   45138 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0703 23:39:07.458286   45138 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0703 23:39:07.458295   45138 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0703 23:39:07.458304   45138 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0703 23:39:07.458310   45138 command_runner.go:130] > [crio.image]
	I0703 23:39:07.458319   45138 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0703 23:39:07.458326   45138 command_runner.go:130] > # default_transport = "docker://"
	I0703 23:39:07.458339   45138 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0703 23:39:07.458349   45138 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0703 23:39:07.458356   45138 command_runner.go:130] > # global_auth_file = ""
	I0703 23:39:07.458363   45138 command_runner.go:130] > # The image used to instantiate infra containers.
	I0703 23:39:07.458368   45138 command_runner.go:130] > # This option supports live configuration reload.
	I0703 23:39:07.458374   45138 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0703 23:39:07.458384   45138 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0703 23:39:07.458394   45138 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0703 23:39:07.458409   45138 command_runner.go:130] > # This option supports live configuration reload.
	I0703 23:39:07.458416   45138 command_runner.go:130] > # pause_image_auth_file = ""
	I0703 23:39:07.458425   45138 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0703 23:39:07.458434   45138 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0703 23:39:07.458444   45138 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0703 23:39:07.458450   45138 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0703 23:39:07.458454   45138 command_runner.go:130] > # pause_command = "/pause"
	I0703 23:39:07.458461   45138 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0703 23:39:07.458471   45138 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0703 23:39:07.458480   45138 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0703 23:39:07.458493   45138 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0703 23:39:07.458503   45138 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0703 23:39:07.458515   45138 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0703 23:39:07.458524   45138 command_runner.go:130] > # pinned_images = [
	I0703 23:39:07.458532   45138 command_runner.go:130] > # ]
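A hypothetical pinned_images list illustrating the three pattern styles described above (exact, trailing glob, and keyword); these entries are examples only and are not set in this run:

	pinned_images = [
		"registry.k8s.io/pause:3.9",    # exact match: must match the entire name
		"registry.k8s.io/kube-*",       # glob match: wildcard at the end
		"*coredns*",                    # keyword match: wildcards on both ends
	]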
	I0703 23:39:07.458539   45138 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0703 23:39:07.458550   45138 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0703 23:39:07.458564   45138 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0703 23:39:07.458577   45138 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0703 23:39:07.458587   45138 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0703 23:39:07.458596   45138 command_runner.go:130] > # signature_policy = ""
	I0703 23:39:07.458604   45138 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0703 23:39:07.458618   45138 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0703 23:39:07.458627   45138 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0703 23:39:07.458639   45138 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0703 23:39:07.458651   45138 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0703 23:39:07.458662   45138 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0703 23:39:07.458677   45138 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0703 23:39:07.458690   45138 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0703 23:39:07.458701   45138 command_runner.go:130] > # changing them here.
	I0703 23:39:07.458709   45138 command_runner.go:130] > # insecure_registries = [
	I0703 23:39:07.458712   45138 command_runner.go:130] > # ]
	I0703 23:39:07.458725   45138 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0703 23:39:07.458737   45138 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0703 23:39:07.458746   45138 command_runner.go:130] > # image_volumes = "mkdir"
	I0703 23:39:07.458758   45138 command_runner.go:130] > # Temporary directory to use for storing big files
	I0703 23:39:07.458776   45138 command_runner.go:130] > # big_files_temporary_dir = ""
	I0703 23:39:07.458788   45138 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0703 23:39:07.458795   45138 command_runner.go:130] > # CNI plugins.
	I0703 23:39:07.458799   45138 command_runner.go:130] > [crio.network]
	I0703 23:39:07.458810   45138 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0703 23:39:07.458822   45138 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0703 23:39:07.458829   45138 command_runner.go:130] > # cni_default_network = ""
	I0703 23:39:07.458841   45138 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0703 23:39:07.458851   45138 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0703 23:39:07.458863   45138 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0703 23:39:07.458875   45138 command_runner.go:130] > # plugin_dirs = [
	I0703 23:39:07.458882   45138 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0703 23:39:07.458887   45138 command_runner.go:130] > # ]
	I0703 23:39:07.458898   45138 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0703 23:39:07.458912   45138 command_runner.go:130] > [crio.metrics]
	I0703 23:39:07.458923   45138 command_runner.go:130] > # Globally enable or disable metrics support.
	I0703 23:39:07.458931   45138 command_runner.go:130] > enable_metrics = true
	I0703 23:39:07.458942   45138 command_runner.go:130] > # Specify enabled metrics collectors.
	I0703 23:39:07.458952   45138 command_runner.go:130] > # Per default all metrics are enabled.
	I0703 23:39:07.458962   45138 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0703 23:39:07.458970   45138 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0703 23:39:07.458978   45138 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0703 23:39:07.458984   45138 command_runner.go:130] > # metrics_collectors = [
	I0703 23:39:07.458987   45138 command_runner.go:130] > # 	"operations",
	I0703 23:39:07.458997   45138 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0703 23:39:07.459004   45138 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0703 23:39:07.459014   45138 command_runner.go:130] > # 	"operations_errors",
	I0703 23:39:07.459023   45138 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0703 23:39:07.459033   45138 command_runner.go:130] > # 	"image_pulls_by_name",
	I0703 23:39:07.459043   45138 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0703 23:39:07.459052   45138 command_runner.go:130] > # 	"image_pulls_failures",
	I0703 23:39:07.459061   45138 command_runner.go:130] > # 	"image_pulls_successes",
	I0703 23:39:07.459069   45138 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0703 23:39:07.459073   45138 command_runner.go:130] > # 	"image_layer_reuse",
	I0703 23:39:07.459079   45138 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0703 23:39:07.459086   45138 command_runner.go:130] > # 	"containers_oom_total",
	I0703 23:39:07.459099   45138 command_runner.go:130] > # 	"containers_oom",
	I0703 23:39:07.459105   45138 command_runner.go:130] > # 	"processes_defunct",
	I0703 23:39:07.459109   45138 command_runner.go:130] > # 	"operations_total",
	I0703 23:39:07.459117   45138 command_runner.go:130] > # 	"operations_latency_seconds",
	I0703 23:39:07.459121   45138 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0703 23:39:07.459127   45138 command_runner.go:130] > # 	"operations_errors_total",
	I0703 23:39:07.459131   45138 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0703 23:39:07.459137   45138 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0703 23:39:07.459146   45138 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0703 23:39:07.459156   45138 command_runner.go:130] > # 	"image_pulls_success_total",
	I0703 23:39:07.459166   45138 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0703 23:39:07.459180   45138 command_runner.go:130] > # 	"containers_oom_count_total",
	I0703 23:39:07.459191   45138 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0703 23:39:07.459200   45138 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0703 23:39:07.459208   45138 command_runner.go:130] > # ]
	I0703 23:39:07.459215   45138 command_runner.go:130] > # The port on which the metrics server will listen.
	I0703 23:39:07.459221   45138 command_runner.go:130] > # metrics_port = 9090
	I0703 23:39:07.459226   45138 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0703 23:39:07.459232   45138 command_runner.go:130] > # metrics_socket = ""
	I0703 23:39:07.459237   45138 command_runner.go:130] > # The certificate for the secure metrics server.
	I0703 23:39:07.459245   45138 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0703 23:39:07.459254   45138 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0703 23:39:07.459261   45138 command_runner.go:130] > # certificate on any modification event.
	I0703 23:39:07.459265   45138 command_runner.go:130] > # metrics_cert = ""
	I0703 23:39:07.459273   45138 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0703 23:39:07.459284   45138 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0703 23:39:07.459290   45138 command_runner.go:130] > # metrics_key = ""
	I0703 23:39:07.459295   45138 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0703 23:39:07.459302   45138 command_runner.go:130] > [crio.tracing]
	I0703 23:39:07.459307   45138 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0703 23:39:07.459313   45138 command_runner.go:130] > # enable_tracing = false
	I0703 23:39:07.459319   45138 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0703 23:39:07.459325   45138 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0703 23:39:07.459331   45138 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0703 23:39:07.459338   45138 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0703 23:39:07.459342   45138 command_runner.go:130] > # CRI-O NRI configuration.
	I0703 23:39:07.459355   45138 command_runner.go:130] > [crio.nri]
	I0703 23:39:07.459362   45138 command_runner.go:130] > # Globally enable or disable NRI.
	I0703 23:39:07.459368   45138 command_runner.go:130] > # enable_nri = false
	I0703 23:39:07.459378   45138 command_runner.go:130] > # NRI socket to listen on.
	I0703 23:39:07.459390   45138 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0703 23:39:07.459397   45138 command_runner.go:130] > # NRI plugin directory to use.
	I0703 23:39:07.459402   45138 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0703 23:39:07.459411   45138 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0703 23:39:07.459416   45138 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0703 23:39:07.459423   45138 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0703 23:39:07.459429   45138 command_runner.go:130] > # nri_disable_connections = false
	I0703 23:39:07.459435   45138 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0703 23:39:07.459440   45138 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0703 23:39:07.459447   45138 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0703 23:39:07.459452   45138 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0703 23:39:07.459457   45138 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0703 23:39:07.459463   45138 command_runner.go:130] > [crio.stats]
	I0703 23:39:07.459468   45138 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0703 23:39:07.459475   45138 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0703 23:39:07.459480   45138 command_runner.go:130] > # stats_collection_period = 0
	I0703 23:39:07.459512   45138 command_runner.go:130] ! time="2024-07-03 23:39:07.417534095Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0703 23:39:07.459527   45138 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
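The config dump above ends with CRI-O 1.29.1 starting; metrics are enabled (enable_metrics = true) on the default metrics_port of 9090. For reference, a minimal shell sketch for spot-checking the runtime and its Prometheus endpoint on the node (illustrative only, not commands the test runs; the port value is assumed from the defaults shown above):

    # Confirm the runtime answers on the CRI socket used throughout this run.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info

    # With enable_metrics = true and the default metrics_port (9090), the
    # Prometheus endpoint can be scraped locally:
    curl -s http://127.0.0.1:9090/metrics | grep -E 'crio_operations|image_pulls' | head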
	I0703 23:39:07.459631   45138 cni.go:84] Creating CNI manager for ""
	I0703 23:39:07.459639   45138 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0703 23:39:07.459647   45138 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0703 23:39:07.459676   45138 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.57 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-184661 NodeName:multinode-184661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.57 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0703 23:39:07.459833   45138 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.57
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-184661"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.57
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.57"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0703 23:39:07.459978   45138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0703 23:39:07.470855   45138 command_runner.go:130] > kubeadm
	I0703 23:39:07.470884   45138 command_runner.go:130] > kubectl
	I0703 23:39:07.470888   45138 command_runner.go:130] > kubelet
	I0703 23:39:07.470910   45138 binaries.go:44] Found k8s binaries, skipping transfer
	I0703 23:39:07.470959   45138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0703 23:39:07.481273   45138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0703 23:39:07.500296   45138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 23:39:07.518906   45138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
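The rendered kubeadm config shown above is what gets written to /var/tmp/minikube/kubeadm.yaml.new here. A hedged sketch of how such a generated file could be sanity-checked by hand on the node (illustrative; the test does not run these commands):

    # Exercise the full init flow against the generated file without mutating the node.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run

    # Compare against kubeadm's own defaulted configuration for this version.
    sudo kubeadm config print init-defaults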
	I0703 23:39:07.538056   45138 ssh_runner.go:195] Run: grep 192.168.39.57	control-plane.minikube.internal$ /etc/hosts
	I0703 23:39:07.542340   45138 command_runner.go:130] > 192.168.39.57	control-plane.minikube.internal
	I0703 23:39:07.542432   45138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:39:07.691341   45138 ssh_runner.go:195] Run: sudo systemctl start kubelet
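After the daemon-reload and kubelet start above, the unit state can be verified directly; a small sketch (assuming systemd on the guest, as used throughout this run):

    sudo systemctl is-active kubelet             # expect "active"
    sudo journalctl -u kubelet --no-pager -n 20  # most recent kubelet log lines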
	I0703 23:39:07.708293   45138 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661 for IP: 192.168.39.57
	I0703 23:39:07.708317   45138 certs.go:194] generating shared ca certs ...
	I0703 23:39:07.708341   45138 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:39:07.708484   45138 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0703 23:39:07.708519   45138 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0703 23:39:07.708528   45138 certs.go:256] generating profile certs ...
	I0703 23:39:07.708614   45138 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/client.key
	I0703 23:39:07.708670   45138 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/apiserver.key.5a180a79
	I0703 23:39:07.708703   45138 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/proxy-client.key
	I0703 23:39:07.708713   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0703 23:39:07.708727   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0703 23:39:07.708740   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0703 23:39:07.708752   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0703 23:39:07.708764   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0703 23:39:07.708776   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0703 23:39:07.708789   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0703 23:39:07.708802   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0703 23:39:07.708853   45138 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0703 23:39:07.708880   45138 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0703 23:39:07.708889   45138 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0703 23:39:07.708911   45138 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0703 23:39:07.708933   45138 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0703 23:39:07.708953   45138 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0703 23:39:07.708991   45138 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:39:07.709019   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem -> /usr/share/ca-certificates/16574.pem
	I0703 23:39:07.709032   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /usr/share/ca-certificates/165742.pem
	I0703 23:39:07.709045   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:39:07.709602   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 23:39:07.737821   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 23:39:07.764513   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 23:39:07.791135   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0703 23:39:07.818515   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0703 23:39:07.845810   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0703 23:39:07.872860   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 23:39:07.899330   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0703 23:39:07.925389   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0703 23:39:07.950986   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0703 23:39:07.976269   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 23:39:08.001548   45138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0703 23:39:08.019375   45138 ssh_runner.go:195] Run: openssl version
	I0703 23:39:08.025753   45138 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0703 23:39:08.026020   45138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0703 23:39:08.038478   45138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0703 23:39:08.043228   45138 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0703 23:39:08.043320   45138 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0703 23:39:08.043370   45138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0703 23:39:08.049159   45138 command_runner.go:130] > 51391683
	I0703 23:39:08.049239   45138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0703 23:39:08.059769   45138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0703 23:39:08.071828   45138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0703 23:39:08.076593   45138 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0703 23:39:08.076621   45138 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0703 23:39:08.076671   45138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0703 23:39:08.086595   45138 command_runner.go:130] > 3ec20f2e
	I0703 23:39:08.086685   45138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0703 23:39:08.099207   45138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 23:39:08.113422   45138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:39:08.118408   45138 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:39:08.118575   45138 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:39:08.118638   45138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:39:08.124796   45138 command_runner.go:130] > b5213941
	I0703 23:39:08.124909   45138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0703 23:39:08.139045   45138 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 23:39:08.144571   45138 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 23:39:08.144605   45138 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0703 23:39:08.144616   45138 command_runner.go:130] > Device: 253,1	Inode: 1057301     Links: 1
	I0703 23:39:08.144626   45138 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0703 23:39:08.144639   45138 command_runner.go:130] > Access: 2024-07-03 23:32:57.903236619 +0000
	I0703 23:39:08.144647   45138 command_runner.go:130] > Modify: 2024-07-03 23:32:57.903236619 +0000
	I0703 23:39:08.144656   45138 command_runner.go:130] > Change: 2024-07-03 23:32:57.903236619 +0000
	I0703 23:39:08.144665   45138 command_runner.go:130] >  Birth: 2024-07-03 23:32:57.903236619 +0000
	I0703 23:39:08.144762   45138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0703 23:39:08.151314   45138 command_runner.go:130] > Certificate will not expire
	I0703 23:39:08.151376   45138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0703 23:39:08.157569   45138 command_runner.go:130] > Certificate will not expire
	I0703 23:39:08.157796   45138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0703 23:39:08.164119   45138 command_runner.go:130] > Certificate will not expire
	I0703 23:39:08.164208   45138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0703 23:39:08.170821   45138 command_runner.go:130] > Certificate will not expire
	I0703 23:39:08.170898   45138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0703 23:39:08.177846   45138 command_runner.go:130] > Certificate will not expire
	I0703 23:39:08.177940   45138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0703 23:39:08.184167   45138 command_runner.go:130] > Certificate will not expire
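The expiry checks above use openssl's -checkend flag with 86400 seconds, i.e. they ask whether each certificate will still be valid 24 hours from now. A compact sketch of the same check over a few of the profile certs (paths mirror the ones checked above; the loop itself is illustrative):

    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
      sudo openssl x509 -noout -in /var/lib/minikube/certs/$c.crt -checkend 86400 \
        && echo "$c: will not expire within 24h" \
        || echo "$c: expires within 24h"
    done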
	I0703 23:39:08.184341   45138 kubeadm.go:391] StartCluster: {Name:multinode-184661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:multinode-184661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.44 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:39:08.184444   45138 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0703 23:39:08.184516   45138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0703 23:39:08.223325   45138 command_runner.go:130] > b6d9ad30a6abffde41231e7691fdff756a06e9b01043d391c45106aad145e021
	I0703 23:39:08.223355   45138 command_runner.go:130] > 9a2185302f0a5e1f651f48b4fce4e2eb903c01ea011e501d316fa2d2e9ee8e7e
	I0703 23:39:08.223364   45138 command_runner.go:130] > f90a18a533fad33f1399c7340fb4993be137dec52290c4c684b07492780a7fb8
	I0703 23:39:08.223378   45138 command_runner.go:130] > 10e806db9236b2467c48e7dd21dd18ba8d0c3c7ae5e49d70d9d2c66e626ab40b
	I0703 23:39:08.223386   45138 command_runner.go:130] > d2ab9ac7bf2fab170766c6ebd2e09978a01e95b5439fb5a10321dd0bcfbeb824
	I0703 23:39:08.223395   45138 command_runner.go:130] > ee8579204aa24c1177675a1819ee14b490f0763cc4e5b53882a54efffaca1f95
	I0703 23:39:08.223402   45138 command_runner.go:130] > a0183156fe7718a9c5da5d808f8acf30a08adb55a0a6e970038a1ed3425ff831
	I0703 23:39:08.223435   45138 command_runner.go:130] > ee987852f4c38991f05b95124ddc7975bad40e1c917593d5486b9df017fc399e
	I0703 23:39:08.225057   45138 cri.go:89] found id: "b6d9ad30a6abffde41231e7691fdff756a06e9b01043d391c45106aad145e021"
	I0703 23:39:08.225078   45138 cri.go:89] found id: "9a2185302f0a5e1f651f48b4fce4e2eb903c01ea011e501d316fa2d2e9ee8e7e"
	I0703 23:39:08.225083   45138 cri.go:89] found id: "f90a18a533fad33f1399c7340fb4993be137dec52290c4c684b07492780a7fb8"
	I0703 23:39:08.225091   45138 cri.go:89] found id: "10e806db9236b2467c48e7dd21dd18ba8d0c3c7ae5e49d70d9d2c66e626ab40b"
	I0703 23:39:08.225094   45138 cri.go:89] found id: "d2ab9ac7bf2fab170766c6ebd2e09978a01e95b5439fb5a10321dd0bcfbeb824"
	I0703 23:39:08.225097   45138 cri.go:89] found id: "ee8579204aa24c1177675a1819ee14b490f0763cc4e5b53882a54efffaca1f95"
	I0703 23:39:08.225099   45138 cri.go:89] found id: "a0183156fe7718a9c5da5d808f8acf30a08adb55a0a6e970038a1ed3425ff831"
	I0703 23:39:08.225102   45138 cri.go:89] found id: "ee987852f4c38991f05b95124ddc7975bad40e1c917593d5486b9df017fc399e"
	I0703 23:39:08.225104   45138 cri.go:89] found id: ""
	I0703 23:39:08.225150   45138 ssh_runner.go:195] Run: sudo runc list -f json
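Container discovery above filters by the kube-system namespace label via crictl and then consults runc directly. A sketch of the equivalent manual commands on the node (same filter and socket as the run above):

    # kube-system container IDs only, matching the filter used by the test.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

    # Cross-check against the low-level runtime's view of running containers.
    sudo runc list -f json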
	
	
	==> CRI-O <==
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.665000224Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720050034664975772,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=706a278e-a40c-4ae3-860d-93d68c60e951 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.665473943Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f80191ec-8028-412b-bbc1-735c20c3bdd7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.665550007Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f80191ec-8028-412b-bbc1-735c20c3bdd7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.666004325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec4970b0c834a299fd3608d292d7133a223d3e559dcfa4497f702663fe089f6d,PodSandboxId:19c8adc51ae2583daa35a6e1cfcb4d16a3f3de0bab8a0e9d754022d7d20623a2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720049989286136609,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vxz7l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bdcc46c-cbaa-4168-9496-4b9b393dc05d,},Annotations:map[string]string{io.kubernetes.container.hash: 2c1cbbc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891161ab87387b8d3a8d3cc6ea9038853d506de76ea361ccd02b75b2a8926926,PodSandboxId:d69190efd95390c916acc2e31cc816459cef1c3183a2e8ab61f58e55a2103e7f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720049955841493318,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p8ckf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28591303-b860-4d2b-9c34-3fb77062ec2d,},Annotations:map[string]string{io.kubernetes.container.hash: 26ffab31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2201946bdde9c4f2f58055f3a3a3fa8ad6ea7a035046f44fd2b90b46df3c41c2,PodSandboxId:0045d033faf88b35afd755bddb4b12e8767c7d7c95bdf01a76ef44ed0b492417,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720049955635217913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq58d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420f9d81-e376-4e42-b8e6-7c5d783a5c6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c79470c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06bcfe51368b9bd5dfe4e8fe3d5355dc1bd74ed4738d47ad8abc52817d1e67f0,PodSandboxId:1eb4f441185dd8a934b464553d077711504a1ce02d4df97d35edb32b99d04c51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720049955579119350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppwdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eda932b-d2db-481c-894d-6c0ed215c9dd,},Annotations:map[string]
string{io.kubernetes.container.hash: 219e8f9e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1364de120bc0e1595b878575334a5ee6ad2a340b08397156b5ea8224613b657,PodSandboxId:ddcd3155983f6a24f6c50fa27bb93eaca4b6396df51584383d4698d48354f158,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720049955508285971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08127733-cc97-4e47-b45f-623a612229c3,},Annotations:map[string]string{io.ku
bernetes.container.hash: 814fe246,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bedd6174dc75e02d07747d7da18b0c7d14dc35e8dc50d05ebbb394800a588a,PodSandboxId:075e262c83d198c4a40caf9960a5009020b7937b55679a30e54267eaad4f4a79,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720049951721995438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a5dc959ce82b296054e2b82f0b2beb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcebe5779fdd23d4747cb284baa2ed6ca455d61ea7f612fca0c321bdf877804e,PodSandboxId:ecb6cd672ae438c90842b7e07cf16df1364c56faa8db5eca216798fc7fec8ce5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720049951747228834,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e50192c0163f56a4a82944eb5bdd1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c932a75,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf43bf6dcaeaabe4341a0ac1083631041bb05c2e480dd6ff1c7a07084a592be,PodSandboxId:ceadb6ca914a6d9820089a935f90665ef30e13a398d8cb1a07eb3e39fff922bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720049951621615351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f32a704e81145abf5d847077ae0cd5da,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7439ac3eb6239673de5aae5f7d5e6a3b8fdf51932e38d531f3b298ba52eeed1,PodSandboxId:688ba1e228cc832bb87337ed524dd815b76418f0a259a256fb2d8a752fd16ab7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720049951631748130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1ed977e33b7e3f10504ee9c3ffc67c,},Annotations:map[string]string{io.kubernetes.container.hash: e06935f2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b6f1745f72017b7ea3c96bf0adee09adf509561be3b17cfd5c39bce5084d220,PodSandboxId:0399ae962937e974c8ab21ff30b0d620c5d488489d33cd11265b554bf97b5553,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720049646313534518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vxz7l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bdcc46c-cbaa-4168-9496-4b9b393dc05d,},Annotations:map[string]string{io.kubernetes.container.hash: 2c1cbbc4,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d9ad30a6abffde41231e7691fdff756a06e9b01043d391c45106aad145e021,PodSandboxId:bfef010a94ec56edd9ff1e07e4e495b6cec8f3422a2ade57b109bfa7a18e322c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720049603999317678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08127733-cc97-4e47-b45f-623a612229c3,},Annotations:map[string]string{io.kubernetes.container.hash: 814fe246,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2185302f0a5e1f651f48b4fce4e2eb903c01ea011e501d316fa2d2e9ee8e7e,PodSandboxId:d0f2c0c960b988f106e80eee9d7ddb3765f43c3716bea084ae299dadf5370207,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720049603419456445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq58d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420f9d81-e376-4e42-b8e6-7c5d783a5c6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c79470c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90a18a533fad33f1399c7340fb4993be137dec52290c4c684b07492780a7fb8,PodSandboxId:d62314963eab873c101f1e84859396a6d975f1d2e74581c63e26321c8d290d7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720049601583183915,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p8ckf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 28591303-b860-4d2b-9c34-3fb77062ec2d,},Annotations:map[string]string{io.kubernetes.container.hash: 26ffab31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e806db9236b2467c48e7dd21dd18ba8d0c3c7ae5e49d70d9d2c66e626ab40b,PodSandboxId:f7aa32a824ac9218bc628d4b0321e59620011e79df123e4c01e7ef789b1bbaac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720049601156072183,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppwdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eda932b-d2db-481c-894d-
6c0ed215c9dd,},Annotations:map[string]string{io.kubernetes.container.hash: 219e8f9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ab9ac7bf2fab170766c6ebd2e09978a01e95b5439fb5a10321dd0bcfbeb824,PodSandboxId:783603acdf170f727c1f9d9168781d57befa39910fa645bf75425c016988c8b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720049581741963458,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e50192c0163f56a4a82944eb5bdd1d,},Annotations:map[string]string{
io.kubernetes.container.hash: 3c932a75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8579204aa24c1177675a1819ee14b490f0763cc4e5b53882a54efffaca1f95,PodSandboxId:94d8a77051e1b7ab63b093029c110b1dd33eafbd07203eb6e74033e77a497dbc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720049581707130260,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f32a704e81145abf5d847077ae0cd5da,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0183156fe7718a9c5da5d808f8acf30a08adb55a0a6e970038a1ed3425ff831,PodSandboxId:065c6eec4a4ef335822d1a0f8d0b0e706d8b27b521425f5a88b2f4d892137da3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720049581702002675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a5dc959ce82b296054e2b82f0b2beb,},Annotations:map[string]string{io.
kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee987852f4c38991f05b95124ddc7975bad40e1c917593d5486b9df017fc399e,PodSandboxId:7dfc9760670b784e1b3401cfac7aa3ae230898ca96b0aa2371e370926f1981cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720049581674046557,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1ed977e33b7e3f10504ee9c3ffc67c,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e06935f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f80191ec-8028-412b-bbc1-735c20c3bdd7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.718533500Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=717d56b0-b3f8-430d-8817-c15e0e2774b6 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.718639102Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=717d56b0-b3f8-430d-8817-c15e0e2774b6 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.720012564Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be1a26b5-062c-4318-b52c-f2edd36ecb26 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.720421700Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720050034720397441,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be1a26b5-062c-4318-b52c-f2edd36ecb26 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.720950771Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3cb6ac1-bf89-435a-a467-abb165652a48 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.721027853Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3cb6ac1-bf89-435a-a467-abb165652a48 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.721373535Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec4970b0c834a299fd3608d292d7133a223d3e559dcfa4497f702663fe089f6d,PodSandboxId:19c8adc51ae2583daa35a6e1cfcb4d16a3f3de0bab8a0e9d754022d7d20623a2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720049989286136609,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vxz7l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bdcc46c-cbaa-4168-9496-4b9b393dc05d,},Annotations:map[string]string{io.kubernetes.container.hash: 2c1cbbc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891161ab87387b8d3a8d3cc6ea9038853d506de76ea361ccd02b75b2a8926926,PodSandboxId:d69190efd95390c916acc2e31cc816459cef1c3183a2e8ab61f58e55a2103e7f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720049955841493318,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p8ckf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28591303-b860-4d2b-9c34-3fb77062ec2d,},Annotations:map[string]string{io.kubernetes.container.hash: 26ffab31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2201946bdde9c4f2f58055f3a3a3fa8ad6ea7a035046f44fd2b90b46df3c41c2,PodSandboxId:0045d033faf88b35afd755bddb4b12e8767c7d7c95bdf01a76ef44ed0b492417,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720049955635217913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq58d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420f9d81-e376-4e42-b8e6-7c5d783a5c6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c79470c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06bcfe51368b9bd5dfe4e8fe3d5355dc1bd74ed4738d47ad8abc52817d1e67f0,PodSandboxId:1eb4f441185dd8a934b464553d077711504a1ce02d4df97d35edb32b99d04c51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720049955579119350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppwdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eda932b-d2db-481c-894d-6c0ed215c9dd,},Annotations:map[string]
string{io.kubernetes.container.hash: 219e8f9e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1364de120bc0e1595b878575334a5ee6ad2a340b08397156b5ea8224613b657,PodSandboxId:ddcd3155983f6a24f6c50fa27bb93eaca4b6396df51584383d4698d48354f158,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720049955508285971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08127733-cc97-4e47-b45f-623a612229c3,},Annotations:map[string]string{io.ku
bernetes.container.hash: 814fe246,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bedd6174dc75e02d07747d7da18b0c7d14dc35e8dc50d05ebbb394800a588a,PodSandboxId:075e262c83d198c4a40caf9960a5009020b7937b55679a30e54267eaad4f4a79,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720049951721995438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a5dc959ce82b296054e2b82f0b2beb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcebe5779fdd23d4747cb284baa2ed6ca455d61ea7f612fca0c321bdf877804e,PodSandboxId:ecb6cd672ae438c90842b7e07cf16df1364c56faa8db5eca216798fc7fec8ce5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720049951747228834,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e50192c0163f56a4a82944eb5bdd1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c932a75,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf43bf6dcaeaabe4341a0ac1083631041bb05c2e480dd6ff1c7a07084a592be,PodSandboxId:ceadb6ca914a6d9820089a935f90665ef30e13a398d8cb1a07eb3e39fff922bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720049951621615351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f32a704e81145abf5d847077ae0cd5da,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7439ac3eb6239673de5aae5f7d5e6a3b8fdf51932e38d531f3b298ba52eeed1,PodSandboxId:688ba1e228cc832bb87337ed524dd815b76418f0a259a256fb2d8a752fd16ab7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720049951631748130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1ed977e33b7e3f10504ee9c3ffc67c,},Annotations:map[string]string{io.kubernetes.container.hash: e06935f2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b6f1745f72017b7ea3c96bf0adee09adf509561be3b17cfd5c39bce5084d220,PodSandboxId:0399ae962937e974c8ab21ff30b0d620c5d488489d33cd11265b554bf97b5553,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720049646313534518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vxz7l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bdcc46c-cbaa-4168-9496-4b9b393dc05d,},Annotations:map[string]string{io.kubernetes.container.hash: 2c1cbbc4,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d9ad30a6abffde41231e7691fdff756a06e9b01043d391c45106aad145e021,PodSandboxId:bfef010a94ec56edd9ff1e07e4e495b6cec8f3422a2ade57b109bfa7a18e322c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720049603999317678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08127733-cc97-4e47-b45f-623a612229c3,},Annotations:map[string]string{io.kubernetes.container.hash: 814fe246,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2185302f0a5e1f651f48b4fce4e2eb903c01ea011e501d316fa2d2e9ee8e7e,PodSandboxId:d0f2c0c960b988f106e80eee9d7ddb3765f43c3716bea084ae299dadf5370207,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720049603419456445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq58d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420f9d81-e376-4e42-b8e6-7c5d783a5c6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c79470c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90a18a533fad33f1399c7340fb4993be137dec52290c4c684b07492780a7fb8,PodSandboxId:d62314963eab873c101f1e84859396a6d975f1d2e74581c63e26321c8d290d7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720049601583183915,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p8ckf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 28591303-b860-4d2b-9c34-3fb77062ec2d,},Annotations:map[string]string{io.kubernetes.container.hash: 26ffab31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e806db9236b2467c48e7dd21dd18ba8d0c3c7ae5e49d70d9d2c66e626ab40b,PodSandboxId:f7aa32a824ac9218bc628d4b0321e59620011e79df123e4c01e7ef789b1bbaac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720049601156072183,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppwdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eda932b-d2db-481c-894d-
6c0ed215c9dd,},Annotations:map[string]string{io.kubernetes.container.hash: 219e8f9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ab9ac7bf2fab170766c6ebd2e09978a01e95b5439fb5a10321dd0bcfbeb824,PodSandboxId:783603acdf170f727c1f9d9168781d57befa39910fa645bf75425c016988c8b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720049581741963458,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e50192c0163f56a4a82944eb5bdd1d,},Annotations:map[string]string{
io.kubernetes.container.hash: 3c932a75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8579204aa24c1177675a1819ee14b490f0763cc4e5b53882a54efffaca1f95,PodSandboxId:94d8a77051e1b7ab63b093029c110b1dd33eafbd07203eb6e74033e77a497dbc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720049581707130260,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f32a704e81145abf5d847077ae0cd5da,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0183156fe7718a9c5da5d808f8acf30a08adb55a0a6e970038a1ed3425ff831,PodSandboxId:065c6eec4a4ef335822d1a0f8d0b0e706d8b27b521425f5a88b2f4d892137da3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720049581702002675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a5dc959ce82b296054e2b82f0b2beb,},Annotations:map[string]string{io.
kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee987852f4c38991f05b95124ddc7975bad40e1c917593d5486b9df017fc399e,PodSandboxId:7dfc9760670b784e1b3401cfac7aa3ae230898ca96b0aa2371e370926f1981cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720049581674046557,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1ed977e33b7e3f10504ee9c3ffc67c,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e06935f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3cb6ac1-bf89-435a-a467-abb165652a48 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.776067112Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84cb1000-1c9d-4e7a-9b5f-3647922a511c name=/runtime.v1.RuntimeService/Version
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.776176081Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84cb1000-1c9d-4e7a-9b5f-3647922a511c name=/runtime.v1.RuntimeService/Version
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.777744215Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6f64664-365c-40f8-a0e2-d023660cb49f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.778291971Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720050034778264465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6f64664-365c-40f8-a0e2-d023660cb49f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.778992196Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47393d5a-51ed-4bea-82d6-c8985bed7640 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.779064560Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47393d5a-51ed-4bea-82d6-c8985bed7640 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.779504224Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec4970b0c834a299fd3608d292d7133a223d3e559dcfa4497f702663fe089f6d,PodSandboxId:19c8adc51ae2583daa35a6e1cfcb4d16a3f3de0bab8a0e9d754022d7d20623a2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720049989286136609,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vxz7l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bdcc46c-cbaa-4168-9496-4b9b393dc05d,},Annotations:map[string]string{io.kubernetes.container.hash: 2c1cbbc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891161ab87387b8d3a8d3cc6ea9038853d506de76ea361ccd02b75b2a8926926,PodSandboxId:d69190efd95390c916acc2e31cc816459cef1c3183a2e8ab61f58e55a2103e7f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720049955841493318,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p8ckf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28591303-b860-4d2b-9c34-3fb77062ec2d,},Annotations:map[string]string{io.kubernetes.container.hash: 26ffab31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2201946bdde9c4f2f58055f3a3a3fa8ad6ea7a035046f44fd2b90b46df3c41c2,PodSandboxId:0045d033faf88b35afd755bddb4b12e8767c7d7c95bdf01a76ef44ed0b492417,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720049955635217913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq58d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420f9d81-e376-4e42-b8e6-7c5d783a5c6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c79470c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06bcfe51368b9bd5dfe4e8fe3d5355dc1bd74ed4738d47ad8abc52817d1e67f0,PodSandboxId:1eb4f441185dd8a934b464553d077711504a1ce02d4df97d35edb32b99d04c51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720049955579119350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppwdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eda932b-d2db-481c-894d-6c0ed215c9dd,},Annotations:map[string]
string{io.kubernetes.container.hash: 219e8f9e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1364de120bc0e1595b878575334a5ee6ad2a340b08397156b5ea8224613b657,PodSandboxId:ddcd3155983f6a24f6c50fa27bb93eaca4b6396df51584383d4698d48354f158,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720049955508285971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08127733-cc97-4e47-b45f-623a612229c3,},Annotations:map[string]string{io.ku
bernetes.container.hash: 814fe246,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bedd6174dc75e02d07747d7da18b0c7d14dc35e8dc50d05ebbb394800a588a,PodSandboxId:075e262c83d198c4a40caf9960a5009020b7937b55679a30e54267eaad4f4a79,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720049951721995438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a5dc959ce82b296054e2b82f0b2beb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcebe5779fdd23d4747cb284baa2ed6ca455d61ea7f612fca0c321bdf877804e,PodSandboxId:ecb6cd672ae438c90842b7e07cf16df1364c56faa8db5eca216798fc7fec8ce5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720049951747228834,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e50192c0163f56a4a82944eb5bdd1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c932a75,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf43bf6dcaeaabe4341a0ac1083631041bb05c2e480dd6ff1c7a07084a592be,PodSandboxId:ceadb6ca914a6d9820089a935f90665ef30e13a398d8cb1a07eb3e39fff922bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720049951621615351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f32a704e81145abf5d847077ae0cd5da,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7439ac3eb6239673de5aae5f7d5e6a3b8fdf51932e38d531f3b298ba52eeed1,PodSandboxId:688ba1e228cc832bb87337ed524dd815b76418f0a259a256fb2d8a752fd16ab7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720049951631748130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1ed977e33b7e3f10504ee9c3ffc67c,},Annotations:map[string]string{io.kubernetes.container.hash: e06935f2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b6f1745f72017b7ea3c96bf0adee09adf509561be3b17cfd5c39bce5084d220,PodSandboxId:0399ae962937e974c8ab21ff30b0d620c5d488489d33cd11265b554bf97b5553,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720049646313534518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vxz7l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bdcc46c-cbaa-4168-9496-4b9b393dc05d,},Annotations:map[string]string{io.kubernetes.container.hash: 2c1cbbc4,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d9ad30a6abffde41231e7691fdff756a06e9b01043d391c45106aad145e021,PodSandboxId:bfef010a94ec56edd9ff1e07e4e495b6cec8f3422a2ade57b109bfa7a18e322c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720049603999317678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08127733-cc97-4e47-b45f-623a612229c3,},Annotations:map[string]string{io.kubernetes.container.hash: 814fe246,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2185302f0a5e1f651f48b4fce4e2eb903c01ea011e501d316fa2d2e9ee8e7e,PodSandboxId:d0f2c0c960b988f106e80eee9d7ddb3765f43c3716bea084ae299dadf5370207,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720049603419456445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq58d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420f9d81-e376-4e42-b8e6-7c5d783a5c6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c79470c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90a18a533fad33f1399c7340fb4993be137dec52290c4c684b07492780a7fb8,PodSandboxId:d62314963eab873c101f1e84859396a6d975f1d2e74581c63e26321c8d290d7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720049601583183915,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p8ckf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 28591303-b860-4d2b-9c34-3fb77062ec2d,},Annotations:map[string]string{io.kubernetes.container.hash: 26ffab31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e806db9236b2467c48e7dd21dd18ba8d0c3c7ae5e49d70d9d2c66e626ab40b,PodSandboxId:f7aa32a824ac9218bc628d4b0321e59620011e79df123e4c01e7ef789b1bbaac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720049601156072183,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppwdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eda932b-d2db-481c-894d-
6c0ed215c9dd,},Annotations:map[string]string{io.kubernetes.container.hash: 219e8f9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ab9ac7bf2fab170766c6ebd2e09978a01e95b5439fb5a10321dd0bcfbeb824,PodSandboxId:783603acdf170f727c1f9d9168781d57befa39910fa645bf75425c016988c8b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720049581741963458,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e50192c0163f56a4a82944eb5bdd1d,},Annotations:map[string]string{
io.kubernetes.container.hash: 3c932a75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8579204aa24c1177675a1819ee14b490f0763cc4e5b53882a54efffaca1f95,PodSandboxId:94d8a77051e1b7ab63b093029c110b1dd33eafbd07203eb6e74033e77a497dbc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720049581707130260,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f32a704e81145abf5d847077ae0cd5da,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0183156fe7718a9c5da5d808f8acf30a08adb55a0a6e970038a1ed3425ff831,PodSandboxId:065c6eec4a4ef335822d1a0f8d0b0e706d8b27b521425f5a88b2f4d892137da3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720049581702002675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a5dc959ce82b296054e2b82f0b2beb,},Annotations:map[string]string{io.
kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee987852f4c38991f05b95124ddc7975bad40e1c917593d5486b9df017fc399e,PodSandboxId:7dfc9760670b784e1b3401cfac7aa3ae230898ca96b0aa2371e370926f1981cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720049581674046557,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1ed977e33b7e3f10504ee9c3ffc67c,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e06935f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47393d5a-51ed-4bea-82d6-c8985bed7640 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.829708332Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eac4fd39-5982-436e-8ff5-073cae544fb4 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.829893062Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eac4fd39-5982-436e-8ff5-073cae544fb4 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.831482789Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=225eec0b-37c0-471a-be3d-06455c02cd66 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.832157664Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720050034832121616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=225eec0b-37c0-471a-be3d-06455c02cd66 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.833123100Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6f9fca9-0a4a-4393-adc5-f295015bf008 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.833199230Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6f9fca9-0a4a-4393-adc5-f295015bf008 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:40:34 multinode-184661 crio[2855]: time="2024-07-03 23:40:34.833717726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec4970b0c834a299fd3608d292d7133a223d3e559dcfa4497f702663fe089f6d,PodSandboxId:19c8adc51ae2583daa35a6e1cfcb4d16a3f3de0bab8a0e9d754022d7d20623a2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720049989286136609,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vxz7l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bdcc46c-cbaa-4168-9496-4b9b393dc05d,},Annotations:map[string]string{io.kubernetes.container.hash: 2c1cbbc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891161ab87387b8d3a8d3cc6ea9038853d506de76ea361ccd02b75b2a8926926,PodSandboxId:d69190efd95390c916acc2e31cc816459cef1c3183a2e8ab61f58e55a2103e7f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720049955841493318,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p8ckf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28591303-b860-4d2b-9c34-3fb77062ec2d,},Annotations:map[string]string{io.kubernetes.container.hash: 26ffab31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2201946bdde9c4f2f58055f3a3a3fa8ad6ea7a035046f44fd2b90b46df3c41c2,PodSandboxId:0045d033faf88b35afd755bddb4b12e8767c7d7c95bdf01a76ef44ed0b492417,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720049955635217913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq58d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420f9d81-e376-4e42-b8e6-7c5d783a5c6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c79470c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06bcfe51368b9bd5dfe4e8fe3d5355dc1bd74ed4738d47ad8abc52817d1e67f0,PodSandboxId:1eb4f441185dd8a934b464553d077711504a1ce02d4df97d35edb32b99d04c51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720049955579119350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppwdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eda932b-d2db-481c-894d-6c0ed215c9dd,},Annotations:map[string]
string{io.kubernetes.container.hash: 219e8f9e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1364de120bc0e1595b878575334a5ee6ad2a340b08397156b5ea8224613b657,PodSandboxId:ddcd3155983f6a24f6c50fa27bb93eaca4b6396df51584383d4698d48354f158,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720049955508285971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08127733-cc97-4e47-b45f-623a612229c3,},Annotations:map[string]string{io.ku
bernetes.container.hash: 814fe246,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bedd6174dc75e02d07747d7da18b0c7d14dc35e8dc50d05ebbb394800a588a,PodSandboxId:075e262c83d198c4a40caf9960a5009020b7937b55679a30e54267eaad4f4a79,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720049951721995438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a5dc959ce82b296054e2b82f0b2beb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcebe5779fdd23d4747cb284baa2ed6ca455d61ea7f612fca0c321bdf877804e,PodSandboxId:ecb6cd672ae438c90842b7e07cf16df1364c56faa8db5eca216798fc7fec8ce5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720049951747228834,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e50192c0163f56a4a82944eb5bdd1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c932a75,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf43bf6dcaeaabe4341a0ac1083631041bb05c2e480dd6ff1c7a07084a592be,PodSandboxId:ceadb6ca914a6d9820089a935f90665ef30e13a398d8cb1a07eb3e39fff922bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720049951621615351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f32a704e81145abf5d847077ae0cd5da,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7439ac3eb6239673de5aae5f7d5e6a3b8fdf51932e38d531f3b298ba52eeed1,PodSandboxId:688ba1e228cc832bb87337ed524dd815b76418f0a259a256fb2d8a752fd16ab7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720049951631748130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1ed977e33b7e3f10504ee9c3ffc67c,},Annotations:map[string]string{io.kubernetes.container.hash: e06935f2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b6f1745f72017b7ea3c96bf0adee09adf509561be3b17cfd5c39bce5084d220,PodSandboxId:0399ae962937e974c8ab21ff30b0d620c5d488489d33cd11265b554bf97b5553,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720049646313534518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vxz7l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bdcc46c-cbaa-4168-9496-4b9b393dc05d,},Annotations:map[string]string{io.kubernetes.container.hash: 2c1cbbc4,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d9ad30a6abffde41231e7691fdff756a06e9b01043d391c45106aad145e021,PodSandboxId:bfef010a94ec56edd9ff1e07e4e495b6cec8f3422a2ade57b109bfa7a18e322c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720049603999317678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08127733-cc97-4e47-b45f-623a612229c3,},Annotations:map[string]string{io.kubernetes.container.hash: 814fe246,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2185302f0a5e1f651f48b4fce4e2eb903c01ea011e501d316fa2d2e9ee8e7e,PodSandboxId:d0f2c0c960b988f106e80eee9d7ddb3765f43c3716bea084ae299dadf5370207,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720049603419456445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq58d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420f9d81-e376-4e42-b8e6-7c5d783a5c6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c79470c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90a18a533fad33f1399c7340fb4993be137dec52290c4c684b07492780a7fb8,PodSandboxId:d62314963eab873c101f1e84859396a6d975f1d2e74581c63e26321c8d290d7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720049601583183915,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p8ckf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 28591303-b860-4d2b-9c34-3fb77062ec2d,},Annotations:map[string]string{io.kubernetes.container.hash: 26ffab31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e806db9236b2467c48e7dd21dd18ba8d0c3c7ae5e49d70d9d2c66e626ab40b,PodSandboxId:f7aa32a824ac9218bc628d4b0321e59620011e79df123e4c01e7ef789b1bbaac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720049601156072183,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppwdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eda932b-d2db-481c-894d-
6c0ed215c9dd,},Annotations:map[string]string{io.kubernetes.container.hash: 219e8f9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ab9ac7bf2fab170766c6ebd2e09978a01e95b5439fb5a10321dd0bcfbeb824,PodSandboxId:783603acdf170f727c1f9d9168781d57befa39910fa645bf75425c016988c8b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720049581741963458,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e50192c0163f56a4a82944eb5bdd1d,},Annotations:map[string]string{
io.kubernetes.container.hash: 3c932a75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8579204aa24c1177675a1819ee14b490f0763cc4e5b53882a54efffaca1f95,PodSandboxId:94d8a77051e1b7ab63b093029c110b1dd33eafbd07203eb6e74033e77a497dbc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720049581707130260,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f32a704e81145abf5d847077ae0cd5da,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0183156fe7718a9c5da5d808f8acf30a08adb55a0a6e970038a1ed3425ff831,PodSandboxId:065c6eec4a4ef335822d1a0f8d0b0e706d8b27b521425f5a88b2f4d892137da3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720049581702002675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a5dc959ce82b296054e2b82f0b2beb,},Annotations:map[string]string{io.
kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee987852f4c38991f05b95124ddc7975bad40e1c917593d5486b9df017fc399e,PodSandboxId:7dfc9760670b784e1b3401cfac7aa3ae230898ca96b0aa2371e370926f1981cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720049581674046557,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1ed977e33b7e3f10504ee9c3ffc67c,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e06935f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6f9fca9-0a4a-4393-adc5-f295015bf008 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ec4970b0c834a       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      45 seconds ago       Running             busybox                   1                   19c8adc51ae25       busybox-fc5497c4f-vxz7l
	891161ab87387       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               1                   d69190efd9539       kindnet-p8ckf
	2201946bdde9c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   0045d033faf88       coredns-7db6d8ff4d-cq58d
	06bcfe51368b9       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      About a minute ago   Running             kube-proxy                1                   1eb4f441185dd       kube-proxy-ppwdr
	f1364de120bc0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   ddcd3155983f6       storage-provisioner
	fcebe5779fdd2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   ecb6cd672ae43       etcd-multinode-184661
	26bedd6174dc7       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      About a minute ago   Running             kube-scheduler            1                   075e262c83d19       kube-scheduler-multinode-184661
	c7439ac3eb623       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      About a minute ago   Running             kube-apiserver            1                   688ba1e228cc8       kube-apiserver-multinode-184661
	daf43bf6dcaea       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      About a minute ago   Running             kube-controller-manager   1                   ceadb6ca914a6       kube-controller-manager-multinode-184661
	8b6f1745f7201       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   0399ae962937e       busybox-fc5497c4f-vxz7l
	b6d9ad30a6abf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   bfef010a94ec5       storage-provisioner
	9a2185302f0a5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   d0f2c0c960b98       coredns-7db6d8ff4d-cq58d
	f90a18a533fad       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      7 minutes ago        Exited              kindnet-cni               0                   d62314963eab8       kindnet-p8ckf
	10e806db9236b       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      7 minutes ago        Exited              kube-proxy                0                   f7aa32a824ac9       kube-proxy-ppwdr
	d2ab9ac7bf2fa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   783603acdf170       etcd-multinode-184661
	ee8579204aa24       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      7 minutes ago        Exited              kube-controller-manager   0                   94d8a77051e1b       kube-controller-manager-multinode-184661
	a0183156fe771       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      7 minutes ago        Exited              kube-scheduler            0                   065c6eec4a4ef       kube-scheduler-multinode-184661
	ee987852f4c38       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      7 minutes ago        Exited              kube-apiserver            0                   7dfc9760670b7       kube-apiserver-multinode-184661
	
	
	==> coredns [2201946bdde9c4f2f58055f3a3a3fa8ad6ea7a035046f44fd2b90b46df3c41c2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48531 - 45637 "HINFO IN 9048751508592838614.4917136754263494149. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010150224s
	
	
	==> coredns [9a2185302f0a5e1f651f48b4fce4e2eb903c01ea011e501d316fa2d2e9ee8e7e] <==
	[INFO] 10.244.0.3:49691 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001652887s
	[INFO] 10.244.0.3:54574 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075872s
	[INFO] 10.244.0.3:59082 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000052396s
	[INFO] 10.244.0.3:42277 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001192524s
	[INFO] 10.244.0.3:39993 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059107s
	[INFO] 10.244.0.3:44327 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080517s
	[INFO] 10.244.0.3:59790 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059188s
	[INFO] 10.244.1.2:32863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118916s
	[INFO] 10.244.1.2:57228 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091602s
	[INFO] 10.244.1.2:56855 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087559s
	[INFO] 10.244.1.2:36669 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097908s
	[INFO] 10.244.0.3:49612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169473s
	[INFO] 10.244.0.3:46927 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000741s
	[INFO] 10.244.0.3:38176 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060157s
	[INFO] 10.244.0.3:44079 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099302s
	[INFO] 10.244.1.2:38153 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195374s
	[INFO] 10.244.1.2:33161 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000122803s
	[INFO] 10.244.1.2:54373 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097172s
	[INFO] 10.244.1.2:42054 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000248363s
	[INFO] 10.244.0.3:49980 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140039s
	[INFO] 10.244.0.3:51889 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000074447s
	[INFO] 10.244.0.3:40815 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082516s
	[INFO] 10.244.0.3:50422 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00006287s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-184661
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-184661
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=multinode-184661
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_03T23_33_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:33:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-184661
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:40:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:39:14 +0000   Wed, 03 Jul 2024 23:33:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:39:14 +0000   Wed, 03 Jul 2024 23:33:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:39:14 +0000   Wed, 03 Jul 2024 23:33:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:39:14 +0000   Wed, 03 Jul 2024 23:33:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.57
	  Hostname:    multinode-184661
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a4bbb9aa92f246c490df3bdc3e5ca646
	  System UUID:                a4bbb9aa-92f2-46c4-90df-3bdc3e5ca646
	  Boot ID:                    42f5dd8c-0341-4ae6-8329-019bbb2ea5a7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vxz7l                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 coredns-7db6d8ff4d-cq58d                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m15s
	  kube-system                 etcd-multinode-184661                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m28s
	  kube-system                 kindnet-p8ckf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m15s
	  kube-system                 kube-apiserver-multinode-184661             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-controller-manager-multinode-184661    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-proxy-ppwdr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-scheduler-multinode-184661             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m13s                  kube-proxy       
	  Normal  Starting                 79s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  7m34s (x8 over 7m34s)  kubelet          Node multinode-184661 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m34s (x8 over 7m34s)  kubelet          Node multinode-184661 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m34s (x7 over 7m34s)  kubelet          Node multinode-184661 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m29s                  kubelet          Node multinode-184661 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  7m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    7m29s                  kubelet          Node multinode-184661 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m29s                  kubelet          Node multinode-184661 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m29s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m16s                  node-controller  Node multinode-184661 event: Registered Node multinode-184661 in Controller
	  Normal  NodeReady                7m13s                  kubelet          Node multinode-184661 status is now: NodeReady
	  Normal  Starting                 85s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  84s (x8 over 84s)      kubelet          Node multinode-184661 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s (x8 over 84s)      kubelet          Node multinode-184661 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s (x7 over 84s)      kubelet          Node multinode-184661 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  84s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           68s                    node-controller  Node multinode-184661 event: Registered Node multinode-184661 in Controller
	
	
	Name:               multinode-184661-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-184661-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=multinode-184661
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_03T23_39_54_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:39:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-184661-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:40:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:40:24 +0000   Wed, 03 Jul 2024 23:39:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:40:24 +0000   Wed, 03 Jul 2024 23:39:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:40:24 +0000   Wed, 03 Jul 2024 23:39:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:40:24 +0000   Wed, 03 Jul 2024 23:40:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.115
	  Hostname:    multinode-184661-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 597cfdbb4d4244ff86d4a496dc6c1d59
	  System UUID:                597cfdbb-4d42-44ff-86d4-a496dc6c1d59
	  Boot ID:                    5af469cb-7eaf-4a81-b705-0076bd404a30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fgmqc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kindnet-k29rj              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m44s
	  kube-system                 kube-proxy-jqxqn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m38s                  kube-proxy  
	  Normal  Starting                 37s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m44s (x2 over 6m44s)  kubelet     Node multinode-184661-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m44s (x2 over 6m44s)  kubelet     Node multinode-184661-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m44s (x2 over 6m44s)  kubelet     Node multinode-184661-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m44s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m35s                  kubelet     Node multinode-184661-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  42s (x2 over 42s)      kubelet     Node multinode-184661-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x2 over 42s)      kubelet     Node multinode-184661-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x2 over 42s)      kubelet     Node multinode-184661-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  42s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                33s                    kubelet     Node multinode-184661-m02 status is now: NodeReady
	
	
	Name:               multinode-184661-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-184661-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=multinode-184661
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_03T23_40_23_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:40:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-184661-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:40:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:40:31 +0000   Wed, 03 Jul 2024 23:40:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:40:31 +0000   Wed, 03 Jul 2024 23:40:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:40:31 +0000   Wed, 03 Jul 2024 23:40:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:40:31 +0000   Wed, 03 Jul 2024 23:40:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    multinode-184661-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5d319bcee638440dba16abb81caa86fc
	  System UUID:                5d319bce-e638-440d-ba16-abb81caa86fc
	  Boot ID:                    14ee002c-9e22-42db-837e-7cb4d7eeb153
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-z9csj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m57s
	  kube-system                 kube-proxy-hcctk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m12s                  kube-proxy       
	  Normal  Starting                 5m51s                  kube-proxy       
	  Normal  Starting                 8s                     kube-proxy       
	  Normal  NodeHasSufficientMemory  5m57s (x2 over 5m57s)  kubelet          Node multinode-184661-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m57s (x2 over 5m57s)  kubelet          Node multinode-184661-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m57s (x2 over 5m57s)  kubelet          Node multinode-184661-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m46s                  kubelet          Node multinode-184661-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m16s (x2 over 5m16s)  kubelet          Node multinode-184661-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s (x2 over 5m16s)  kubelet          Node multinode-184661-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m16s (x2 over 5m16s)  kubelet          Node multinode-184661-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m8s                   kubelet          Node multinode-184661-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  13s (x2 over 13s)      kubelet          Node multinode-184661-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x2 over 13s)      kubelet          Node multinode-184661-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x2 over 13s)      kubelet          Node multinode-184661-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                     node-controller  Node multinode-184661-m03 event: Registered Node multinode-184661-m03 in Controller
	  Normal  NodeReady                4s                     kubelet          Node multinode-184661-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.078031] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.168008] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.147281] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.295824] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.278232] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.061879] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.610496] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[Jul 3 23:33] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.615710] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.069084] kauditd_printk_skb: 41 callbacks suppressed
	[ +14.152291] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.156219] kauditd_printk_skb: 21 callbacks suppressed
	[Jul 3 23:34] kauditd_printk_skb: 84 callbacks suppressed
	[Jul 3 23:39] systemd-fstab-generator[2767]: Ignoring "noauto" option for root device
	[  +0.156275] systemd-fstab-generator[2779]: Ignoring "noauto" option for root device
	[  +0.206199] systemd-fstab-generator[2794]: Ignoring "noauto" option for root device
	[  +0.143771] systemd-fstab-generator[2806]: Ignoring "noauto" option for root device
	[  +0.288969] systemd-fstab-generator[2834]: Ignoring "noauto" option for root device
	[  +2.034742] systemd-fstab-generator[2938]: Ignoring "noauto" option for root device
	[  +3.150901] systemd-fstab-generator[3063]: Ignoring "noauto" option for root device
	[  +0.084841] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.011637] kauditd_printk_skb: 82 callbacks suppressed
	[ +11.175458] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.834145] systemd-fstab-generator[3873]: Ignoring "noauto" option for root device
	[ +20.366696] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [d2ab9ac7bf2fab170766c6ebd2e09978a01e95b5439fb5a10321dd0bcfbeb824] <==
	{"level":"info","ts":"2024-07-03T23:33:56.494933Z","caller":"traceutil/trace.go:171","msg":"trace[676682260] transaction","detail":"{read_only:false; response_revision:513; number_of_response:1; }","duration":"213.555707ms","start":"2024-07-03T23:33:56.28137Z","end":"2024-07-03T23:33:56.494926Z","steps":["trace[676682260] 'process raft request'  (duration: 213.184664ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:33:56.495098Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.614757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-07-03T23:33:56.495205Z","caller":"traceutil/trace.go:171","msg":"trace[1948048317] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:514; }","duration":"155.746396ms","start":"2024-07-03T23:33:56.339446Z","end":"2024-07-03T23:33:56.495192Z","steps":["trace[1948048317] 'agreement among raft nodes before linearized reading'  (duration: 155.511754ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T23:33:56.495108Z","caller":"traceutil/trace.go:171","msg":"trace[313011572] transaction","detail":"{read_only:false; response_revision:514; number_of_response:1; }","duration":"195.670851ms","start":"2024-07-03T23:33:56.29943Z","end":"2024-07-03T23:33:56.495101Z","steps":["trace[313011572] 'process raft request'  (duration: 195.300959ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:33:56.786881Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.027427ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17815554558663876303 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:506 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-03T23:33:56.787738Z","caller":"traceutil/trace.go:171","msg":"trace[1896032271] linearizableReadLoop","detail":"{readStateIndex:534; appliedIndex:533; }","duration":"159.076729ms","start":"2024-07-03T23:33:56.628648Z","end":"2024-07-03T23:33:56.787725Z","steps":["trace[1896032271] 'read index received'  (duration: 32.935159ms)","trace[1896032271] 'applied index is now lower than readState.Index'  (duration: 126.139789ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-03T23:33:56.787834Z","caller":"traceutil/trace.go:171","msg":"trace[325940741] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"284.460847ms","start":"2024-07-03T23:33:56.503314Z","end":"2024-07-03T23:33:56.787774Z","steps":["trace[325940741] 'process raft request'  (duration: 158.302655ms)","trace[325940741] 'compare'  (duration: 124.433085ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-03T23:33:56.787909Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.265585ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-184661-m02\" ","response":"range_response_count:1 size:3023"}
	{"level":"info","ts":"2024-07-03T23:33:56.787955Z","caller":"traceutil/trace.go:171","msg":"trace[1356973007] range","detail":"{range_begin:/registry/minions/multinode-184661-m02; range_end:; response_count:1; response_revision:515; }","duration":"159.340832ms","start":"2024-07-03T23:33:56.628608Z","end":"2024-07-03T23:33:56.787949Z","steps":["trace[1356973007] 'agreement among raft nodes before linearized reading'  (duration: 159.251464ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:34:38.537641Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.249954ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17815554558663876616 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-184661-m03.17ded811b4da1088\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-184661-m03.17ded811b4da1088\" value_size:646 lease:8592182521809100596 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-03T23:34:38.538223Z","caller":"traceutil/trace.go:171","msg":"trace[749894674] linearizableReadLoop","detail":"{readStateIndex:631; appliedIndex:630; }","duration":"204.227552ms","start":"2024-07-03T23:34:38.333975Z","end":"2024-07-03T23:34:38.538202Z","steps":["trace[749894674] 'read index received'  (duration: 82.305208ms)","trace[749894674] 'applied index is now lower than readState.Index'  (duration: 121.921063ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-03T23:34:38.538404Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.414878ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-184661-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-03T23:34:38.538449Z","caller":"traceutil/trace.go:171","msg":"trace[1481507871] range","detail":"{range_begin:/registry/minions/multinode-184661-m03; range_end:; response_count:0; response_revision:603; }","duration":"204.492999ms","start":"2024-07-03T23:34:38.33395Z","end":"2024-07-03T23:34:38.538443Z","steps":["trace[1481507871] 'agreement among raft nodes before linearized reading'  (duration: 204.382393ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T23:34:38.538278Z","caller":"traceutil/trace.go:171","msg":"trace[519313792] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"256.793341ms","start":"2024-07-03T23:34:38.281462Z","end":"2024-07-03T23:34:38.538255Z","steps":["trace[519313792] 'process raft request'  (duration: 134.863562ms)","trace[519313792] 'compare'  (duration: 121.089035ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-03T23:34:38.541628Z","caller":"traceutil/trace.go:171","msg":"trace[1705781359] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"187.845623ms","start":"2024-07-03T23:34:38.353769Z","end":"2024-07-03T23:34:38.541614Z","steps":["trace[1705781359] 'process raft request'  (duration: 187.499862ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T23:37:33.516842Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-03T23:37:33.516986Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-184661","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.57:2380"],"advertise-client-urls":["https://192.168.39.57:2379"]}
	{"level":"warn","ts":"2024-07-03T23:37:33.517112Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-03T23:37:33.517257Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-03T23:37:33.59746Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.57:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-03T23:37:33.597496Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.57:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-03T23:37:33.598922Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"79ee2fa200dbf73d","current-leader-member-id":"79ee2fa200dbf73d"}
	{"level":"info","ts":"2024-07-03T23:37:33.60137Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-07-03T23:37:33.60149Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-07-03T23:37:33.601499Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-184661","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.57:2380"],"advertise-client-urls":["https://192.168.39.57:2379"]}
	
	
	==> etcd [fcebe5779fdd23d4747cb284baa2ed6ca455d61ea7f612fca0c321bdf877804e] <==
	{"level":"info","ts":"2024-07-03T23:39:12.276683Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-03T23:39:12.278282Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"79ee2fa200dbf73d","initial-advertise-peer-urls":["https://192.168.39.57:2380"],"listen-peer-urls":["https://192.168.39.57:2380"],"advertise-client-urls":["https://192.168.39.57:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.57:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-03T23:39:12.280853Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-03T23:39:12.276722Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-07-03T23:39:12.28101Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-07-03T23:39:12.282002Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"79ee2fa200dbf73d","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-07-03T23:39:12.282098Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-03T23:39:12.282978Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-03T23:39:12.283027Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-03T23:39:12.282392Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cdb6bc6ece496785","local-member-id":"79ee2fa200dbf73d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-03T23:39:12.283131Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-03T23:39:13.210407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-03T23:39:13.210528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-03T23:39:13.210697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d received MsgPreVoteResp from 79ee2fa200dbf73d at term 2"}
	{"level":"info","ts":"2024-07-03T23:39:13.21078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became candidate at term 3"}
	{"level":"info","ts":"2024-07-03T23:39:13.21087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d received MsgVoteResp from 79ee2fa200dbf73d at term 3"}
	{"level":"info","ts":"2024-07-03T23:39:13.210909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became leader at term 3"}
	{"level":"info","ts":"2024-07-03T23:39:13.211006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 79ee2fa200dbf73d elected leader 79ee2fa200dbf73d at term 3"}
	{"level":"info","ts":"2024-07-03T23:39:13.216897Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"79ee2fa200dbf73d","local-member-attributes":"{Name:multinode-184661 ClientURLs:[https://192.168.39.57:2379]}","request-path":"/0/members/79ee2fa200dbf73d/attributes","cluster-id":"cdb6bc6ece496785","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-03T23:39:13.218845Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-03T23:39:13.219351Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-03T23:39:13.224294Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.57:2379"}
	{"level":"info","ts":"2024-07-03T23:39:13.224397Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-03T23:39:13.225941Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-03T23:39:13.225756Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 23:40:35 up 8 min,  0 users,  load average: 0.31, 0.22, 0.11
	Linux multinode-184661 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [891161ab87387b8d3a8d3cc6ea9038853d506de76ea361ccd02b75b2a8926926] <==
	I0703 23:39:46.663352       1 main.go:250] Node multinode-184661-m03 has CIDR [10.244.3.0/24] 
	I0703 23:39:56.669299       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0703 23:39:56.669558       1 main.go:227] handling current node
	I0703 23:39:56.669656       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0703 23:39:56.669704       1 main.go:250] Node multinode-184661-m02 has CIDR [10.244.1.0/24] 
	I0703 23:39:56.669931       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0703 23:39:56.669976       1 main.go:250] Node multinode-184661-m03 has CIDR [10.244.3.0/24] 
	I0703 23:40:06.683022       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0703 23:40:06.683061       1 main.go:227] handling current node
	I0703 23:40:06.683128       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0703 23:40:06.683135       1 main.go:250] Node multinode-184661-m02 has CIDR [10.244.1.0/24] 
	I0703 23:40:06.683352       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0703 23:40:06.683377       1 main.go:250] Node multinode-184661-m03 has CIDR [10.244.3.0/24] 
	I0703 23:40:16.731555       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0703 23:40:16.731778       1 main.go:227] handling current node
	I0703 23:40:16.731894       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0703 23:40:16.731916       1 main.go:250] Node multinode-184661-m02 has CIDR [10.244.1.0/24] 
	I0703 23:40:16.732103       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0703 23:40:16.732125       1 main.go:250] Node multinode-184661-m03 has CIDR [10.244.3.0/24] 
	I0703 23:40:26.741623       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0703 23:40:26.741732       1 main.go:227] handling current node
	I0703 23:40:26.741760       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0703 23:40:26.741880       1 main.go:250] Node multinode-184661-m02 has CIDR [10.244.1.0/24] 
	I0703 23:40:26.742117       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0703 23:40:26.742241       1 main.go:250] Node multinode-184661-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [f90a18a533fad33f1399c7340fb4993be137dec52290c4c684b07492780a7fb8] <==
	I0703 23:36:52.735205       1 main.go:250] Node multinode-184661-m03 has CIDR [10.244.3.0/24] 
	I0703 23:37:02.747747       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0703 23:37:02.747855       1 main.go:227] handling current node
	I0703 23:37:02.747868       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0703 23:37:02.747873       1 main.go:250] Node multinode-184661-m02 has CIDR [10.244.1.0/24] 
	I0703 23:37:02.748131       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0703 23:37:02.748166       1 main.go:250] Node multinode-184661-m03 has CIDR [10.244.3.0/24] 
	I0703 23:37:12.760601       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0703 23:37:12.760649       1 main.go:227] handling current node
	I0703 23:37:12.760660       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0703 23:37:12.760665       1 main.go:250] Node multinode-184661-m02 has CIDR [10.244.1.0/24] 
	I0703 23:37:12.760767       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0703 23:37:12.760919       1 main.go:250] Node multinode-184661-m03 has CIDR [10.244.3.0/24] 
	I0703 23:37:22.765462       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0703 23:37:22.825920       1 main.go:227] handling current node
	I0703 23:37:22.826056       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0703 23:37:22.826084       1 main.go:250] Node multinode-184661-m02 has CIDR [10.244.1.0/24] 
	I0703 23:37:22.826231       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0703 23:37:22.826255       1 main.go:250] Node multinode-184661-m03 has CIDR [10.244.3.0/24] 
	I0703 23:37:32.839726       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0703 23:37:32.839751       1 main.go:227] handling current node
	I0703 23:37:32.839762       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0703 23:37:32.839766       1 main.go:250] Node multinode-184661-m02 has CIDR [10.244.1.0/24] 
	I0703 23:37:32.840081       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0703 23:37:32.840091       1 main.go:250] Node multinode-184661-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c7439ac3eb6239673de5aae5f7d5e6a3b8fdf51932e38d531f3b298ba52eeed1] <==
	E0703 23:39:14.678417       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0703 23:39:14.701134       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0703 23:39:14.701285       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0703 23:39:14.701320       1 policy_source.go:224] refreshing policies
	I0703 23:39:14.742597       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0703 23:39:14.742912       1 shared_informer.go:320] Caches are synced for configmaps
	I0703 23:39:14.743016       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0703 23:39:14.747981       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0703 23:39:14.749241       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0703 23:39:14.749274       1 aggregator.go:165] initial CRD sync complete...
	I0703 23:39:14.749288       1 autoregister_controller.go:141] Starting autoregister controller
	I0703 23:39:14.749295       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0703 23:39:14.749302       1 cache.go:39] Caches are synced for autoregister controller
	I0703 23:39:14.751192       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0703 23:39:14.751774       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0703 23:39:14.751873       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0703 23:39:14.752844       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0703 23:39:15.587942       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0703 23:39:17.038271       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0703 23:39:17.181215       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0703 23:39:17.201154       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0703 23:39:17.276199       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0703 23:39:17.290459       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0703 23:39:26.982566       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0703 23:39:27.233133       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [ee987852f4c38991f05b95124ddc7975bad40e1c917593d5486b9df017fc399e] <==
	W0703 23:37:33.539294       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551039       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551195       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551257       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551312       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551369       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551404       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551484       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551536       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551590       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551643       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551705       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551756       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.553107       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.554015       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.554081       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.554136       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.554203       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.554261       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0703 23:37:33.554670       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0703 23:37:33.555023       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0703 23:37:33.555209       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0703 23:37:33.555359       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0703 23:37:33.555542       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0703 23:37:33.555765       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-controller-manager [daf43bf6dcaeaabe4341a0ac1083631041bb05c2e480dd6ff1c7a07084a592be] <==
	I0703 23:39:27.672659       1 shared_informer.go:320] Caches are synced for garbage collector
	I0703 23:39:27.672712       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0703 23:39:27.675575       1 shared_informer.go:320] Caches are synced for garbage collector
	I0703 23:39:49.414401       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.416137ms"
	I0703 23:39:49.414558       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.094µs"
	I0703 23:39:49.427005       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.94292ms"
	I0703 23:39:49.427366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="121.68µs"
	I0703 23:39:53.685393       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-184661-m02\" does not exist"
	I0703 23:39:53.699765       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-184661-m02" podCIDRs=["10.244.1.0/24"]
	I0703 23:39:55.576281       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.638µs"
	I0703 23:39:55.585561       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.434µs"
	I0703 23:39:55.634130       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.868µs"
	I0703 23:39:55.642936       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.632µs"
	I0703 23:39:55.647871       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="115.057µs"
	I0703 23:39:57.862828       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="584.811µs"
	I0703 23:40:02.712077       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:40:02.740982       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.662µs"
	I0703 23:40:02.774576       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.194µs"
	I0703 23:40:06.777183       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.292007ms"
	I0703 23:40:06.777313       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.95µs"
	I0703 23:40:21.486534       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:40:22.600287       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-184661-m03\" does not exist"
	I0703 23:40:22.600648       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:40:22.623866       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-184661-m03" podCIDRs=["10.244.2.0/24"]
	I0703 23:40:31.710341       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	
	
	==> kube-controller-manager [ee8579204aa24c1177675a1819ee14b490f0763cc4e5b53882a54efffaca1f95] <==
	I0703 23:33:51.392433       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-184661-m02\" does not exist"
	I0703 23:33:51.411928       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-184661-m02" podCIDRs=["10.244.1.0/24"]
	I0703 23:33:54.824467       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-184661-m02"
	I0703 23:34:00.917058       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:34:03.242916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.776925ms"
	I0703 23:34:03.276086       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.773954ms"
	I0703 23:34:03.296986       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.83842ms"
	I0703 23:34:03.297342       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="107.568µs"
	I0703 23:34:07.212622       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.037004ms"
	I0703 23:34:07.220373       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.039352ms"
	I0703 23:34:07.220511       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.588µs"
	I0703 23:34:07.220574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.448µs"
	I0703 23:34:38.546673       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:34:38.547394       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-184661-m03\" does not exist"
	I0703 23:34:38.556463       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-184661-m03" podCIDRs=["10.244.2.0/24"]
	I0703 23:34:39.850667       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-184661-m03"
	I0703 23:34:49.089289       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:35:18.344650       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:35:19.530660       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:35:19.531278       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-184661-m03\" does not exist"
	I0703 23:35:19.554619       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-184661-m03" podCIDRs=["10.244.3.0/24"]
	I0703 23:35:27.574039       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:36:04.904321       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m03"
	I0703 23:36:04.960697       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.489012ms"
	I0703 23:36:04.960838       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.437µs"
	
	
	==> kube-proxy [06bcfe51368b9bd5dfe4e8fe3d5355dc1bd74ed4738d47ad8abc52817d1e67f0] <==
	I0703 23:39:15.967689       1 server_linux.go:69] "Using iptables proxy"
	I0703 23:39:15.985715       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.57"]
	I0703 23:39:16.082747       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0703 23:39:16.082902       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0703 23:39:16.082920       1 server_linux.go:165] "Using iptables Proxier"
	I0703 23:39:16.088563       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0703 23:39:16.088849       1 server.go:872] "Version info" version="v1.30.2"
	I0703 23:39:16.088879       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:39:16.091534       1 config.go:192] "Starting service config controller"
	I0703 23:39:16.091565       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0703 23:39:16.091594       1 config.go:101] "Starting endpoint slice config controller"
	I0703 23:39:16.091598       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0703 23:39:16.092512       1 config.go:319] "Starting node config controller"
	I0703 23:39:16.092545       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0703 23:39:16.192438       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0703 23:39:16.192508       1 shared_informer.go:320] Caches are synced for service config
	I0703 23:39:16.192767       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [10e806db9236b2467c48e7dd21dd18ba8d0c3c7ae5e49d70d9d2c66e626ab40b] <==
	I0703 23:33:21.607835       1 server_linux.go:69] "Using iptables proxy"
	I0703 23:33:21.621356       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.57"]
	I0703 23:33:21.694959       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0703 23:33:21.695023       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0703 23:33:21.695041       1 server_linux.go:165] "Using iptables Proxier"
	I0703 23:33:21.698693       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0703 23:33:21.698983       1 server.go:872] "Version info" version="v1.30.2"
	I0703 23:33:21.699017       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:33:21.700220       1 config.go:192] "Starting service config controller"
	I0703 23:33:21.700234       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0703 23:33:21.700256       1 config.go:101] "Starting endpoint slice config controller"
	I0703 23:33:21.700260       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0703 23:33:21.702638       1 config.go:319] "Starting node config controller"
	I0703 23:33:21.702652       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0703 23:33:21.801988       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0703 23:33:21.804423       1 shared_informer.go:320] Caches are synced for node config
	I0703 23:33:21.802023       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [26bedd6174dc75e02d07747d7da18b0c7d14dc35e8dc50d05ebbb394800a588a] <==
	I0703 23:39:12.901514       1 serving.go:380] Generated self-signed cert in-memory
	W0703 23:39:14.654550       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0703 23:39:14.654642       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0703 23:39:14.654708       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0703 23:39:14.654716       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0703 23:39:14.679450       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0703 23:39:14.679498       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:39:14.681756       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0703 23:39:14.682003       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0703 23:39:14.682032       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0703 23:39:14.682052       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0703 23:39:14.786953       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a0183156fe7718a9c5da5d808f8acf30a08adb55a0a6e970038a1ed3425ff831] <==
	E0703 23:33:04.399878       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0703 23:33:04.399884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0703 23:33:04.399891       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0703 23:33:04.399898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0703 23:33:04.400198       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0703 23:33:04.399200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0703 23:33:04.401190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0703 23:33:05.238774       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0703 23:33:05.238898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0703 23:33:05.282087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0703 23:33:05.282182       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0703 23:33:05.448944       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0703 23:33:05.449048       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0703 23:33:05.625182       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0703 23:33:05.625280       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0703 23:33:05.639972       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0703 23:33:05.640174       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0703 23:33:05.755142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0703 23:33:05.756232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0703 23:33:05.760358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0703 23:33:05.760468       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0703 23:33:05.956066       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0703 23:33:05.956275       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0703 23:33:09.177922       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0703 23:37:33.514438       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 03 23:39:11 multinode-184661 kubelet[3070]: E0703 23:39:11.966972    3070 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	Jul 03 23:39:12 multinode-184661 kubelet[3070]: I0703 23:39:12.489245    3070 kubelet_node_status.go:73] "Attempting to register node" node="multinode-184661"
	Jul 03 23:39:14 multinode-184661 kubelet[3070]: I0703 23:39:14.727627    3070 kubelet_node_status.go:112] "Node was previously registered" node="multinode-184661"
	Jul 03 23:39:14 multinode-184661 kubelet[3070]: I0703 23:39:14.728201    3070 kubelet_node_status.go:76] "Successfully registered node" node="multinode-184661"
	Jul 03 23:39:14 multinode-184661 kubelet[3070]: I0703 23:39:14.730170    3070 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 03 23:39:14 multinode-184661 kubelet[3070]: I0703 23:39:14.732085    3070 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 03 23:39:14 multinode-184661 kubelet[3070]: I0703 23:39:14.944646    3070 apiserver.go:52] "Watching apiserver"
	Jul 03 23:39:14 multinode-184661 kubelet[3070]: I0703 23:39:14.948511    3070 topology_manager.go:215] "Topology Admit Handler" podUID="28591303-b860-4d2b-9c34-3fb77062ec2d" podNamespace="kube-system" podName="kindnet-p8ckf"
	Jul 03 23:39:14 multinode-184661 kubelet[3070]: I0703 23:39:14.949278    3070 topology_manager.go:215] "Topology Admit Handler" podUID="3eda932b-d2db-481c-894d-6c0ed215c9dd" podNamespace="kube-system" podName="kube-proxy-ppwdr"
	Jul 03 23:39:14 multinode-184661 kubelet[3070]: I0703 23:39:14.949361    3070 topology_manager.go:215] "Topology Admit Handler" podUID="420f9d81-e376-4e42-b8e6-7c5d783a5c6c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cq58d"
	Jul 03 23:39:14 multinode-184661 kubelet[3070]: I0703 23:39:14.949406    3070 topology_manager.go:215] "Topology Admit Handler" podUID="08127733-cc97-4e47-b45f-623a612229c3" podNamespace="kube-system" podName="storage-provisioner"
	Jul 03 23:39:14 multinode-184661 kubelet[3070]: I0703 23:39:14.949466    3070 topology_manager.go:215] "Topology Admit Handler" podUID="4bdcc46c-cbaa-4168-9496-4b9b393dc05d" podNamespace="default" podName="busybox-fc5497c4f-vxz7l"
	Jul 03 23:39:14 multinode-184661 kubelet[3070]: I0703 23:39:14.973609    3070 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 03 23:39:15 multinode-184661 kubelet[3070]: I0703 23:39:15.041656    3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3eda932b-d2db-481c-894d-6c0ed215c9dd-xtables-lock\") pod \"kube-proxy-ppwdr\" (UID: \"3eda932b-d2db-481c-894d-6c0ed215c9dd\") " pod="kube-system/kube-proxy-ppwdr"
	Jul 03 23:39:15 multinode-184661 kubelet[3070]: I0703 23:39:15.042166    3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3eda932b-d2db-481c-894d-6c0ed215c9dd-lib-modules\") pod \"kube-proxy-ppwdr\" (UID: \"3eda932b-d2db-481c-894d-6c0ed215c9dd\") " pod="kube-system/kube-proxy-ppwdr"
	Jul 03 23:39:15 multinode-184661 kubelet[3070]: I0703 23:39:15.042298    3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28591303-b860-4d2b-9c34-3fb77062ec2d-xtables-lock\") pod \"kindnet-p8ckf\" (UID: \"28591303-b860-4d2b-9c34-3fb77062ec2d\") " pod="kube-system/kindnet-p8ckf"
	Jul 03 23:39:15 multinode-184661 kubelet[3070]: I0703 23:39:15.042701    3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/08127733-cc97-4e47-b45f-623a612229c3-tmp\") pod \"storage-provisioner\" (UID: \"08127733-cc97-4e47-b45f-623a612229c3\") " pod="kube-system/storage-provisioner"
	Jul 03 23:39:15 multinode-184661 kubelet[3070]: I0703 23:39:15.042865    3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/28591303-b860-4d2b-9c34-3fb77062ec2d-cni-cfg\") pod \"kindnet-p8ckf\" (UID: \"28591303-b860-4d2b-9c34-3fb77062ec2d\") " pod="kube-system/kindnet-p8ckf"
	Jul 03 23:39:15 multinode-184661 kubelet[3070]: I0703 23:39:15.042931    3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28591303-b860-4d2b-9c34-3fb77062ec2d-lib-modules\") pod \"kindnet-p8ckf\" (UID: \"28591303-b860-4d2b-9c34-3fb77062ec2d\") " pod="kube-system/kindnet-p8ckf"
	Jul 03 23:39:18 multinode-184661 kubelet[3070]: I0703 23:39:18.998201    3070 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 03 23:40:11 multinode-184661 kubelet[3070]: E0703 23:40:11.030226    3070 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:40:11 multinode-184661 kubelet[3070]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:40:11 multinode-184661 kubelet[3070]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:40:11 multinode-184661 kubelet[3070]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:40:11 multinode-184661 kubelet[3070]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 23:40:34.373107   46210 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18998-9396/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
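The "bufio.Scanner: token too long" error in the stderr block above is the generic Go behavior when a single line of the scanned file (here lastStart.txt) exceeds bufio.Scanner's default 64 KiB token limit. Below is a minimal, self-contained sketch of reading such a file with an enlarged scanner buffer; it is illustrative only, not minikube's actual logs code, and the file path is a placeholder.

// Minimal sketch (assumption: not minikube source): read a log file whose
// lines may exceed bufio.Scanner's default 64 KiB limit, which is what the
// "bufio.Scanner: token too long" error above reports.
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // placeholder path for illustration
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Raise the per-line limit from bufio.MaxScanTokenSize (64 KiB) to 1 MiB.
	sc.Buffer(make([]byte, 0, bufio.MaxScanTokenSize), 1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		// A line longer than the chosen maximum still surfaces as bufio.ErrTooLong here.
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}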
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-184661 -n multinode-184661
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-184661 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (305.78s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (144.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 stop
E0703 23:41:17.046829   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0703 23:42:00.410703   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-184661 stop: exit status 82 (2m0.464662896s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-184661-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-184661 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-184661 status: (18.874105305s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-184661 status --alsologtostderr: (3.359912077s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-184661 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-184661 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-184661 -n multinode-184661
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-184661 logs -n 25: (1.534893295s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-184661 ssh -n                                                                 | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-184661 cp multinode-184661-m02:/home/docker/cp-test.txt                       | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661:/home/docker/cp-test_multinode-184661-m02_multinode-184661.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n                                                                 | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n multinode-184661 sudo cat                                       | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | /home/docker/cp-test_multinode-184661-m02_multinode-184661.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-184661 cp multinode-184661-m02:/home/docker/cp-test.txt                       | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m03:/home/docker/cp-test_multinode-184661-m02_multinode-184661-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n                                                                 | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n multinode-184661-m03 sudo cat                                   | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | /home/docker/cp-test_multinode-184661-m02_multinode-184661-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-184661 cp testdata/cp-test.txt                                                | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n                                                                 | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-184661 cp multinode-184661-m03:/home/docker/cp-test.txt                       | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2216583350/001/cp-test_multinode-184661-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n                                                                 | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-184661 cp multinode-184661-m03:/home/docker/cp-test.txt                       | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661:/home/docker/cp-test_multinode-184661-m03_multinode-184661.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n                                                                 | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n multinode-184661 sudo cat                                       | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | /home/docker/cp-test_multinode-184661-m03_multinode-184661.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-184661 cp multinode-184661-m03:/home/docker/cp-test.txt                       | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m02:/home/docker/cp-test_multinode-184661-m03_multinode-184661-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n                                                                 | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | multinode-184661-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-184661 ssh -n multinode-184661-m02 sudo cat                                   | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:34 UTC |
	|         | /home/docker/cp-test_multinode-184661-m03_multinode-184661-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-184661 node stop m03                                                          | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:34 UTC | 03 Jul 24 23:35 UTC |
	| node    | multinode-184661 node start                                                             | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:35 UTC | 03 Jul 24 23:35 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-184661                                                                | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:35 UTC |                     |
	| stop    | -p multinode-184661                                                                     | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:35 UTC |                     |
	| start   | -p multinode-184661                                                                     | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:37 UTC | 03 Jul 24 23:40 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-184661                                                                | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:40 UTC |                     |
	| node    | multinode-184661 node delete                                                            | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:40 UTC | 03 Jul 24 23:40 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-184661 stop                                                                   | multinode-184661 | jenkins | v1.33.1 | 03 Jul 24 23:40 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 23:37:32
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 23:37:32.564597   45138 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:37:32.564765   45138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:37:32.564776   45138 out.go:304] Setting ErrFile to fd 2...
	I0703 23:37:32.564783   45138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:37:32.564975   45138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 23:37:32.565557   45138 out.go:298] Setting JSON to false
	I0703 23:37:32.566502   45138 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4793,"bootTime":1720045060,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:37:32.566578   45138 start.go:139] virtualization: kvm guest
	I0703 23:37:32.568902   45138 out.go:177] * [multinode-184661] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 23:37:32.570497   45138 notify.go:220] Checking for updates...
	I0703 23:37:32.570519   45138 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 23:37:32.572030   45138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:37:32.573423   45138 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:37:32.574765   45138 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:37:32.576061   45138 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 23:37:32.577207   45138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 23:37:32.578687   45138 config.go:182] Loaded profile config "multinode-184661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:37:32.578821   45138 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:37:32.579300   45138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:37:32.579385   45138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:37:32.595960   45138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40333
	I0703 23:37:32.596395   45138 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:37:32.596932   45138 main.go:141] libmachine: Using API Version  1
	I0703 23:37:32.596953   45138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:37:32.597330   45138 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:37:32.597516   45138 main.go:141] libmachine: (multinode-184661) Calling .DriverName
	I0703 23:37:32.633998   45138 out.go:177] * Using the kvm2 driver based on existing profile
	I0703 23:37:32.635045   45138 start.go:297] selected driver: kvm2
	I0703 23:37:32.635069   45138 start.go:901] validating driver "kvm2" against &{Name:multinode-184661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.2 ClusterName:multinode-184661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.44 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:37:32.635241   45138 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 23:37:32.635668   45138 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:37:32.635752   45138 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 23:37:32.651587   45138 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 23:37:32.652565   45138 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 23:37:32.652658   45138 cni.go:84] Creating CNI manager for ""
	I0703 23:37:32.652675   45138 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0703 23:37:32.652752   45138 start.go:340] cluster config:
	{Name:multinode-184661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-184661 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.44 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:37:32.652939   45138 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:37:32.655313   45138 out.go:177] * Starting "multinode-184661" primary control-plane node in "multinode-184661" cluster
	I0703 23:37:32.656500   45138 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:37:32.656539   45138 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0703 23:37:32.656549   45138 cache.go:56] Caching tarball of preloaded images
	I0703 23:37:32.656644   45138 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:37:32.656660   45138 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 23:37:32.656780   45138 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/config.json ...
	I0703 23:37:32.656986   45138 start.go:360] acquireMachinesLock for multinode-184661: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 23:37:32.657029   45138 start.go:364] duration metric: took 24.01µs to acquireMachinesLock for "multinode-184661"
	I0703 23:37:32.657047   45138 start.go:96] Skipping create...Using existing machine configuration
	I0703 23:37:32.657064   45138 fix.go:54] fixHost starting: 
	I0703 23:37:32.657317   45138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:37:32.657351   45138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:37:32.671899   45138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34057
	I0703 23:37:32.672307   45138 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:37:32.672740   45138 main.go:141] libmachine: Using API Version  1
	I0703 23:37:32.672760   45138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:37:32.673017   45138 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:37:32.673178   45138 main.go:141] libmachine: (multinode-184661) Calling .DriverName
	I0703 23:37:32.673321   45138 main.go:141] libmachine: (multinode-184661) Calling .GetState
	I0703 23:37:32.674764   45138 fix.go:112] recreateIfNeeded on multinode-184661: state=Running err=<nil>
	W0703 23:37:32.674794   45138 fix.go:138] unexpected machine state, will restart: <nil>
	I0703 23:37:32.677323   45138 out.go:177] * Updating the running kvm2 "multinode-184661" VM ...
	I0703 23:37:32.678737   45138 machine.go:94] provisionDockerMachine start ...
	I0703 23:37:32.678768   45138 main.go:141] libmachine: (multinode-184661) Calling .DriverName
	I0703 23:37:32.678986   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:37:32.681466   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:32.681924   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:37:32.681948   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:32.682211   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHPort
	I0703 23:37:32.682388   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:37:32.682549   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:37:32.682668   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHUsername
	I0703 23:37:32.682804   45138 main.go:141] libmachine: Using SSH client type: native
	I0703 23:37:32.683029   45138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0703 23:37:32.683042   45138 main.go:141] libmachine: About to run SSH command:
	hostname
	I0703 23:37:32.797416   45138 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-184661
	
	I0703 23:37:32.797455   45138 main.go:141] libmachine: (multinode-184661) Calling .GetMachineName
	I0703 23:37:32.797709   45138 buildroot.go:166] provisioning hostname "multinode-184661"
	I0703 23:37:32.797739   45138 main.go:141] libmachine: (multinode-184661) Calling .GetMachineName
	I0703 23:37:32.797917   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:37:32.800595   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:32.801036   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:37:32.801065   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:32.801238   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHPort
	I0703 23:37:32.801431   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:37:32.801600   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:37:32.801729   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHUsername
	I0703 23:37:32.801922   45138 main.go:141] libmachine: Using SSH client type: native
	I0703 23:37:32.802117   45138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0703 23:37:32.802131   45138 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-184661 && echo "multinode-184661" | sudo tee /etc/hostname
	I0703 23:37:32.927700   45138 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-184661
	
	I0703 23:37:32.927743   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:37:32.930800   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:32.931270   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:37:32.931317   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:32.931459   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHPort
	I0703 23:37:32.931651   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:37:32.931842   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:37:32.931984   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHUsername
	I0703 23:37:32.932144   45138 main.go:141] libmachine: Using SSH client type: native
	I0703 23:37:32.932314   45138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0703 23:37:32.932328   45138 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-184661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-184661/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-184661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 23:37:33.049416   45138 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:37:33.049452   45138 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0703 23:37:33.049484   45138 buildroot.go:174] setting up certificates
	I0703 23:37:33.049495   45138 provision.go:84] configureAuth start
	I0703 23:37:33.049510   45138 main.go:141] libmachine: (multinode-184661) Calling .GetMachineName
	I0703 23:37:33.049757   45138 main.go:141] libmachine: (multinode-184661) Calling .GetIP
	I0703 23:37:33.052587   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:33.052928   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:37:33.052957   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:33.053061   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:37:33.055008   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:33.055321   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:37:33.055353   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:33.055555   45138 provision.go:143] copyHostCerts
	I0703 23:37:33.055583   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:37:33.055639   45138 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0703 23:37:33.055652   45138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:37:33.055737   45138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0703 23:37:33.055847   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:37:33.055869   45138 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0703 23:37:33.055893   45138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:37:33.055942   45138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0703 23:37:33.056010   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:37:33.056026   45138 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0703 23:37:33.056033   45138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:37:33.056056   45138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0703 23:37:33.056145   45138 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.multinode-184661 san=[127.0.0.1 192.168.39.57 localhost minikube multinode-184661]
	I0703 23:37:33.200798   45138 provision.go:177] copyRemoteCerts
	I0703 23:37:33.200852   45138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:37:33.200873   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:37:33.203311   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:33.203679   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:37:33.203721   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:33.203846   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHPort
	I0703 23:37:33.204033   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:37:33.204189   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHUsername
	I0703 23:37:33.204386   45138 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/multinode-184661/id_rsa Username:docker}
	I0703 23:37:33.291213   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0703 23:37:33.291302   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0703 23:37:33.319234   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0703 23:37:33.319305   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0703 23:37:33.346178   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0703 23:37:33.346264   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0703 23:37:33.374256   45138 provision.go:87] duration metric: took 324.746808ms to configureAuth
	I0703 23:37:33.374285   45138 buildroot.go:189] setting minikube options for container-runtime
	I0703 23:37:33.374502   45138 config.go:182] Loaded profile config "multinode-184661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:37:33.374563   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:37:33.377364   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:33.377768   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:37:33.377813   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:37:33.377986   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHPort
	I0703 23:37:33.378198   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:37:33.378362   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:37:33.378506   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHUsername
	I0703 23:37:33.378648   45138 main.go:141] libmachine: Using SSH client type: native
	I0703 23:37:33.378806   45138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0703 23:37:33.378821   45138 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0703 23:39:04.085884   45138 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0703 23:39:04.085909   45138 machine.go:97] duration metric: took 1m31.407157506s to provisionDockerMachine
	I0703 23:39:04.085927   45138 start.go:293] postStartSetup for "multinode-184661" (driver="kvm2")
	I0703 23:39:04.085939   45138 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 23:39:04.085961   45138 main.go:141] libmachine: (multinode-184661) Calling .DriverName
	I0703 23:39:04.086295   45138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 23:39:04.086327   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:39:04.089431   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.089899   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:39:04.089925   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.090104   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHPort
	I0703 23:39:04.090337   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:39:04.090502   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHUsername
	I0703 23:39:04.090645   45138 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/multinode-184661/id_rsa Username:docker}
	I0703 23:39:04.180892   45138 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 23:39:04.185571   45138 command_runner.go:130] > NAME=Buildroot
	I0703 23:39:04.185595   45138 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0703 23:39:04.185600   45138 command_runner.go:130] > ID=buildroot
	I0703 23:39:04.185607   45138 command_runner.go:130] > VERSION_ID=2023.02.9
	I0703 23:39:04.185615   45138 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0703 23:39:04.185674   45138 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 23:39:04.185700   45138 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0703 23:39:04.185760   45138 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0703 23:39:04.185835   45138 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0703 23:39:04.185846   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /etc/ssl/certs/165742.pem
	I0703 23:39:04.185922   45138 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0703 23:39:04.196354   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:39:04.223891   45138 start.go:296] duration metric: took 137.935599ms for postStartSetup
	I0703 23:39:04.223940   45138 fix.go:56] duration metric: took 1m31.566881589s for fixHost
	I0703 23:39:04.223961   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:39:04.226621   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.227121   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:39:04.227165   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.227392   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHPort
	I0703 23:39:04.227611   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:39:04.227794   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:39:04.227944   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHUsername
	I0703 23:39:04.228104   45138 main.go:141] libmachine: Using SSH client type: native
	I0703 23:39:04.228307   45138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0703 23:39:04.228323   45138 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0703 23:39:04.341012   45138 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720049944.319080588
	
	I0703 23:39:04.341029   45138 fix.go:216] guest clock: 1720049944.319080588
	I0703 23:39:04.341036   45138 fix.go:229] Guest: 2024-07-03 23:39:04.319080588 +0000 UTC Remote: 2024-07-03 23:39:04.223944588 +0000 UTC m=+91.695090994 (delta=95.136ms)
	I0703 23:39:04.341061   45138 fix.go:200] guest clock delta is within tolerance: 95.136ms
	I0703 23:39:04.341067   45138 start.go:83] releasing machines lock for "multinode-184661", held for 1m31.684027144s
	I0703 23:39:04.341094   45138 main.go:141] libmachine: (multinode-184661) Calling .DriverName
	I0703 23:39:04.341373   45138 main.go:141] libmachine: (multinode-184661) Calling .GetIP
	I0703 23:39:04.343745   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.344117   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:39:04.344137   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.344347   45138 main.go:141] libmachine: (multinode-184661) Calling .DriverName
	I0703 23:39:04.344827   45138 main.go:141] libmachine: (multinode-184661) Calling .DriverName
	I0703 23:39:04.345002   45138 main.go:141] libmachine: (multinode-184661) Calling .DriverName
	I0703 23:39:04.345098   45138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 23:39:04.345126   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:39:04.345176   45138 ssh_runner.go:195] Run: cat /version.json
	I0703 23:39:04.345199   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:39:04.347741   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.348075   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:39:04.348104   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.348122   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.348252   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHPort
	I0703 23:39:04.348430   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:39:04.348615   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:39:04.348636   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:04.348638   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHUsername
	I0703 23:39:04.348796   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHPort
	I0703 23:39:04.348795   45138 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/multinode-184661/id_rsa Username:docker}
	I0703 23:39:04.348967   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:39:04.349100   45138 main.go:141] libmachine: (multinode-184661) Calling .GetSSHUsername
	I0703 23:39:04.349272   45138 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/multinode-184661/id_rsa Username:docker}
	I0703 23:39:04.457815   45138 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0703 23:39:04.457865   45138 command_runner.go:130] > {"iso_version": "v1.33.1-1719929171-19175", "kicbase_version": "v0.0.44-1719600828-19153", "minikube_version": "v1.33.1", "commit": "0ba4fd2d2d09aa0a2e53d6947bc1076c219d88c0"}
	I0703 23:39:04.458017   45138 ssh_runner.go:195] Run: systemctl --version
	I0703 23:39:04.464261   45138 command_runner.go:130] > systemd 252 (252)
	I0703 23:39:04.464303   45138 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0703 23:39:04.464659   45138 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0703 23:39:04.639192   45138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0703 23:39:04.645690   45138 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0703 23:39:04.645834   45138 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 23:39:04.645894   45138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 23:39:04.656022   45138 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0703 23:39:04.656052   45138 start.go:494] detecting cgroup driver to use...
	I0703 23:39:04.656122   45138 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 23:39:04.673655   45138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 23:39:04.689504   45138 docker.go:217] disabling cri-docker service (if available) ...
	I0703 23:39:04.689561   45138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0703 23:39:04.705030   45138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0703 23:39:04.720290   45138 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0703 23:39:04.872473   45138 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0703 23:39:05.017221   45138 docker.go:233] disabling docker service ...
	I0703 23:39:05.017287   45138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0703 23:39:05.034393   45138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0703 23:39:05.074892   45138 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0703 23:39:05.221361   45138 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0703 23:39:05.366363   45138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0703 23:39:05.381629   45138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 23:39:05.403117   45138 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0703 23:39:05.403663   45138 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0703 23:39:05.403726   45138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:39:05.415804   45138 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0703 23:39:05.415870   45138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:39:05.427027   45138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:39:05.438396   45138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:39:05.449771   45138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 23:39:05.461346   45138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:39:05.472698   45138 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:39:05.485862   45138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:39:05.497292   45138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 23:39:05.507707   45138 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0703 23:39:05.508021   45138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0703 23:39:05.518637   45138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:39:05.663416   45138 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0703 23:39:07.192662   45138 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.529206076s)
	I0703 23:39:07.192699   45138 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0703 23:39:07.192752   45138 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0703 23:39:07.198235   45138 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0703 23:39:07.198268   45138 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0703 23:39:07.198278   45138 command_runner.go:130] > Device: 0,22	Inode: 1337        Links: 1
	I0703 23:39:07.198288   45138 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0703 23:39:07.198296   45138 command_runner.go:130] > Access: 2024-07-03 23:39:07.048780287 +0000
	I0703 23:39:07.198316   45138 command_runner.go:130] > Modify: 2024-07-03 23:39:07.048780287 +0000
	I0703 23:39:07.198328   45138 command_runner.go:130] > Change: 2024-07-03 23:39:07.048780287 +0000
	I0703 23:39:07.198333   45138 command_runner.go:130] >  Birth: -
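	The sed commands above rewrite minikube's cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf before the restart. A minimal sketch of what that drop-in roughly contains after this step, assuming the standard cri-o TOML layout (only the individual keys appear in the log; the [crio.image]/[crio.runtime] section headers are an assumption):

		# sketch of /etc/crio/crio.conf.d/02-crio.conf after the edits logged above (headers assumed)
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.9"

		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]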
	I0703 23:39:07.198370   45138 start.go:562] Will wait 60s for crictl version
	I0703 23:39:07.198426   45138 ssh_runner.go:195] Run: which crictl
	I0703 23:39:07.203041   45138 command_runner.go:130] > /usr/bin/crictl
	I0703 23:39:07.203206   45138 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 23:39:07.239314   45138 command_runner.go:130] > Version:  0.1.0
	I0703 23:39:07.239346   45138 command_runner.go:130] > RuntimeName:  cri-o
	I0703 23:39:07.239354   45138 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0703 23:39:07.239452   45138 command_runner.go:130] > RuntimeApiVersion:  v1
	I0703 23:39:07.240857   45138 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0703 23:39:07.240934   45138 ssh_runner.go:195] Run: crio --version
	I0703 23:39:07.270903   45138 command_runner.go:130] > crio version 1.29.1
	I0703 23:39:07.270930   45138 command_runner.go:130] > Version:        1.29.1
	I0703 23:39:07.270939   45138 command_runner.go:130] > GitCommit:      unknown
	I0703 23:39:07.270945   45138 command_runner.go:130] > GitCommitDate:  unknown
	I0703 23:39:07.270952   45138 command_runner.go:130] > GitTreeState:   clean
	I0703 23:39:07.270961   45138 command_runner.go:130] > BuildDate:      2024-07-02T19:36:05Z
	I0703 23:39:07.270968   45138 command_runner.go:130] > GoVersion:      go1.21.6
	I0703 23:39:07.270974   45138 command_runner.go:130] > Compiler:       gc
	I0703 23:39:07.270982   45138 command_runner.go:130] > Platform:       linux/amd64
	I0703 23:39:07.270989   45138 command_runner.go:130] > Linkmode:       dynamic
	I0703 23:39:07.270996   45138 command_runner.go:130] > BuildTags:      
	I0703 23:39:07.271003   45138 command_runner.go:130] >   containers_image_ostree_stub
	I0703 23:39:07.271011   45138 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0703 23:39:07.271018   45138 command_runner.go:130] >   btrfs_noversion
	I0703 23:39:07.271025   45138 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0703 23:39:07.271036   45138 command_runner.go:130] >   libdm_no_deferred_remove
	I0703 23:39:07.271043   45138 command_runner.go:130] >   seccomp
	I0703 23:39:07.271050   45138 command_runner.go:130] > LDFlags:          unknown
	I0703 23:39:07.271059   45138 command_runner.go:130] > SeccompEnabled:   true
	I0703 23:39:07.271067   45138 command_runner.go:130] > AppArmorEnabled:  false
	I0703 23:39:07.272378   45138 ssh_runner.go:195] Run: crio --version
	I0703 23:39:07.315509   45138 command_runner.go:130] > crio version 1.29.1
	I0703 23:39:07.315538   45138 command_runner.go:130] > Version:        1.29.1
	I0703 23:39:07.315546   45138 command_runner.go:130] > GitCommit:      unknown
	I0703 23:39:07.315553   45138 command_runner.go:130] > GitCommitDate:  unknown
	I0703 23:39:07.315558   45138 command_runner.go:130] > GitTreeState:   clean
	I0703 23:39:07.315570   45138 command_runner.go:130] > BuildDate:      2024-07-02T19:36:05Z
	I0703 23:39:07.315577   45138 command_runner.go:130] > GoVersion:      go1.21.6
	I0703 23:39:07.315582   45138 command_runner.go:130] > Compiler:       gc
	I0703 23:39:07.315589   45138 command_runner.go:130] > Platform:       linux/amd64
	I0703 23:39:07.315595   45138 command_runner.go:130] > Linkmode:       dynamic
	I0703 23:39:07.315603   45138 command_runner.go:130] > BuildTags:      
	I0703 23:39:07.315610   45138 command_runner.go:130] >   containers_image_ostree_stub
	I0703 23:39:07.315617   45138 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0703 23:39:07.315627   45138 command_runner.go:130] >   btrfs_noversion
	I0703 23:39:07.315634   45138 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0703 23:39:07.315641   45138 command_runner.go:130] >   libdm_no_deferred_remove
	I0703 23:39:07.315648   45138 command_runner.go:130] >   seccomp
	I0703 23:39:07.315656   45138 command_runner.go:130] > LDFlags:          unknown
	I0703 23:39:07.315662   45138 command_runner.go:130] > SeccompEnabled:   true
	I0703 23:39:07.315669   45138 command_runner.go:130] > AppArmorEnabled:  false
	I0703 23:39:07.317981   45138 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0703 23:39:07.319291   45138 main.go:141] libmachine: (multinode-184661) Calling .GetIP
	I0703 23:39:07.322040   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:07.322441   45138 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:39:07.322469   45138 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:39:07.322797   45138 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0703 23:39:07.328055   45138 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0703 23:39:07.328422   45138 kubeadm.go:877] updating cluster {Name:multinode-184661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-184661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.44 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0703 23:39:07.328563   45138 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:39:07.328617   45138 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:39:07.364603   45138 command_runner.go:130] > {
	I0703 23:39:07.364630   45138 command_runner.go:130] >   "images": [
	I0703 23:39:07.364636   45138 command_runner.go:130] >     {
	I0703 23:39:07.364649   45138 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0703 23:39:07.364656   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.364667   45138 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0703 23:39:07.364673   45138 command_runner.go:130] >       ],
	I0703 23:39:07.364679   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.364709   45138 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0703 23:39:07.364725   45138 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0703 23:39:07.364731   45138 command_runner.go:130] >       ],
	I0703 23:39:07.364738   45138 command_runner.go:130] >       "size": "65908273",
	I0703 23:39:07.364745   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.364752   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.364764   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.364770   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.364776   45138 command_runner.go:130] >     },
	I0703 23:39:07.364781   45138 command_runner.go:130] >     {
	I0703 23:39:07.364791   45138 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0703 23:39:07.364797   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.364805   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0703 23:39:07.364811   45138 command_runner.go:130] >       ],
	I0703 23:39:07.364818   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.364830   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0703 23:39:07.364841   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0703 23:39:07.364848   45138 command_runner.go:130] >       ],
	I0703 23:39:07.364856   45138 command_runner.go:130] >       "size": "1363676",
	I0703 23:39:07.364865   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.364877   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.364887   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.364895   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.364903   45138 command_runner.go:130] >     },
	I0703 23:39:07.364916   45138 command_runner.go:130] >     {
	I0703 23:39:07.364930   45138 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0703 23:39:07.364939   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.364952   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0703 23:39:07.364962   45138 command_runner.go:130] >       ],
	I0703 23:39:07.364971   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.364989   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0703 23:39:07.365004   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0703 23:39:07.365013   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365020   45138 command_runner.go:130] >       "size": "31470524",
	I0703 23:39:07.365028   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.365035   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.365043   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.365049   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.365056   45138 command_runner.go:130] >     },
	I0703 23:39:07.365061   45138 command_runner.go:130] >     {
	I0703 23:39:07.365072   45138 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0703 23:39:07.365080   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.365089   45138 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0703 23:39:07.365098   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365103   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.365116   45138 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0703 23:39:07.365137   45138 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0703 23:39:07.365145   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365151   45138 command_runner.go:130] >       "size": "61245718",
	I0703 23:39:07.365159   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.365169   45138 command_runner.go:130] >       "username": "nonroot",
	I0703 23:39:07.365174   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.365179   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.365184   45138 command_runner.go:130] >     },
	I0703 23:39:07.365191   45138 command_runner.go:130] >     {
	I0703 23:39:07.365216   45138 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0703 23:39:07.365224   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.365231   45138 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0703 23:39:07.365238   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365243   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.365262   45138 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0703 23:39:07.365274   45138 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0703 23:39:07.365281   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365288   45138 command_runner.go:130] >       "size": "150779692",
	I0703 23:39:07.365295   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.365304   45138 command_runner.go:130] >         "value": "0"
	I0703 23:39:07.365313   45138 command_runner.go:130] >       },
	I0703 23:39:07.365321   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.365329   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.365337   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.365344   45138 command_runner.go:130] >     },
	I0703 23:39:07.365349   45138 command_runner.go:130] >     {
	I0703 23:39:07.365366   45138 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0703 23:39:07.365376   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.365388   45138 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0703 23:39:07.365397   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365406   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.365420   45138 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0703 23:39:07.365434   45138 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0703 23:39:07.365443   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365452   45138 command_runner.go:130] >       "size": "117609954",
	I0703 23:39:07.365461   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.365469   45138 command_runner.go:130] >         "value": "0"
	I0703 23:39:07.365477   45138 command_runner.go:130] >       },
	I0703 23:39:07.365485   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.365494   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.365503   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.365511   45138 command_runner.go:130] >     },
	I0703 23:39:07.365519   45138 command_runner.go:130] >     {
	I0703 23:39:07.365530   45138 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0703 23:39:07.365539   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.365547   45138 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0703 23:39:07.365556   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365569   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.365584   45138 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0703 23:39:07.365598   45138 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0703 23:39:07.365618   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365626   45138 command_runner.go:130] >       "size": "112194888",
	I0703 23:39:07.365630   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.365638   45138 command_runner.go:130] >         "value": "0"
	I0703 23:39:07.365644   45138 command_runner.go:130] >       },
	I0703 23:39:07.365652   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.365661   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.365670   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.365679   45138 command_runner.go:130] >     },
	I0703 23:39:07.365687   45138 command_runner.go:130] >     {
	I0703 23:39:07.365698   45138 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0703 23:39:07.365706   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.365715   45138 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0703 23:39:07.365723   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365731   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.365768   45138 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0703 23:39:07.365783   45138 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0703 23:39:07.365790   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365796   45138 command_runner.go:130] >       "size": "85953433",
	I0703 23:39:07.365804   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.365809   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.365815   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.365821   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.365825   45138 command_runner.go:130] >     },
	I0703 23:39:07.365829   45138 command_runner.go:130] >     {
	I0703 23:39:07.365839   45138 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0703 23:39:07.365844   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.365851   45138 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0703 23:39:07.365856   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365861   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.365870   45138 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0703 23:39:07.365879   45138 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0703 23:39:07.365884   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365889   45138 command_runner.go:130] >       "size": "63051080",
	I0703 23:39:07.365894   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.365900   45138 command_runner.go:130] >         "value": "0"
	I0703 23:39:07.365912   45138 command_runner.go:130] >       },
	I0703 23:39:07.365921   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.365929   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.365936   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.365944   45138 command_runner.go:130] >     },
	I0703 23:39:07.365949   45138 command_runner.go:130] >     {
	I0703 23:39:07.365960   45138 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0703 23:39:07.365968   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.365977   45138 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0703 23:39:07.365985   45138 command_runner.go:130] >       ],
	I0703 23:39:07.365991   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.366003   45138 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0703 23:39:07.366017   45138 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0703 23:39:07.366025   45138 command_runner.go:130] >       ],
	I0703 23:39:07.366032   45138 command_runner.go:130] >       "size": "750414",
	I0703 23:39:07.366041   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.366047   45138 command_runner.go:130] >         "value": "65535"
	I0703 23:39:07.366055   45138 command_runner.go:130] >       },
	I0703 23:39:07.366060   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.366069   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.366078   45138 command_runner.go:130] >       "pinned": true
	I0703 23:39:07.366096   45138 command_runner.go:130] >     }
	I0703 23:39:07.366103   45138 command_runner.go:130] >   ]
	I0703 23:39:07.366106   45138 command_runner.go:130] > }
	I0703 23:39:07.366703   45138 crio.go:514] all images are preloaded for cri-o runtime.
	I0703 23:39:07.366720   45138 crio.go:433] Images already preloaded, skipping extraction
	I0703 23:39:07.366771   45138 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:39:07.401716   45138 command_runner.go:130] > {
	I0703 23:39:07.401742   45138 command_runner.go:130] >   "images": [
	I0703 23:39:07.401748   45138 command_runner.go:130] >     {
	I0703 23:39:07.401759   45138 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0703 23:39:07.401767   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.401776   45138 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0703 23:39:07.401781   45138 command_runner.go:130] >       ],
	I0703 23:39:07.401786   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.401798   45138 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0703 23:39:07.401811   45138 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0703 23:39:07.401820   45138 command_runner.go:130] >       ],
	I0703 23:39:07.401826   45138 command_runner.go:130] >       "size": "65908273",
	I0703 23:39:07.401834   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.401842   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.401860   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.401869   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.401876   45138 command_runner.go:130] >     },
	I0703 23:39:07.401881   45138 command_runner.go:130] >     {
	I0703 23:39:07.401894   45138 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0703 23:39:07.401903   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.401909   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0703 23:39:07.401915   45138 command_runner.go:130] >       ],
	I0703 23:39:07.401919   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.401928   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0703 23:39:07.401937   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0703 23:39:07.401942   45138 command_runner.go:130] >       ],
	I0703 23:39:07.401946   45138 command_runner.go:130] >       "size": "1363676",
	I0703 23:39:07.401952   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.401960   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.401966   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.401970   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.401976   45138 command_runner.go:130] >     },
	I0703 23:39:07.401979   45138 command_runner.go:130] >     {
	I0703 23:39:07.401987   45138 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0703 23:39:07.401991   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.401997   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0703 23:39:07.402012   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402018   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.402037   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0703 23:39:07.402047   45138 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0703 23:39:07.402051   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402055   45138 command_runner.go:130] >       "size": "31470524",
	I0703 23:39:07.402059   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.402065   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.402069   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.402073   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.402077   45138 command_runner.go:130] >     },
	I0703 23:39:07.402080   45138 command_runner.go:130] >     {
	I0703 23:39:07.402086   45138 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0703 23:39:07.402093   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.402098   45138 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0703 23:39:07.402103   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402107   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.402116   45138 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0703 23:39:07.402131   45138 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0703 23:39:07.402137   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402141   45138 command_runner.go:130] >       "size": "61245718",
	I0703 23:39:07.402147   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.402153   45138 command_runner.go:130] >       "username": "nonroot",
	I0703 23:39:07.402159   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.402163   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.402168   45138 command_runner.go:130] >     },
	I0703 23:39:07.402172   45138 command_runner.go:130] >     {
	I0703 23:39:07.402179   45138 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0703 23:39:07.402185   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.402190   45138 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0703 23:39:07.402193   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402197   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.402206   45138 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0703 23:39:07.402215   45138 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0703 23:39:07.402221   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402225   45138 command_runner.go:130] >       "size": "150779692",
	I0703 23:39:07.402301   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.402632   45138 command_runner.go:130] >         "value": "0"
	I0703 23:39:07.402647   45138 command_runner.go:130] >       },
	I0703 23:39:07.402651   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.402656   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.402660   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.402663   45138 command_runner.go:130] >     },
	I0703 23:39:07.402666   45138 command_runner.go:130] >     {
	I0703 23:39:07.402677   45138 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0703 23:39:07.402683   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.402691   45138 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0703 23:39:07.402696   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402712   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.402727   45138 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0703 23:39:07.402748   45138 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0703 23:39:07.402756   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402760   45138 command_runner.go:130] >       "size": "117609954",
	I0703 23:39:07.402767   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.402772   45138 command_runner.go:130] >         "value": "0"
	I0703 23:39:07.402781   45138 command_runner.go:130] >       },
	I0703 23:39:07.402788   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.402803   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.402810   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.402819   45138 command_runner.go:130] >     },
	I0703 23:39:07.402825   45138 command_runner.go:130] >     {
	I0703 23:39:07.402836   45138 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0703 23:39:07.402851   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.402863   45138 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0703 23:39:07.402869   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402879   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.402897   45138 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0703 23:39:07.402913   45138 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0703 23:39:07.402925   45138 command_runner.go:130] >       ],
	I0703 23:39:07.402940   45138 command_runner.go:130] >       "size": "112194888",
	I0703 23:39:07.402948   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.402954   45138 command_runner.go:130] >         "value": "0"
	I0703 23:39:07.402981   45138 command_runner.go:130] >       },
	I0703 23:39:07.402991   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.402998   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.403013   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.403022   45138 command_runner.go:130] >     },
	I0703 23:39:07.403028   45138 command_runner.go:130] >     {
	I0703 23:39:07.403041   45138 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0703 23:39:07.403050   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.403064   45138 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0703 23:39:07.403073   45138 command_runner.go:130] >       ],
	I0703 23:39:07.403082   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.403247   45138 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0703 23:39:07.403300   45138 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0703 23:39:07.403309   45138 command_runner.go:130] >       ],
	I0703 23:39:07.403319   45138 command_runner.go:130] >       "size": "85953433",
	I0703 23:39:07.403329   45138 command_runner.go:130] >       "uid": null,
	I0703 23:39:07.403346   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.403354   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.403366   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.403373   45138 command_runner.go:130] >     },
	I0703 23:39:07.403386   45138 command_runner.go:130] >     {
	I0703 23:39:07.403396   45138 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0703 23:39:07.403413   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.403430   45138 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0703 23:39:07.403436   45138 command_runner.go:130] >       ],
	I0703 23:39:07.403443   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.403461   45138 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0703 23:39:07.403476   45138 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0703 23:39:07.403485   45138 command_runner.go:130] >       ],
	I0703 23:39:07.403496   45138 command_runner.go:130] >       "size": "63051080",
	I0703 23:39:07.403502   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.403509   45138 command_runner.go:130] >         "value": "0"
	I0703 23:39:07.403517   45138 command_runner.go:130] >       },
	I0703 23:39:07.403524   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.403534   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.403545   45138 command_runner.go:130] >       "pinned": false
	I0703 23:39:07.403564   45138 command_runner.go:130] >     },
	I0703 23:39:07.403573   45138 command_runner.go:130] >     {
	I0703 23:39:07.403589   45138 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0703 23:39:07.403624   45138 command_runner.go:130] >       "repoTags": [
	I0703 23:39:07.403640   45138 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0703 23:39:07.403647   45138 command_runner.go:130] >       ],
	I0703 23:39:07.403655   45138 command_runner.go:130] >       "repoDigests": [
	I0703 23:39:07.403671   45138 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0703 23:39:07.403695   45138 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0703 23:39:07.403702   45138 command_runner.go:130] >       ],
	I0703 23:39:07.403709   45138 command_runner.go:130] >       "size": "750414",
	I0703 23:39:07.403721   45138 command_runner.go:130] >       "uid": {
	I0703 23:39:07.403736   45138 command_runner.go:130] >         "value": "65535"
	I0703 23:39:07.403746   45138 command_runner.go:130] >       },
	I0703 23:39:07.403754   45138 command_runner.go:130] >       "username": "",
	I0703 23:39:07.403764   45138 command_runner.go:130] >       "spec": null,
	I0703 23:39:07.403775   45138 command_runner.go:130] >       "pinned": true
	I0703 23:39:07.403784   45138 command_runner.go:130] >     }
	I0703 23:39:07.403795   45138 command_runner.go:130] >   ]
	I0703 23:39:07.403801   45138 command_runner.go:130] > }
	I0703 23:39:07.404217   45138 crio.go:514] all images are preloaded for cri-o runtime.
	I0703 23:39:07.404234   45138 cache_images.go:84] Images are preloaded, skipping loading
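The "all images are preloaded" / "Images are preloaded, skipping loading" decisions above are driven by the JSON that `sudo crictl images --output json` just emitted. A minimal sketch of that check in Go follows — this is not minikube's actual crio.go/cache_images.go code; the struct fields simply mirror the JSON shown above, and the sample input and required-tag list are illustrative only.

package main

import (
	"encoding/json"
	"fmt"
)

// imageList mirrors the JSON shape printed by `sudo crictl images --output json` above.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Pinned   bool     `json:"pinned"`
	} `json:"images"`
}

// allPreloaded reports whether every required repo tag appears in the crictl output.
func allPreloaded(crictlJSON []byte, required []string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(crictlJSON, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, tag := range required {
		if !have[tag] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Illustrative input only; in the run above the full v1.30.2 image set matched.
	sample := []byte(`{"images":[{"id":"e6f18","repoTags":["registry.k8s.io/pause:3.9"],"pinned":true}]}`)
	ok, err := allPreloaded(sample, []string{"registry.k8s.io/pause:3.9"})
	fmt.Println(ok, err)
}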
	I0703 23:39:07.404241   45138 kubeadm.go:928] updating node { 192.168.39.57 8443 v1.30.2 crio true true} ...
	I0703 23:39:07.404351   45138 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-184661 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-184661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
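The ExecStart line above is assembled from the node settings in that config struct (binary path for v1.30.2, --hostname-override=multinode-184661, --node-ip=192.168.39.57). A small Go sketch of rendering such a flag line with text/template — illustrative only, not minikube's actual kubeadm.go template:

package main

import (
	"os"
	"text/template"
)

type nodeOpts struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

// kubeletLine reproduces the flag layout seen in the log above.
const kubeletLine = `ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}` + "\n"

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletLine))
	// Values taken from the log above.
	if err := tmpl.Execute(os.Stdout, nodeOpts{
		KubernetesVersion: "v1.30.2",
		NodeName:          "multinode-184661",
		NodeIP:            "192.168.39.57",
	}); err != nil {
		panic(err)
	}
}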
	I0703 23:39:07.404413   45138 ssh_runner.go:195] Run: crio config
	I0703 23:39:07.448562   45138 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0703 23:39:07.448590   45138 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0703 23:39:07.448597   45138 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0703 23:39:07.448600   45138 command_runner.go:130] > #
	I0703 23:39:07.448608   45138 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0703 23:39:07.448613   45138 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0703 23:39:07.448619   45138 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0703 23:39:07.448631   45138 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0703 23:39:07.448635   45138 command_runner.go:130] > # reload'.
	I0703 23:39:07.448640   45138 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0703 23:39:07.448646   45138 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0703 23:39:07.448652   45138 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0703 23:39:07.448658   45138 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0703 23:39:07.448661   45138 command_runner.go:130] > [crio]
	I0703 23:39:07.448666   45138 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0703 23:39:07.448674   45138 command_runner.go:130] > # containers images, in this directory.
	I0703 23:39:07.448776   45138 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0703 23:39:07.448806   45138 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0703 23:39:07.448931   45138 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0703 23:39:07.448955   45138 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I0703 23:39:07.449193   45138 command_runner.go:130] > # imagestore = ""
	I0703 23:39:07.449210   45138 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0703 23:39:07.449219   45138 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0703 23:39:07.449331   45138 command_runner.go:130] > storage_driver = "overlay"
	I0703 23:39:07.449348   45138 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0703 23:39:07.449357   45138 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0703 23:39:07.449364   45138 command_runner.go:130] > storage_option = [
	I0703 23:39:07.449541   45138 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0703 23:39:07.449589   45138 command_runner.go:130] > ]
	I0703 23:39:07.449603   45138 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0703 23:39:07.449616   45138 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0703 23:39:07.449889   45138 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0703 23:39:07.449904   45138 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0703 23:39:07.449915   45138 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0703 23:39:07.449923   45138 command_runner.go:130] > # always happen on a node reboot
	I0703 23:39:07.450205   45138 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0703 23:39:07.450225   45138 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0703 23:39:07.450235   45138 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0703 23:39:07.450243   45138 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0703 23:39:07.450372   45138 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0703 23:39:07.450387   45138 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0703 23:39:07.450399   45138 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0703 23:39:07.450720   45138 command_runner.go:130] > # internal_wipe = true
	I0703 23:39:07.450739   45138 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0703 23:39:07.450748   45138 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0703 23:39:07.451006   45138 command_runner.go:130] > # internal_repair = false
	I0703 23:39:07.451019   45138 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0703 23:39:07.451029   45138 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0703 23:39:07.451038   45138 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0703 23:39:07.451288   45138 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0703 23:39:07.451303   45138 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0703 23:39:07.451309   45138 command_runner.go:130] > [crio.api]
	I0703 23:39:07.451317   45138 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0703 23:39:07.451679   45138 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0703 23:39:07.451708   45138 command_runner.go:130] > # IP address on which the stream server will listen.
	I0703 23:39:07.451961   45138 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0703 23:39:07.451978   45138 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0703 23:39:07.451986   45138 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0703 23:39:07.452185   45138 command_runner.go:130] > # stream_port = "0"
	I0703 23:39:07.452199   45138 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0703 23:39:07.452414   45138 command_runner.go:130] > # stream_enable_tls = false
	I0703 23:39:07.452429   45138 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0703 23:39:07.452662   45138 command_runner.go:130] > # stream_idle_timeout = ""
	I0703 23:39:07.452677   45138 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0703 23:39:07.452686   45138 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0703 23:39:07.452692   45138 command_runner.go:130] > # minutes.
	I0703 23:39:07.452854   45138 command_runner.go:130] > # stream_tls_cert = ""
	I0703 23:39:07.452866   45138 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0703 23:39:07.452871   45138 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0703 23:39:07.453018   45138 command_runner.go:130] > # stream_tls_key = ""
	I0703 23:39:07.453033   45138 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0703 23:39:07.453042   45138 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0703 23:39:07.453066   45138 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0703 23:39:07.453226   45138 command_runner.go:130] > # stream_tls_ca = ""
	I0703 23:39:07.453242   45138 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0703 23:39:07.453383   45138 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0703 23:39:07.453399   45138 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0703 23:39:07.453689   45138 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
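For reference, the configured limit of 16777216 bytes is 16 * 1024 * 1024 (16 MiB), applied to both send and receive, whereas the built-in fallback mentioned in the comments, 80 * 1024 * 1024, would be 83886080 bytes (80 MiB).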
	I0703 23:39:07.453707   45138 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0703 23:39:07.453718   45138 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0703 23:39:07.453724   45138 command_runner.go:130] > [crio.runtime]
	I0703 23:39:07.453735   45138 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0703 23:39:07.453746   45138 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0703 23:39:07.453757   45138 command_runner.go:130] > # "nofile=1024:2048"
	I0703 23:39:07.453767   45138 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0703 23:39:07.453820   45138 command_runner.go:130] > # default_ulimits = [
	I0703 23:39:07.453975   45138 command_runner.go:130] > # ]
	I0703 23:39:07.453992   45138 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0703 23:39:07.454289   45138 command_runner.go:130] > # no_pivot = false
	I0703 23:39:07.454303   45138 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0703 23:39:07.454313   45138 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0703 23:39:07.454628   45138 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0703 23:39:07.454655   45138 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0703 23:39:07.454664   45138 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0703 23:39:07.454676   45138 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0703 23:39:07.454687   45138 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0703 23:39:07.454695   45138 command_runner.go:130] > # Cgroup setting for conmon
	I0703 23:39:07.454706   45138 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0703 23:39:07.454716   45138 command_runner.go:130] > conmon_cgroup = "pod"
	I0703 23:39:07.454726   45138 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0703 23:39:07.454737   45138 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0703 23:39:07.454747   45138 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0703 23:39:07.454756   45138 command_runner.go:130] > conmon_env = [
	I0703 23:39:07.454763   45138 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0703 23:39:07.454770   45138 command_runner.go:130] > ]
	I0703 23:39:07.454778   45138 command_runner.go:130] > # Additional environment variables to set for all the
	I0703 23:39:07.454790   45138 command_runner.go:130] > # containers. These are overridden if set in the
	I0703 23:39:07.454801   45138 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0703 23:39:07.454809   45138 command_runner.go:130] > # default_env = [
	I0703 23:39:07.454814   45138 command_runner.go:130] > # ]
	I0703 23:39:07.454827   45138 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0703 23:39:07.454839   45138 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0703 23:39:07.454847   45138 command_runner.go:130] > # selinux = false
	I0703 23:39:07.454853   45138 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0703 23:39:07.454866   45138 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0703 23:39:07.454879   45138 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0703 23:39:07.454888   45138 command_runner.go:130] > # seccomp_profile = ""
	I0703 23:39:07.454896   45138 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0703 23:39:07.454908   45138 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0703 23:39:07.454920   45138 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0703 23:39:07.454930   45138 command_runner.go:130] > # which might increase security.
	I0703 23:39:07.454937   45138 command_runner.go:130] > # This option is currently deprecated,
	I0703 23:39:07.454948   45138 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0703 23:39:07.454959   45138 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0703 23:39:07.454970   45138 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0703 23:39:07.454984   45138 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0703 23:39:07.454994   45138 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0703 23:39:07.455006   45138 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0703 23:39:07.455013   45138 command_runner.go:130] > # This option supports live configuration reload.
	I0703 23:39:07.455028   45138 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0703 23:39:07.455040   45138 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0703 23:39:07.455049   45138 command_runner.go:130] > # the cgroup blockio controller.
	I0703 23:39:07.455057   45138 command_runner.go:130] > # blockio_config_file = ""
	I0703 23:39:07.455068   45138 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0703 23:39:07.455078   45138 command_runner.go:130] > # blockio parameters.
	I0703 23:39:07.455086   45138 command_runner.go:130] > # blockio_reload = false
	I0703 23:39:07.455099   45138 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0703 23:39:07.455105   45138 command_runner.go:130] > # irqbalance daemon.
	I0703 23:39:07.455111   45138 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0703 23:39:07.455123   45138 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0703 23:39:07.455137   45138 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0703 23:39:07.455151   45138 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0703 23:39:07.455160   45138 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0703 23:39:07.455174   45138 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0703 23:39:07.455183   45138 command_runner.go:130] > # This option supports live configuration reload.
	I0703 23:39:07.455190   45138 command_runner.go:130] > # rdt_config_file = ""
	I0703 23:39:07.455199   45138 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0703 23:39:07.455210   45138 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0703 23:39:07.455247   45138 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0703 23:39:07.455258   45138 command_runner.go:130] > # separate_pull_cgroup = ""
	I0703 23:39:07.455268   45138 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0703 23:39:07.455278   45138 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0703 23:39:07.455288   45138 command_runner.go:130] > # will be added.
	I0703 23:39:07.455295   45138 command_runner.go:130] > # default_capabilities = [
	I0703 23:39:07.455302   45138 command_runner.go:130] > # 	"CHOWN",
	I0703 23:39:07.455309   45138 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0703 23:39:07.455317   45138 command_runner.go:130] > # 	"FSETID",
	I0703 23:39:07.455323   45138 command_runner.go:130] > # 	"FOWNER",
	I0703 23:39:07.455328   45138 command_runner.go:130] > # 	"SETGID",
	I0703 23:39:07.455333   45138 command_runner.go:130] > # 	"SETUID",
	I0703 23:39:07.455339   45138 command_runner.go:130] > # 	"SETPCAP",
	I0703 23:39:07.455343   45138 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0703 23:39:07.455349   45138 command_runner.go:130] > # 	"KILL",
	I0703 23:39:07.455357   45138 command_runner.go:130] > # ]
	I0703 23:39:07.455369   45138 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0703 23:39:07.455381   45138 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0703 23:39:07.455387   45138 command_runner.go:130] > # add_inheritable_capabilities = false
	I0703 23:39:07.455400   45138 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0703 23:39:07.455411   45138 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0703 23:39:07.455421   45138 command_runner.go:130] > default_sysctls = [
	I0703 23:39:07.455432   45138 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0703 23:39:07.455439   45138 command_runner.go:130] > ]
	I0703 23:39:07.455447   45138 command_runner.go:130] > # List of devices on the host that a
	I0703 23:39:07.455459   45138 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0703 23:39:07.455465   45138 command_runner.go:130] > # allowed_devices = [
	I0703 23:39:07.455471   45138 command_runner.go:130] > # 	"/dev/fuse",
	I0703 23:39:07.455474   45138 command_runner.go:130] > # ]
	I0703 23:39:07.455479   45138 command_runner.go:130] > # List of additional devices, specified as
	I0703 23:39:07.455485   45138 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0703 23:39:07.455494   45138 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0703 23:39:07.455502   45138 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0703 23:39:07.455506   45138 command_runner.go:130] > # additional_devices = [
	I0703 23:39:07.455510   45138 command_runner.go:130] > # ]
	I0703 23:39:07.455515   45138 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0703 23:39:07.455521   45138 command_runner.go:130] > # cdi_spec_dirs = [
	I0703 23:39:07.455525   45138 command_runner.go:130] > # 	"/etc/cdi",
	I0703 23:39:07.455535   45138 command_runner.go:130] > # 	"/var/run/cdi",
	I0703 23:39:07.455543   45138 command_runner.go:130] > # ]
	I0703 23:39:07.455554   45138 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0703 23:39:07.455566   45138 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0703 23:39:07.455573   45138 command_runner.go:130] > # Defaults to false.
	I0703 23:39:07.455581   45138 command_runner.go:130] > # device_ownership_from_security_context = false
	I0703 23:39:07.455594   45138 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0703 23:39:07.455603   45138 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0703 23:39:07.455608   45138 command_runner.go:130] > # hooks_dir = [
	I0703 23:39:07.455616   45138 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0703 23:39:07.455624   45138 command_runner.go:130] > # ]
	I0703 23:39:07.455633   45138 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0703 23:39:07.455647   45138 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0703 23:39:07.455659   45138 command_runner.go:130] > # its default mounts from the following two files:
	I0703 23:39:07.455666   45138 command_runner.go:130] > #
	I0703 23:39:07.455676   45138 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0703 23:39:07.455689   45138 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0703 23:39:07.455701   45138 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0703 23:39:07.455710   45138 command_runner.go:130] > #
	I0703 23:39:07.455721   45138 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0703 23:39:07.455734   45138 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0703 23:39:07.455746   45138 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0703 23:39:07.455757   45138 command_runner.go:130] > #      only add mounts it finds in this file.
	I0703 23:39:07.455762   45138 command_runner.go:130] > #
	I0703 23:39:07.455771   45138 command_runner.go:130] > # default_mounts_file = ""
	I0703 23:39:07.455779   45138 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0703 23:39:07.455796   45138 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0703 23:39:07.455802   45138 command_runner.go:130] > pids_limit = 1024
	I0703 23:39:07.455814   45138 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0703 23:39:07.455827   45138 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0703 23:39:07.455838   45138 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0703 23:39:07.455853   45138 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0703 23:39:07.455858   45138 command_runner.go:130] > # log_size_max = -1
	I0703 23:39:07.455865   45138 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0703 23:39:07.455883   45138 command_runner.go:130] > # log_to_journald = false
	I0703 23:39:07.455896   45138 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0703 23:39:07.455915   45138 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0703 23:39:07.455926   45138 command_runner.go:130] > # Path to directory for container attach sockets.
	I0703 23:39:07.455936   45138 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0703 23:39:07.455943   45138 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0703 23:39:07.455951   45138 command_runner.go:130] > # bind_mount_prefix = ""
	I0703 23:39:07.455959   45138 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0703 23:39:07.455968   45138 command_runner.go:130] > # read_only = false
	I0703 23:39:07.455979   45138 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0703 23:39:07.455991   45138 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0703 23:39:07.456001   45138 command_runner.go:130] > # live configuration reload.
	I0703 23:39:07.456007   45138 command_runner.go:130] > # log_level = "info"
	I0703 23:39:07.456017   45138 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0703 23:39:07.456028   45138 command_runner.go:130] > # This option supports live configuration reload.
	I0703 23:39:07.456036   45138 command_runner.go:130] > # log_filter = ""
	I0703 23:39:07.456047   45138 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0703 23:39:07.456061   45138 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0703 23:39:07.456067   45138 command_runner.go:130] > # separated by comma.
	I0703 23:39:07.456082   45138 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0703 23:39:07.456091   45138 command_runner.go:130] > # uid_mappings = ""
	I0703 23:39:07.456101   45138 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0703 23:39:07.456113   45138 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0703 23:39:07.456122   45138 command_runner.go:130] > # separated by comma.
	I0703 23:39:07.456134   45138 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0703 23:39:07.456143   45138 command_runner.go:130] > # gid_mappings = ""
	I0703 23:39:07.456153   45138 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0703 23:39:07.456164   45138 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0703 23:39:07.456179   45138 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0703 23:39:07.456193   45138 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0703 23:39:07.456199   45138 command_runner.go:130] > # minimum_mappable_uid = -1
	I0703 23:39:07.456208   45138 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0703 23:39:07.456221   45138 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0703 23:39:07.456234   45138 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0703 23:39:07.456249   45138 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0703 23:39:07.456258   45138 command_runner.go:130] > # minimum_mappable_gid = -1
	I0703 23:39:07.456268   45138 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0703 23:39:07.456281   45138 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0703 23:39:07.456301   45138 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0703 23:39:07.456311   45138 command_runner.go:130] > # ctr_stop_timeout = 30
	I0703 23:39:07.456320   45138 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0703 23:39:07.456329   45138 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0703 23:39:07.456334   45138 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0703 23:39:07.456341   45138 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0703 23:39:07.456345   45138 command_runner.go:130] > drop_infra_ctr = false
	I0703 23:39:07.456355   45138 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0703 23:39:07.456366   45138 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0703 23:39:07.456380   45138 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0703 23:39:07.456388   45138 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0703 23:39:07.456397   45138 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0703 23:39:07.456409   45138 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0703 23:39:07.456421   45138 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0703 23:39:07.456510   45138 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0703 23:39:07.456522   45138 command_runner.go:130] > # shared_cpuset = ""
	I0703 23:39:07.456534   45138 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0703 23:39:07.456545   45138 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0703 23:39:07.456552   45138 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0703 23:39:07.456568   45138 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0703 23:39:07.456577   45138 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0703 23:39:07.456587   45138 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0703 23:39:07.456599   45138 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0703 23:39:07.456609   45138 command_runner.go:130] > # enable_criu_support = false
	I0703 23:39:07.456617   45138 command_runner.go:130] > # Enable/disable the generation of the container,
	I0703 23:39:07.456636   45138 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0703 23:39:07.456647   45138 command_runner.go:130] > # enable_pod_events = false
	I0703 23:39:07.456657   45138 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0703 23:39:07.456681   45138 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0703 23:39:07.456691   45138 command_runner.go:130] > # default_runtime = "runc"
	I0703 23:39:07.456699   45138 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0703 23:39:07.456709   45138 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0703 23:39:07.456725   45138 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0703 23:39:07.456737   45138 command_runner.go:130] > # creation as a file is not desired either.
	I0703 23:39:07.456770   45138 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0703 23:39:07.456788   45138 command_runner.go:130] > # the hostname is being managed dynamically.
	I0703 23:39:07.456799   45138 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0703 23:39:07.456806   45138 command_runner.go:130] > # ]
	I0703 23:39:07.456818   45138 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0703 23:39:07.456831   45138 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0703 23:39:07.456843   45138 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0703 23:39:07.456853   45138 command_runner.go:130] > # Each entry in the table should follow the format:
	I0703 23:39:07.456861   45138 command_runner.go:130] > #
	I0703 23:39:07.456868   45138 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0703 23:39:07.456878   45138 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0703 23:39:07.456936   45138 command_runner.go:130] > # runtime_type = "oci"
	I0703 23:39:07.456947   45138 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0703 23:39:07.456958   45138 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0703 23:39:07.456966   45138 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0703 23:39:07.456973   45138 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0703 23:39:07.456979   45138 command_runner.go:130] > # monitor_env = []
	I0703 23:39:07.456989   45138 command_runner.go:130] > # privileged_without_host_devices = false
	I0703 23:39:07.457000   45138 command_runner.go:130] > # allowed_annotations = []
	I0703 23:39:07.457011   45138 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0703 23:39:07.457019   45138 command_runner.go:130] > # Where:
	I0703 23:39:07.457030   45138 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0703 23:39:07.457042   45138 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0703 23:39:07.457054   45138 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0703 23:39:07.457063   45138 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0703 23:39:07.457071   45138 command_runner.go:130] > #   in $PATH.
	I0703 23:39:07.457093   45138 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0703 23:39:07.457106   45138 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0703 23:39:07.457127   45138 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0703 23:39:07.457136   45138 command_runner.go:130] > #   state.
	I0703 23:39:07.457143   45138 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0703 23:39:07.457154   45138 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0703 23:39:07.457173   45138 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0703 23:39:07.457185   45138 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0703 23:39:07.457197   45138 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0703 23:39:07.457210   45138 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0703 23:39:07.457221   45138 command_runner.go:130] > #   The currently recognized values are:
	I0703 23:39:07.457237   45138 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0703 23:39:07.457255   45138 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0703 23:39:07.457268   45138 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0703 23:39:07.457280   45138 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0703 23:39:07.457295   45138 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0703 23:39:07.457308   45138 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0703 23:39:07.457317   45138 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0703 23:39:07.457329   45138 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0703 23:39:07.457343   45138 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0703 23:39:07.457356   45138 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0703 23:39:07.457366   45138 command_runner.go:130] > #   deprecated option "conmon".
	I0703 23:39:07.457380   45138 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0703 23:39:07.457391   45138 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0703 23:39:07.457405   45138 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0703 23:39:07.457415   45138 command_runner.go:130] > #   should be moved to the container's cgroup
	I0703 23:39:07.457425   45138 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0703 23:39:07.457437   45138 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0703 23:39:07.457451   45138 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0703 23:39:07.457462   45138 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0703 23:39:07.457470   45138 command_runner.go:130] > #
	I0703 23:39:07.457477   45138 command_runner.go:130] > # Using the seccomp notifier feature:
	I0703 23:39:07.457484   45138 command_runner.go:130] > #
	I0703 23:39:07.457492   45138 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0703 23:39:07.457501   45138 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0703 23:39:07.457514   45138 command_runner.go:130] > #
	I0703 23:39:07.457528   45138 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0703 23:39:07.457541   45138 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0703 23:39:07.457549   45138 command_runner.go:130] > #
	I0703 23:39:07.457559   45138 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0703 23:39:07.457567   45138 command_runner.go:130] > # feature.
	I0703 23:39:07.457573   45138 command_runner.go:130] > #
	I0703 23:39:07.457580   45138 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0703 23:39:07.457590   45138 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0703 23:39:07.457603   45138 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0703 23:39:07.457618   45138 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0703 23:39:07.457631   45138 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0703 23:39:07.457644   45138 command_runner.go:130] > #
	I0703 23:39:07.457656   45138 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0703 23:39:07.457665   45138 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0703 23:39:07.457672   45138 command_runner.go:130] > #
	I0703 23:39:07.457681   45138 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0703 23:39:07.457693   45138 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0703 23:39:07.457701   45138 command_runner.go:130] > #
	I0703 23:39:07.457712   45138 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0703 23:39:07.457724   45138 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0703 23:39:07.457732   45138 command_runner.go:130] > # limitation.
	I0703 23:39:07.457740   45138 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0703 23:39:07.457748   45138 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0703 23:39:07.457755   45138 command_runner.go:130] > runtime_type = "oci"
	I0703 23:39:07.457761   45138 command_runner.go:130] > runtime_root = "/run/runc"
	I0703 23:39:07.457769   45138 command_runner.go:130] > runtime_config_path = ""
	I0703 23:39:07.457780   45138 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0703 23:39:07.457788   45138 command_runner.go:130] > monitor_cgroup = "pod"
	I0703 23:39:07.457797   45138 command_runner.go:130] > monitor_exec_cgroup = ""
	I0703 23:39:07.457806   45138 command_runner.go:130] > monitor_env = [
	I0703 23:39:07.457818   45138 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0703 23:39:07.457823   45138 command_runner.go:130] > ]
	I0703 23:39:07.457832   45138 command_runner.go:130] > privileged_without_host_devices = false
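Pulling the seccomp-notifier comments above together, the wiring would look roughly like the sketch below: the runtime handler lists the annotation in allowed_annotations, and the Pod opts in with the same annotation plus restartPolicy: Never. This is a minimal illustration only; the runc table rendered in this run does not set allowed_annotations.

	# crio.conf sketch (illustrative, not from this run's config)
	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/bin/runc"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]
	# Pod side (metadata.annotations):
	#   io.kubernetes.cri-o.seccompNotifierAction: "stop"
	# and spec.restartPolicy: Never, so the kubelet does not immediately
	# restart the container that CRI-O terminates after the 5 second timeout.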
	I0703 23:39:07.457839   45138 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0703 23:39:07.457849   45138 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0703 23:39:07.457862   45138 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0703 23:39:07.457877   45138 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0703 23:39:07.457892   45138 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0703 23:39:07.457904   45138 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0703 23:39:07.457920   45138 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0703 23:39:07.457940   45138 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0703 23:39:07.457953   45138 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0703 23:39:07.457966   45138 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0703 23:39:07.457979   45138 command_runner.go:130] > # Example:
	I0703 23:39:07.457989   45138 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0703 23:39:07.457999   45138 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0703 23:39:07.458011   45138 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0703 23:39:07.458029   45138 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0703 23:39:07.458038   45138 command_runner.go:130] > # cpuset = 0
	I0703 23:39:07.458047   45138 command_runner.go:130] > # cpushares = "0-1"
	I0703 23:39:07.458055   45138 command_runner.go:130] > # Where:
	I0703 23:39:07.458066   45138 command_runner.go:130] > # The workload name is workload-type.
	I0703 23:39:07.458083   45138 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0703 23:39:07.458095   45138 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0703 23:39:07.458107   45138 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0703 23:39:07.458122   45138 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0703 23:39:07.458134   45138 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0703 23:39:07.458146   45138 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0703 23:39:07.458157   45138 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0703 23:39:07.458187   45138 command_runner.go:130] > # Default value is set to true
	I0703 23:39:07.458195   45138 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0703 23:39:07.458202   45138 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0703 23:39:07.458209   45138 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0703 23:39:07.458219   45138 command_runner.go:130] > # Default value is set to 'false'
	I0703 23:39:07.458230   45138 command_runner.go:130] > # disable_hostport_mapping = false
	I0703 23:39:07.458244   45138 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0703 23:39:07.458251   45138 command_runner.go:130] > #
	I0703 23:39:07.458264   45138 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0703 23:39:07.458277   45138 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0703 23:39:07.458286   45138 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0703 23:39:07.458295   45138 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0703 23:39:07.458304   45138 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0703 23:39:07.458310   45138 command_runner.go:130] > [crio.image]
	I0703 23:39:07.458319   45138 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0703 23:39:07.458326   45138 command_runner.go:130] > # default_transport = "docker://"
	I0703 23:39:07.458339   45138 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0703 23:39:07.458349   45138 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0703 23:39:07.458356   45138 command_runner.go:130] > # global_auth_file = ""
	I0703 23:39:07.458363   45138 command_runner.go:130] > # The image used to instantiate infra containers.
	I0703 23:39:07.458368   45138 command_runner.go:130] > # This option supports live configuration reload.
	I0703 23:39:07.458374   45138 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0703 23:39:07.458384   45138 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0703 23:39:07.458394   45138 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0703 23:39:07.458409   45138 command_runner.go:130] > # This option supports live configuration reload.
	I0703 23:39:07.458416   45138 command_runner.go:130] > # pause_image_auth_file = ""
	I0703 23:39:07.458425   45138 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0703 23:39:07.458434   45138 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0703 23:39:07.458444   45138 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0703 23:39:07.458450   45138 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0703 23:39:07.458454   45138 command_runner.go:130] > # pause_command = "/pause"
	I0703 23:39:07.458461   45138 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0703 23:39:07.458471   45138 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0703 23:39:07.458480   45138 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0703 23:39:07.458493   45138 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0703 23:39:07.458503   45138 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0703 23:39:07.458515   45138 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0703 23:39:07.458524   45138 command_runner.go:130] > # pinned_images = [
	I0703 23:39:07.458532   45138 command_runner.go:130] > # ]
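As a concrete reading of the pattern rules above, a pinned_images list could look like the following sketch; the image names are hypothetical and only illustrate the exact, glob, and keyword match styles.

	# pinned_images sketch (hypothetical image names)
	pinned_images = [
		"registry.k8s.io/pause:3.9",   # exact: must match the entire name
		"quay.io/example/agent*",      # glob: wildcard only at the end
		"*critical*",                  # keyword: wildcards on both ends
	]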
	I0703 23:39:07.458539   45138 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0703 23:39:07.458550   45138 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0703 23:39:07.458564   45138 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0703 23:39:07.458577   45138 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0703 23:39:07.458587   45138 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0703 23:39:07.458596   45138 command_runner.go:130] > # signature_policy = ""
	I0703 23:39:07.458604   45138 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0703 23:39:07.458618   45138 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0703 23:39:07.458627   45138 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0703 23:39:07.458639   45138 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0703 23:39:07.458651   45138 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0703 23:39:07.458662   45138 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0703 23:39:07.458677   45138 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0703 23:39:07.458690   45138 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0703 23:39:07.458701   45138 command_runner.go:130] > # changing them here.
	I0703 23:39:07.458709   45138 command_runner.go:130] > # insecure_registries = [
	I0703 23:39:07.458712   45138 command_runner.go:130] > # ]
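Since the comments above defer registry configuration to the system-wide containers-registries.conf(5), a minimal /etc/containers/registries.conf along those lines might look like this sketch; the internal registry name and search list are hypothetical.

	# /etc/containers/registries.conf sketch (hypothetical values)
	unqualified-search-registries = ["docker.io", "quay.io"]

	[[registry]]
	prefix = "registry.example.internal"
	location = "registry.example.internal"
	# Equivalent to listing it under insecure_registries above:
	insecure = true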
	I0703 23:39:07.458725   45138 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0703 23:39:07.458737   45138 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0703 23:39:07.458746   45138 command_runner.go:130] > # image_volumes = "mkdir"
	I0703 23:39:07.458758   45138 command_runner.go:130] > # Temporary directory to use for storing big files
	I0703 23:39:07.458776   45138 command_runner.go:130] > # big_files_temporary_dir = ""
	I0703 23:39:07.458788   45138 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0703 23:39:07.458795   45138 command_runner.go:130] > # CNI plugins.
	I0703 23:39:07.458799   45138 command_runner.go:130] > [crio.network]
	I0703 23:39:07.458810   45138 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0703 23:39:07.458822   45138 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0703 23:39:07.458829   45138 command_runner.go:130] > # cni_default_network = ""
	I0703 23:39:07.458841   45138 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0703 23:39:07.458851   45138 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0703 23:39:07.458863   45138 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0703 23:39:07.458875   45138 command_runner.go:130] > # plugin_dirs = [
	I0703 23:39:07.458882   45138 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0703 23:39:07.458887   45138 command_runner.go:130] > # ]
	I0703 23:39:07.458898   45138 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0703 23:39:07.458912   45138 command_runner.go:130] > [crio.metrics]
	I0703 23:39:07.458923   45138 command_runner.go:130] > # Globally enable or disable metrics support.
	I0703 23:39:07.458931   45138 command_runner.go:130] > enable_metrics = true
	I0703 23:39:07.458942   45138 command_runner.go:130] > # Specify enabled metrics collectors.
	I0703 23:39:07.458952   45138 command_runner.go:130] > # Per default all metrics are enabled.
	I0703 23:39:07.458962   45138 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0703 23:39:07.458970   45138 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0703 23:39:07.458978   45138 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0703 23:39:07.458984   45138 command_runner.go:130] > # metrics_collectors = [
	I0703 23:39:07.458987   45138 command_runner.go:130] > # 	"operations",
	I0703 23:39:07.458997   45138 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0703 23:39:07.459004   45138 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0703 23:39:07.459014   45138 command_runner.go:130] > # 	"operations_errors",
	I0703 23:39:07.459023   45138 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0703 23:39:07.459033   45138 command_runner.go:130] > # 	"image_pulls_by_name",
	I0703 23:39:07.459043   45138 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0703 23:39:07.459052   45138 command_runner.go:130] > # 	"image_pulls_failures",
	I0703 23:39:07.459061   45138 command_runner.go:130] > # 	"image_pulls_successes",
	I0703 23:39:07.459069   45138 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0703 23:39:07.459073   45138 command_runner.go:130] > # 	"image_layer_reuse",
	I0703 23:39:07.459079   45138 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0703 23:39:07.459086   45138 command_runner.go:130] > # 	"containers_oom_total",
	I0703 23:39:07.459099   45138 command_runner.go:130] > # 	"containers_oom",
	I0703 23:39:07.459105   45138 command_runner.go:130] > # 	"processes_defunct",
	I0703 23:39:07.459109   45138 command_runner.go:130] > # 	"operations_total",
	I0703 23:39:07.459117   45138 command_runner.go:130] > # 	"operations_latency_seconds",
	I0703 23:39:07.459121   45138 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0703 23:39:07.459127   45138 command_runner.go:130] > # 	"operations_errors_total",
	I0703 23:39:07.459131   45138 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0703 23:39:07.459137   45138 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0703 23:39:07.459146   45138 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0703 23:39:07.459156   45138 command_runner.go:130] > # 	"image_pulls_success_total",
	I0703 23:39:07.459166   45138 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0703 23:39:07.459180   45138 command_runner.go:130] > # 	"containers_oom_count_total",
	I0703 23:39:07.459191   45138 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0703 23:39:07.459200   45138 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0703 23:39:07.459208   45138 command_runner.go:130] > # ]
	I0703 23:39:07.459215   45138 command_runner.go:130] > # The port on which the metrics server will listen.
	I0703 23:39:07.459221   45138 command_runner.go:130] > # metrics_port = 9090
	I0703 23:39:07.459226   45138 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0703 23:39:07.459232   45138 command_runner.go:130] > # metrics_socket = ""
	I0703 23:39:07.459237   45138 command_runner.go:130] > # The certificate for the secure metrics server.
	I0703 23:39:07.459245   45138 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0703 23:39:07.459254   45138 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0703 23:39:07.459261   45138 command_runner.go:130] > # certificate on any modification event.
	I0703 23:39:07.459265   45138 command_runner.go:130] > # metrics_cert = ""
	I0703 23:39:07.459273   45138 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0703 23:39:07.459284   45138 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0703 23:39:07.459290   45138 command_runner.go:130] > # metrics_key = ""
	I0703 23:39:07.459295   45138 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0703 23:39:07.459302   45138 command_runner.go:130] > [crio.tracing]
	I0703 23:39:07.459307   45138 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0703 23:39:07.459313   45138 command_runner.go:130] > # enable_tracing = false
	I0703 23:39:07.459319   45138 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0703 23:39:07.459325   45138 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0703 23:39:07.459331   45138 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0703 23:39:07.459338   45138 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0703 23:39:07.459342   45138 command_runner.go:130] > # CRI-O NRI configuration.
	I0703 23:39:07.459355   45138 command_runner.go:130] > [crio.nri]
	I0703 23:39:07.459362   45138 command_runner.go:130] > # Globally enable or disable NRI.
	I0703 23:39:07.459368   45138 command_runner.go:130] > # enable_nri = false
	I0703 23:39:07.459378   45138 command_runner.go:130] > # NRI socket to listen on.
	I0703 23:39:07.459390   45138 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0703 23:39:07.459397   45138 command_runner.go:130] > # NRI plugin directory to use.
	I0703 23:39:07.459402   45138 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0703 23:39:07.459411   45138 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0703 23:39:07.459416   45138 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0703 23:39:07.459423   45138 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0703 23:39:07.459429   45138 command_runner.go:130] > # nri_disable_connections = false
	I0703 23:39:07.459435   45138 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0703 23:39:07.459440   45138 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0703 23:39:07.459447   45138 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0703 23:39:07.459452   45138 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0703 23:39:07.459457   45138 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0703 23:39:07.459463   45138 command_runner.go:130] > [crio.stats]
	I0703 23:39:07.459468   45138 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0703 23:39:07.459475   45138 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0703 23:39:07.459480   45138 command_runner.go:130] > # stats_collection_period = 0
	I0703 23:39:07.459512   45138 command_runner.go:130] ! time="2024-07-03 23:39:07.417534095Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0703 23:39:07.459527   45138 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0703 23:39:07.459631   45138 cni.go:84] Creating CNI manager for ""
	I0703 23:39:07.459639   45138 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0703 23:39:07.459647   45138 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0703 23:39:07.459676   45138 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.57 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-184661 NodeName:multinode-184661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.57 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0703 23:39:07.459833   45138 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.57
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-184661"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.57
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.57"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0703 23:39:07.459978   45138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0703 23:39:07.470855   45138 command_runner.go:130] > kubeadm
	I0703 23:39:07.470884   45138 command_runner.go:130] > kubectl
	I0703 23:39:07.470888   45138 command_runner.go:130] > kubelet
	I0703 23:39:07.470910   45138 binaries.go:44] Found k8s binaries, skipping transfer
	I0703 23:39:07.470959   45138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0703 23:39:07.481273   45138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0703 23:39:07.500296   45138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 23:39:07.518906   45138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0703 23:39:07.538056   45138 ssh_runner.go:195] Run: grep 192.168.39.57	control-plane.minikube.internal$ /etc/hosts
	I0703 23:39:07.542340   45138 command_runner.go:130] > 192.168.39.57	control-plane.minikube.internal
	I0703 23:39:07.542432   45138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:39:07.691341   45138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:39:07.708293   45138 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661 for IP: 192.168.39.57
	I0703 23:39:07.708317   45138 certs.go:194] generating shared ca certs ...
	I0703 23:39:07.708341   45138 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:39:07.708484   45138 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0703 23:39:07.708519   45138 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0703 23:39:07.708528   45138 certs.go:256] generating profile certs ...
	I0703 23:39:07.708614   45138 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/client.key
	I0703 23:39:07.708670   45138 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/apiserver.key.5a180a79
	I0703 23:39:07.708703   45138 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/proxy-client.key
	I0703 23:39:07.708713   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0703 23:39:07.708727   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0703 23:39:07.708740   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0703 23:39:07.708752   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0703 23:39:07.708764   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0703 23:39:07.708776   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0703 23:39:07.708789   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0703 23:39:07.708802   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0703 23:39:07.708853   45138 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0703 23:39:07.708880   45138 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0703 23:39:07.708889   45138 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0703 23:39:07.708911   45138 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0703 23:39:07.708933   45138 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0703 23:39:07.708953   45138 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0703 23:39:07.708991   45138 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:39:07.709019   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem -> /usr/share/ca-certificates/16574.pem
	I0703 23:39:07.709032   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> /usr/share/ca-certificates/165742.pem
	I0703 23:39:07.709045   45138 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:39:07.709602   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 23:39:07.737821   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 23:39:07.764513   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 23:39:07.791135   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0703 23:39:07.818515   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0703 23:39:07.845810   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0703 23:39:07.872860   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 23:39:07.899330   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/multinode-184661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0703 23:39:07.925389   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0703 23:39:07.950986   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0703 23:39:07.976269   45138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 23:39:08.001548   45138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0703 23:39:08.019375   45138 ssh_runner.go:195] Run: openssl version
	I0703 23:39:08.025753   45138 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0703 23:39:08.026020   45138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0703 23:39:08.038478   45138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0703 23:39:08.043228   45138 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0703 23:39:08.043320   45138 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0703 23:39:08.043370   45138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0703 23:39:08.049159   45138 command_runner.go:130] > 51391683
	I0703 23:39:08.049239   45138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0703 23:39:08.059769   45138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0703 23:39:08.071828   45138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0703 23:39:08.076593   45138 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0703 23:39:08.076621   45138 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0703 23:39:08.076671   45138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0703 23:39:08.086595   45138 command_runner.go:130] > 3ec20f2e
	I0703 23:39:08.086685   45138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0703 23:39:08.099207   45138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 23:39:08.113422   45138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:39:08.118408   45138 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:39:08.118575   45138 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:39:08.118638   45138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:39:08.124796   45138 command_runner.go:130] > b5213941
	I0703 23:39:08.124909   45138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0703 23:39:08.139045   45138 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 23:39:08.144571   45138 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 23:39:08.144605   45138 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0703 23:39:08.144616   45138 command_runner.go:130] > Device: 253,1	Inode: 1057301     Links: 1
	I0703 23:39:08.144626   45138 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0703 23:39:08.144639   45138 command_runner.go:130] > Access: 2024-07-03 23:32:57.903236619 +0000
	I0703 23:39:08.144647   45138 command_runner.go:130] > Modify: 2024-07-03 23:32:57.903236619 +0000
	I0703 23:39:08.144656   45138 command_runner.go:130] > Change: 2024-07-03 23:32:57.903236619 +0000
	I0703 23:39:08.144665   45138 command_runner.go:130] >  Birth: 2024-07-03 23:32:57.903236619 +0000
	I0703 23:39:08.144762   45138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0703 23:39:08.151314   45138 command_runner.go:130] > Certificate will not expire
	I0703 23:39:08.151376   45138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0703 23:39:08.157569   45138 command_runner.go:130] > Certificate will not expire
	I0703 23:39:08.157796   45138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0703 23:39:08.164119   45138 command_runner.go:130] > Certificate will not expire
	I0703 23:39:08.164208   45138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0703 23:39:08.170821   45138 command_runner.go:130] > Certificate will not expire
	I0703 23:39:08.170898   45138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0703 23:39:08.177846   45138 command_runner.go:130] > Certificate will not expire
	I0703 23:39:08.177940   45138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0703 23:39:08.184167   45138 command_runner.go:130] > Certificate will not expire
	I0703 23:39:08.184341   45138 kubeadm.go:391] StartCluster: {Name:multinode-184661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:multinode-184661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.44 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:39:08.184444   45138 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0703 23:39:08.184516   45138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0703 23:39:08.223325   45138 command_runner.go:130] > b6d9ad30a6abffde41231e7691fdff756a06e9b01043d391c45106aad145e021
	I0703 23:39:08.223355   45138 command_runner.go:130] > 9a2185302f0a5e1f651f48b4fce4e2eb903c01ea011e501d316fa2d2e9ee8e7e
	I0703 23:39:08.223364   45138 command_runner.go:130] > f90a18a533fad33f1399c7340fb4993be137dec52290c4c684b07492780a7fb8
	I0703 23:39:08.223378   45138 command_runner.go:130] > 10e806db9236b2467c48e7dd21dd18ba8d0c3c7ae5e49d70d9d2c66e626ab40b
	I0703 23:39:08.223386   45138 command_runner.go:130] > d2ab9ac7bf2fab170766c6ebd2e09978a01e95b5439fb5a10321dd0bcfbeb824
	I0703 23:39:08.223395   45138 command_runner.go:130] > ee8579204aa24c1177675a1819ee14b490f0763cc4e5b53882a54efffaca1f95
	I0703 23:39:08.223402   45138 command_runner.go:130] > a0183156fe7718a9c5da5d808f8acf30a08adb55a0a6e970038a1ed3425ff831
	I0703 23:39:08.223435   45138 command_runner.go:130] > ee987852f4c38991f05b95124ddc7975bad40e1c917593d5486b9df017fc399e
	I0703 23:39:08.225057   45138 cri.go:89] found id: "b6d9ad30a6abffde41231e7691fdff756a06e9b01043d391c45106aad145e021"
	I0703 23:39:08.225078   45138 cri.go:89] found id: "9a2185302f0a5e1f651f48b4fce4e2eb903c01ea011e501d316fa2d2e9ee8e7e"
	I0703 23:39:08.225083   45138 cri.go:89] found id: "f90a18a533fad33f1399c7340fb4993be137dec52290c4c684b07492780a7fb8"
	I0703 23:39:08.225091   45138 cri.go:89] found id: "10e806db9236b2467c48e7dd21dd18ba8d0c3c7ae5e49d70d9d2c66e626ab40b"
	I0703 23:39:08.225094   45138 cri.go:89] found id: "d2ab9ac7bf2fab170766c6ebd2e09978a01e95b5439fb5a10321dd0bcfbeb824"
	I0703 23:39:08.225097   45138 cri.go:89] found id: "ee8579204aa24c1177675a1819ee14b490f0763cc4e5b53882a54efffaca1f95"
	I0703 23:39:08.225099   45138 cri.go:89] found id: "a0183156fe7718a9c5da5d808f8acf30a08adb55a0a6e970038a1ed3425ff831"
	I0703 23:39:08.225102   45138 cri.go:89] found id: "ee987852f4c38991f05b95124ddc7975bad40e1c917593d5486b9df017fc399e"
	I0703 23:39:08.225104   45138 cri.go:89] found id: ""
	I0703 23:39:08.225150   45138 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.718691099Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720050181718664967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5710c66-e96d-4699-a494-982b8737548b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.719269205Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=45a8c724-fd1a-4371-8178-719af8cb866b name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.719347970Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=45a8c724-fd1a-4371-8178-719af8cb866b name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.719750587Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec4970b0c834a299fd3608d292d7133a223d3e559dcfa4497f702663fe089f6d,PodSandboxId:19c8adc51ae2583daa35a6e1cfcb4d16a3f3de0bab8a0e9d754022d7d20623a2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720049989286136609,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vxz7l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bdcc46c-cbaa-4168-9496-4b9b393dc05d,},Annotations:map[string]string{io.kubernetes.container.hash: 2c1cbbc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891161ab87387b8d3a8d3cc6ea9038853d506de76ea361ccd02b75b2a8926926,PodSandboxId:d69190efd95390c916acc2e31cc816459cef1c3183a2e8ab61f58e55a2103e7f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720049955841493318,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p8ckf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28591303-b860-4d2b-9c34-3fb77062ec2d,},Annotations:map[string]string{io.kubernetes.container.hash: 26ffab31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2201946bdde9c4f2f58055f3a3a3fa8ad6ea7a035046f44fd2b90b46df3c41c2,PodSandboxId:0045d033faf88b35afd755bddb4b12e8767c7d7c95bdf01a76ef44ed0b492417,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720049955635217913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq58d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420f9d81-e376-4e42-b8e6-7c5d783a5c6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c79470c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06bcfe51368b9bd5dfe4e8fe3d5355dc1bd74ed4738d47ad8abc52817d1e67f0,PodSandboxId:1eb4f441185dd8a934b464553d077711504a1ce02d4df97d35edb32b99d04c51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720049955579119350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppwdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eda932b-d2db-481c-894d-6c0ed215c9dd,},Annotations:map[string]
string{io.kubernetes.container.hash: 219e8f9e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1364de120bc0e1595b878575334a5ee6ad2a340b08397156b5ea8224613b657,PodSandboxId:ddcd3155983f6a24f6c50fa27bb93eaca4b6396df51584383d4698d48354f158,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720049955508285971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08127733-cc97-4e47-b45f-623a612229c3,},Annotations:map[string]string{io.ku
bernetes.container.hash: 814fe246,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bedd6174dc75e02d07747d7da18b0c7d14dc35e8dc50d05ebbb394800a588a,PodSandboxId:075e262c83d198c4a40caf9960a5009020b7937b55679a30e54267eaad4f4a79,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720049951721995438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a5dc959ce82b296054e2b82f0b2beb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcebe5779fdd23d4747cb284baa2ed6ca455d61ea7f612fca0c321bdf877804e,PodSandboxId:ecb6cd672ae438c90842b7e07cf16df1364c56faa8db5eca216798fc7fec8ce5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720049951747228834,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e50192c0163f56a4a82944eb5bdd1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c932a75,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf43bf6dcaeaabe4341a0ac1083631041bb05c2e480dd6ff1c7a07084a592be,PodSandboxId:ceadb6ca914a6d9820089a935f90665ef30e13a398d8cb1a07eb3e39fff922bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720049951621615351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f32a704e81145abf5d847077ae0cd5da,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7439ac3eb6239673de5aae5f7d5e6a3b8fdf51932e38d531f3b298ba52eeed1,PodSandboxId:688ba1e228cc832bb87337ed524dd815b76418f0a259a256fb2d8a752fd16ab7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720049951631748130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1ed977e33b7e3f10504ee9c3ffc67c,},Annotations:map[string]string{io.kubernetes.container.hash: e06935f2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b6f1745f72017b7ea3c96bf0adee09adf509561be3b17cfd5c39bce5084d220,PodSandboxId:0399ae962937e974c8ab21ff30b0d620c5d488489d33cd11265b554bf97b5553,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720049646313534518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vxz7l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bdcc46c-cbaa-4168-9496-4b9b393dc05d,},Annotations:map[string]string{io.kubernetes.container.hash: 2c1cbbc4,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d9ad30a6abffde41231e7691fdff756a06e9b01043d391c45106aad145e021,PodSandboxId:bfef010a94ec56edd9ff1e07e4e495b6cec8f3422a2ade57b109bfa7a18e322c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720049603999317678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08127733-cc97-4e47-b45f-623a612229c3,},Annotations:map[string]string{io.kubernetes.container.hash: 814fe246,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2185302f0a5e1f651f48b4fce4e2eb903c01ea011e501d316fa2d2e9ee8e7e,PodSandboxId:d0f2c0c960b988f106e80eee9d7ddb3765f43c3716bea084ae299dadf5370207,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720049603419456445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq58d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420f9d81-e376-4e42-b8e6-7c5d783a5c6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c79470c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90a18a533fad33f1399c7340fb4993be137dec52290c4c684b07492780a7fb8,PodSandboxId:d62314963eab873c101f1e84859396a6d975f1d2e74581c63e26321c8d290d7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720049601583183915,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p8ckf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 28591303-b860-4d2b-9c34-3fb77062ec2d,},Annotations:map[string]string{io.kubernetes.container.hash: 26ffab31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e806db9236b2467c48e7dd21dd18ba8d0c3c7ae5e49d70d9d2c66e626ab40b,PodSandboxId:f7aa32a824ac9218bc628d4b0321e59620011e79df123e4c01e7ef789b1bbaac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720049601156072183,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppwdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eda932b-d2db-481c-894d-
6c0ed215c9dd,},Annotations:map[string]string{io.kubernetes.container.hash: 219e8f9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ab9ac7bf2fab170766c6ebd2e09978a01e95b5439fb5a10321dd0bcfbeb824,PodSandboxId:783603acdf170f727c1f9d9168781d57befa39910fa645bf75425c016988c8b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720049581741963458,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e50192c0163f56a4a82944eb5bdd1d,},Annotations:map[string]string{
io.kubernetes.container.hash: 3c932a75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8579204aa24c1177675a1819ee14b490f0763cc4e5b53882a54efffaca1f95,PodSandboxId:94d8a77051e1b7ab63b093029c110b1dd33eafbd07203eb6e74033e77a497dbc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720049581707130260,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f32a704e81145abf5d847077ae0cd5da,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0183156fe7718a9c5da5d808f8acf30a08adb55a0a6e970038a1ed3425ff831,PodSandboxId:065c6eec4a4ef335822d1a0f8d0b0e706d8b27b521425f5a88b2f4d892137da3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720049581702002675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a5dc959ce82b296054e2b82f0b2beb,},Annotations:map[string]string{io.
kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee987852f4c38991f05b95124ddc7975bad40e1c917593d5486b9df017fc399e,PodSandboxId:7dfc9760670b784e1b3401cfac7aa3ae230898ca96b0aa2371e370926f1981cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720049581674046557,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1ed977e33b7e3f10504ee9c3ffc67c,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e06935f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=45a8c724-fd1a-4371-8178-719af8cb866b name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.763892155Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c51c4120-4c29-4820-8cef-fb558fde9e1b name=/runtime.v1.RuntimeService/Version
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.764004927Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c51c4120-4c29-4820-8cef-fb558fde9e1b name=/runtime.v1.RuntimeService/Version
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.765156220Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=32b1a16f-d1f9-4f73-8a1c-8dc177a982c9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.765601412Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720050181765578624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=32b1a16f-d1f9-4f73-8a1c-8dc177a982c9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.766547994Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9773ca7a-8a0c-4223-baba-34017d5874dc name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.766620221Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9773ca7a-8a0c-4223-baba-34017d5874dc name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.767417666Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec4970b0c834a299fd3608d292d7133a223d3e559dcfa4497f702663fe089f6d,PodSandboxId:19c8adc51ae2583daa35a6e1cfcb4d16a3f3de0bab8a0e9d754022d7d20623a2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720049989286136609,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vxz7l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bdcc46c-cbaa-4168-9496-4b9b393dc05d,},Annotations:map[string]string{io.kubernetes.container.hash: 2c1cbbc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891161ab87387b8d3a8d3cc6ea9038853d506de76ea361ccd02b75b2a8926926,PodSandboxId:d69190efd95390c916acc2e31cc816459cef1c3183a2e8ab61f58e55a2103e7f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720049955841493318,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p8ckf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28591303-b860-4d2b-9c34-3fb77062ec2d,},Annotations:map[string]string{io.kubernetes.container.hash: 26ffab31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2201946bdde9c4f2f58055f3a3a3fa8ad6ea7a035046f44fd2b90b46df3c41c2,PodSandboxId:0045d033faf88b35afd755bddb4b12e8767c7d7c95bdf01a76ef44ed0b492417,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720049955635217913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq58d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420f9d81-e376-4e42-b8e6-7c5d783a5c6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c79470c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06bcfe51368b9bd5dfe4e8fe3d5355dc1bd74ed4738d47ad8abc52817d1e67f0,PodSandboxId:1eb4f441185dd8a934b464553d077711504a1ce02d4df97d35edb32b99d04c51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720049955579119350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppwdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eda932b-d2db-481c-894d-6c0ed215c9dd,},Annotations:map[string]
string{io.kubernetes.container.hash: 219e8f9e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1364de120bc0e1595b878575334a5ee6ad2a340b08397156b5ea8224613b657,PodSandboxId:ddcd3155983f6a24f6c50fa27bb93eaca4b6396df51584383d4698d48354f158,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720049955508285971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08127733-cc97-4e47-b45f-623a612229c3,},Annotations:map[string]string{io.ku
bernetes.container.hash: 814fe246,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bedd6174dc75e02d07747d7da18b0c7d14dc35e8dc50d05ebbb394800a588a,PodSandboxId:075e262c83d198c4a40caf9960a5009020b7937b55679a30e54267eaad4f4a79,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720049951721995438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a5dc959ce82b296054e2b82f0b2beb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcebe5779fdd23d4747cb284baa2ed6ca455d61ea7f612fca0c321bdf877804e,PodSandboxId:ecb6cd672ae438c90842b7e07cf16df1364c56faa8db5eca216798fc7fec8ce5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720049951747228834,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e50192c0163f56a4a82944eb5bdd1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c932a75,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf43bf6dcaeaabe4341a0ac1083631041bb05c2e480dd6ff1c7a07084a592be,PodSandboxId:ceadb6ca914a6d9820089a935f90665ef30e13a398d8cb1a07eb3e39fff922bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720049951621615351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f32a704e81145abf5d847077ae0cd5da,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7439ac3eb6239673de5aae5f7d5e6a3b8fdf51932e38d531f3b298ba52eeed1,PodSandboxId:688ba1e228cc832bb87337ed524dd815b76418f0a259a256fb2d8a752fd16ab7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720049951631748130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1ed977e33b7e3f10504ee9c3ffc67c,},Annotations:map[string]string{io.kubernetes.container.hash: e06935f2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b6f1745f72017b7ea3c96bf0adee09adf509561be3b17cfd5c39bce5084d220,PodSandboxId:0399ae962937e974c8ab21ff30b0d620c5d488489d33cd11265b554bf97b5553,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720049646313534518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vxz7l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bdcc46c-cbaa-4168-9496-4b9b393dc05d,},Annotations:map[string]string{io.kubernetes.container.hash: 2c1cbbc4,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d9ad30a6abffde41231e7691fdff756a06e9b01043d391c45106aad145e021,PodSandboxId:bfef010a94ec56edd9ff1e07e4e495b6cec8f3422a2ade57b109bfa7a18e322c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720049603999317678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08127733-cc97-4e47-b45f-623a612229c3,},Annotations:map[string]string{io.kubernetes.container.hash: 814fe246,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2185302f0a5e1f651f48b4fce4e2eb903c01ea011e501d316fa2d2e9ee8e7e,PodSandboxId:d0f2c0c960b988f106e80eee9d7ddb3765f43c3716bea084ae299dadf5370207,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720049603419456445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq58d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420f9d81-e376-4e42-b8e6-7c5d783a5c6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c79470c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90a18a533fad33f1399c7340fb4993be137dec52290c4c684b07492780a7fb8,PodSandboxId:d62314963eab873c101f1e84859396a6d975f1d2e74581c63e26321c8d290d7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720049601583183915,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p8ckf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 28591303-b860-4d2b-9c34-3fb77062ec2d,},Annotations:map[string]string{io.kubernetes.container.hash: 26ffab31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e806db9236b2467c48e7dd21dd18ba8d0c3c7ae5e49d70d9d2c66e626ab40b,PodSandboxId:f7aa32a824ac9218bc628d4b0321e59620011e79df123e4c01e7ef789b1bbaac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720049601156072183,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppwdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eda932b-d2db-481c-894d-
6c0ed215c9dd,},Annotations:map[string]string{io.kubernetes.container.hash: 219e8f9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ab9ac7bf2fab170766c6ebd2e09978a01e95b5439fb5a10321dd0bcfbeb824,PodSandboxId:783603acdf170f727c1f9d9168781d57befa39910fa645bf75425c016988c8b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720049581741963458,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e50192c0163f56a4a82944eb5bdd1d,},Annotations:map[string]string{
io.kubernetes.container.hash: 3c932a75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8579204aa24c1177675a1819ee14b490f0763cc4e5b53882a54efffaca1f95,PodSandboxId:94d8a77051e1b7ab63b093029c110b1dd33eafbd07203eb6e74033e77a497dbc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720049581707130260,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f32a704e81145abf5d847077ae0cd5da,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0183156fe7718a9c5da5d808f8acf30a08adb55a0a6e970038a1ed3425ff831,PodSandboxId:065c6eec4a4ef335822d1a0f8d0b0e706d8b27b521425f5a88b2f4d892137da3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720049581702002675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a5dc959ce82b296054e2b82f0b2beb,},Annotations:map[string]string{io.
kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee987852f4c38991f05b95124ddc7975bad40e1c917593d5486b9df017fc399e,PodSandboxId:7dfc9760670b784e1b3401cfac7aa3ae230898ca96b0aa2371e370926f1981cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720049581674046557,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1ed977e33b7e3f10504ee9c3ffc67c,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e06935f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9773ca7a-8a0c-4223-baba-34017d5874dc name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.810684306Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5105ef41-b9c9-4e05-9f09-3aa7cbf51d57 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.810762611Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5105ef41-b9c9-4e05-9f09-3aa7cbf51d57 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.812393080Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2886ebba-5067-4419-ab26-ff210a38b7f3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.813003338Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720050181812973902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2886ebba-5067-4419-ab26-ff210a38b7f3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.813494920Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7fadbd7e-b9c0-4c6f-b127-24f86b045ccf name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.813626470Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7fadbd7e-b9c0-4c6f-b127-24f86b045ccf name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.814041010Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec4970b0c834a299fd3608d292d7133a223d3e559dcfa4497f702663fe089f6d,PodSandboxId:19c8adc51ae2583daa35a6e1cfcb4d16a3f3de0bab8a0e9d754022d7d20623a2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720049989286136609,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vxz7l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bdcc46c-cbaa-4168-9496-4b9b393dc05d,},Annotations:map[string]string{io.kubernetes.container.hash: 2c1cbbc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891161ab87387b8d3a8d3cc6ea9038853d506de76ea361ccd02b75b2a8926926,PodSandboxId:d69190efd95390c916acc2e31cc816459cef1c3183a2e8ab61f58e55a2103e7f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720049955841493318,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p8ckf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28591303-b860-4d2b-9c34-3fb77062ec2d,},Annotations:map[string]string{io.kubernetes.container.hash: 26ffab31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2201946bdde9c4f2f58055f3a3a3fa8ad6ea7a035046f44fd2b90b46df3c41c2,PodSandboxId:0045d033faf88b35afd755bddb4b12e8767c7d7c95bdf01a76ef44ed0b492417,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720049955635217913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq58d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420f9d81-e376-4e42-b8e6-7c5d783a5c6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c79470c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06bcfe51368b9bd5dfe4e8fe3d5355dc1bd74ed4738d47ad8abc52817d1e67f0,PodSandboxId:1eb4f441185dd8a934b464553d077711504a1ce02d4df97d35edb32b99d04c51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720049955579119350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppwdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eda932b-d2db-481c-894d-6c0ed215c9dd,},Annotations:map[string]
string{io.kubernetes.container.hash: 219e8f9e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1364de120bc0e1595b878575334a5ee6ad2a340b08397156b5ea8224613b657,PodSandboxId:ddcd3155983f6a24f6c50fa27bb93eaca4b6396df51584383d4698d48354f158,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720049955508285971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08127733-cc97-4e47-b45f-623a612229c3,},Annotations:map[string]string{io.ku
bernetes.container.hash: 814fe246,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bedd6174dc75e02d07747d7da18b0c7d14dc35e8dc50d05ebbb394800a588a,PodSandboxId:075e262c83d198c4a40caf9960a5009020b7937b55679a30e54267eaad4f4a79,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720049951721995438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a5dc959ce82b296054e2b82f0b2beb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcebe5779fdd23d4747cb284baa2ed6ca455d61ea7f612fca0c321bdf877804e,PodSandboxId:ecb6cd672ae438c90842b7e07cf16df1364c56faa8db5eca216798fc7fec8ce5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720049951747228834,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e50192c0163f56a4a82944eb5bdd1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c932a75,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf43bf6dcaeaabe4341a0ac1083631041bb05c2e480dd6ff1c7a07084a592be,PodSandboxId:ceadb6ca914a6d9820089a935f90665ef30e13a398d8cb1a07eb3e39fff922bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720049951621615351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f32a704e81145abf5d847077ae0cd5da,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7439ac3eb6239673de5aae5f7d5e6a3b8fdf51932e38d531f3b298ba52eeed1,PodSandboxId:688ba1e228cc832bb87337ed524dd815b76418f0a259a256fb2d8a752fd16ab7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720049951631748130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1ed977e33b7e3f10504ee9c3ffc67c,},Annotations:map[string]string{io.kubernetes.container.hash: e06935f2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b6f1745f72017b7ea3c96bf0adee09adf509561be3b17cfd5c39bce5084d220,PodSandboxId:0399ae962937e974c8ab21ff30b0d620c5d488489d33cd11265b554bf97b5553,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720049646313534518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vxz7l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bdcc46c-cbaa-4168-9496-4b9b393dc05d,},Annotations:map[string]string{io.kubernetes.container.hash: 2c1cbbc4,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d9ad30a6abffde41231e7691fdff756a06e9b01043d391c45106aad145e021,PodSandboxId:bfef010a94ec56edd9ff1e07e4e495b6cec8f3422a2ade57b109bfa7a18e322c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720049603999317678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08127733-cc97-4e47-b45f-623a612229c3,},Annotations:map[string]string{io.kubernetes.container.hash: 814fe246,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2185302f0a5e1f651f48b4fce4e2eb903c01ea011e501d316fa2d2e9ee8e7e,PodSandboxId:d0f2c0c960b988f106e80eee9d7ddb3765f43c3716bea084ae299dadf5370207,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720049603419456445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq58d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420f9d81-e376-4e42-b8e6-7c5d783a5c6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c79470c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90a18a533fad33f1399c7340fb4993be137dec52290c4c684b07492780a7fb8,PodSandboxId:d62314963eab873c101f1e84859396a6d975f1d2e74581c63e26321c8d290d7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720049601583183915,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p8ckf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 28591303-b860-4d2b-9c34-3fb77062ec2d,},Annotations:map[string]string{io.kubernetes.container.hash: 26ffab31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e806db9236b2467c48e7dd21dd18ba8d0c3c7ae5e49d70d9d2c66e626ab40b,PodSandboxId:f7aa32a824ac9218bc628d4b0321e59620011e79df123e4c01e7ef789b1bbaac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720049601156072183,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppwdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eda932b-d2db-481c-894d-
6c0ed215c9dd,},Annotations:map[string]string{io.kubernetes.container.hash: 219e8f9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ab9ac7bf2fab170766c6ebd2e09978a01e95b5439fb5a10321dd0bcfbeb824,PodSandboxId:783603acdf170f727c1f9d9168781d57befa39910fa645bf75425c016988c8b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720049581741963458,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e50192c0163f56a4a82944eb5bdd1d,},Annotations:map[string]string{
io.kubernetes.container.hash: 3c932a75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8579204aa24c1177675a1819ee14b490f0763cc4e5b53882a54efffaca1f95,PodSandboxId:94d8a77051e1b7ab63b093029c110b1dd33eafbd07203eb6e74033e77a497dbc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720049581707130260,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f32a704e81145abf5d847077ae0cd5da,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0183156fe7718a9c5da5d808f8acf30a08adb55a0a6e970038a1ed3425ff831,PodSandboxId:065c6eec4a4ef335822d1a0f8d0b0e706d8b27b521425f5a88b2f4d892137da3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720049581702002675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a5dc959ce82b296054e2b82f0b2beb,},Annotations:map[string]string{io.
kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee987852f4c38991f05b95124ddc7975bad40e1c917593d5486b9df017fc399e,PodSandboxId:7dfc9760670b784e1b3401cfac7aa3ae230898ca96b0aa2371e370926f1981cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720049581674046557,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1ed977e33b7e3f10504ee9c3ffc67c,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e06935f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7fadbd7e-b9c0-4c6f-b127-24f86b045ccf name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.861767194Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=968bae85-e1eb-463e-b2c7-1cfed6465b37 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.861917966Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=968bae85-e1eb-463e-b2c7-1cfed6465b37 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.862888799Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d89826ed-87f3-4e09-bc7b-c191bc34279a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.863283658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720050181863261814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d89826ed-87f3-4e09-bc7b-c191bc34279a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.863902761Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5584516-f89c-4ab0-b30b-c3efc6256cc1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.863993578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5584516-f89c-4ab0-b30b-c3efc6256cc1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:43:01 multinode-184661 crio[2855]: time="2024-07-03 23:43:01.864437719Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec4970b0c834a299fd3608d292d7133a223d3e559dcfa4497f702663fe089f6d,PodSandboxId:19c8adc51ae2583daa35a6e1cfcb4d16a3f3de0bab8a0e9d754022d7d20623a2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720049989286136609,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vxz7l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bdcc46c-cbaa-4168-9496-4b9b393dc05d,},Annotations:map[string]string{io.kubernetes.container.hash: 2c1cbbc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891161ab87387b8d3a8d3cc6ea9038853d506de76ea361ccd02b75b2a8926926,PodSandboxId:d69190efd95390c916acc2e31cc816459cef1c3183a2e8ab61f58e55a2103e7f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720049955841493318,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p8ckf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28591303-b860-4d2b-9c34-3fb77062ec2d,},Annotations:map[string]string{io.kubernetes.container.hash: 26ffab31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2201946bdde9c4f2f58055f3a3a3fa8ad6ea7a035046f44fd2b90b46df3c41c2,PodSandboxId:0045d033faf88b35afd755bddb4b12e8767c7d7c95bdf01a76ef44ed0b492417,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720049955635217913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq58d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420f9d81-e376-4e42-b8e6-7c5d783a5c6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c79470c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06bcfe51368b9bd5dfe4e8fe3d5355dc1bd74ed4738d47ad8abc52817d1e67f0,PodSandboxId:1eb4f441185dd8a934b464553d077711504a1ce02d4df97d35edb32b99d04c51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720049955579119350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppwdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eda932b-d2db-481c-894d-6c0ed215c9dd,},Annotations:map[string]
string{io.kubernetes.container.hash: 219e8f9e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1364de120bc0e1595b878575334a5ee6ad2a340b08397156b5ea8224613b657,PodSandboxId:ddcd3155983f6a24f6c50fa27bb93eaca4b6396df51584383d4698d48354f158,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720049955508285971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08127733-cc97-4e47-b45f-623a612229c3,},Annotations:map[string]string{io.ku
bernetes.container.hash: 814fe246,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bedd6174dc75e02d07747d7da18b0c7d14dc35e8dc50d05ebbb394800a588a,PodSandboxId:075e262c83d198c4a40caf9960a5009020b7937b55679a30e54267eaad4f4a79,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720049951721995438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a5dc959ce82b296054e2b82f0b2beb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcebe5779fdd23d4747cb284baa2ed6ca455d61ea7f612fca0c321bdf877804e,PodSandboxId:ecb6cd672ae438c90842b7e07cf16df1364c56faa8db5eca216798fc7fec8ce5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720049951747228834,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e50192c0163f56a4a82944eb5bdd1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c932a75,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf43bf6dcaeaabe4341a0ac1083631041bb05c2e480dd6ff1c7a07084a592be,PodSandboxId:ceadb6ca914a6d9820089a935f90665ef30e13a398d8cb1a07eb3e39fff922bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720049951621615351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f32a704e81145abf5d847077ae0cd5da,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7439ac3eb6239673de5aae5f7d5e6a3b8fdf51932e38d531f3b298ba52eeed1,PodSandboxId:688ba1e228cc832bb87337ed524dd815b76418f0a259a256fb2d8a752fd16ab7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720049951631748130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1ed977e33b7e3f10504ee9c3ffc67c,},Annotations:map[string]string{io.kubernetes.container.hash: e06935f2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b6f1745f72017b7ea3c96bf0adee09adf509561be3b17cfd5c39bce5084d220,PodSandboxId:0399ae962937e974c8ab21ff30b0d620c5d488489d33cd11265b554bf97b5553,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720049646313534518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vxz7l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bdcc46c-cbaa-4168-9496-4b9b393dc05d,},Annotations:map[string]string{io.kubernetes.container.hash: 2c1cbbc4,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d9ad30a6abffde41231e7691fdff756a06e9b01043d391c45106aad145e021,PodSandboxId:bfef010a94ec56edd9ff1e07e4e495b6cec8f3422a2ade57b109bfa7a18e322c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720049603999317678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08127733-cc97-4e47-b45f-623a612229c3,},Annotations:map[string]string{io.kubernetes.container.hash: 814fe246,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2185302f0a5e1f651f48b4fce4e2eb903c01ea011e501d316fa2d2e9ee8e7e,PodSandboxId:d0f2c0c960b988f106e80eee9d7ddb3765f43c3716bea084ae299dadf5370207,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720049603419456445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq58d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420f9d81-e376-4e42-b8e6-7c5d783a5c6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c79470c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90a18a533fad33f1399c7340fb4993be137dec52290c4c684b07492780a7fb8,PodSandboxId:d62314963eab873c101f1e84859396a6d975f1d2e74581c63e26321c8d290d7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720049601583183915,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p8ckf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 28591303-b860-4d2b-9c34-3fb77062ec2d,},Annotations:map[string]string{io.kubernetes.container.hash: 26ffab31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e806db9236b2467c48e7dd21dd18ba8d0c3c7ae5e49d70d9d2c66e626ab40b,PodSandboxId:f7aa32a824ac9218bc628d4b0321e59620011e79df123e4c01e7ef789b1bbaac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720049601156072183,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppwdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eda932b-d2db-481c-894d-
6c0ed215c9dd,},Annotations:map[string]string{io.kubernetes.container.hash: 219e8f9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ab9ac7bf2fab170766c6ebd2e09978a01e95b5439fb5a10321dd0bcfbeb824,PodSandboxId:783603acdf170f727c1f9d9168781d57befa39910fa645bf75425c016988c8b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720049581741963458,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e50192c0163f56a4a82944eb5bdd1d,},Annotations:map[string]string{
io.kubernetes.container.hash: 3c932a75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8579204aa24c1177675a1819ee14b490f0763cc4e5b53882a54efffaca1f95,PodSandboxId:94d8a77051e1b7ab63b093029c110b1dd33eafbd07203eb6e74033e77a497dbc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720049581707130260,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f32a704e81145abf5d847077ae0cd5da,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0183156fe7718a9c5da5d808f8acf30a08adb55a0a6e970038a1ed3425ff831,PodSandboxId:065c6eec4a4ef335822d1a0f8d0b0e706d8b27b521425f5a88b2f4d892137da3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720049581702002675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a5dc959ce82b296054e2b82f0b2beb,},Annotations:map[string]string{io.
kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee987852f4c38991f05b95124ddc7975bad40e1c917593d5486b9df017fc399e,PodSandboxId:7dfc9760670b784e1b3401cfac7aa3ae230898ca96b0aa2371e370926f1981cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720049581674046557,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-184661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b1ed977e33b7e3f10504ee9c3ffc67c,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e06935f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5584516-f89c-4ab0-b30b-c3efc6256cc1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ec4970b0c834a       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   19c8adc51ae25       busybox-fc5497c4f-vxz7l
	891161ab87387       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      3 minutes ago       Running             kindnet-cni               1                   d69190efd9539       kindnet-p8ckf
	2201946bdde9c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   0045d033faf88       coredns-7db6d8ff4d-cq58d
	06bcfe51368b9       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      3 minutes ago       Running             kube-proxy                1                   1eb4f441185dd       kube-proxy-ppwdr
	f1364de120bc0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   ddcd3155983f6       storage-provisioner
	fcebe5779fdd2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   ecb6cd672ae43       etcd-multinode-184661
	26bedd6174dc7       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      3 minutes ago       Running             kube-scheduler            1                   075e262c83d19       kube-scheduler-multinode-184661
	c7439ac3eb623       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      3 minutes ago       Running             kube-apiserver            1                   688ba1e228cc8       kube-apiserver-multinode-184661
	daf43bf6dcaea       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      3 minutes ago       Running             kube-controller-manager   1                   ceadb6ca914a6       kube-controller-manager-multinode-184661
	8b6f1745f7201       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   0399ae962937e       busybox-fc5497c4f-vxz7l
	b6d9ad30a6abf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   bfef010a94ec5       storage-provisioner
	9a2185302f0a5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   d0f2c0c960b98       coredns-7db6d8ff4d-cq58d
	f90a18a533fad       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      9 minutes ago       Exited              kindnet-cni               0                   d62314963eab8       kindnet-p8ckf
	10e806db9236b       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      9 minutes ago       Exited              kube-proxy                0                   f7aa32a824ac9       kube-proxy-ppwdr
	d2ab9ac7bf2fa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   783603acdf170       etcd-multinode-184661
	ee8579204aa24       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      10 minutes ago      Exited              kube-controller-manager   0                   94d8a77051e1b       kube-controller-manager-multinode-184661
	a0183156fe771       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      10 minutes ago      Exited              kube-scheduler            0                   065c6eec4a4ef       kube-scheduler-multinode-184661
	ee987852f4c38       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      10 minutes ago      Exited              kube-apiserver            0                   7dfc9760670b7       kube-apiserver-multinode-184661
	
	
	==> coredns [2201946bdde9c4f2f58055f3a3a3fa8ad6ea7a035046f44fd2b90b46df3c41c2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48531 - 45637 "HINFO IN 9048751508592838614.4917136754263494149. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010150224s
	
	
	==> coredns [9a2185302f0a5e1f651f48b4fce4e2eb903c01ea011e501d316fa2d2e9ee8e7e] <==
	[INFO] 10.244.0.3:49691 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001652887s
	[INFO] 10.244.0.3:54574 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075872s
	[INFO] 10.244.0.3:59082 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000052396s
	[INFO] 10.244.0.3:42277 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001192524s
	[INFO] 10.244.0.3:39993 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059107s
	[INFO] 10.244.0.3:44327 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080517s
	[INFO] 10.244.0.3:59790 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059188s
	[INFO] 10.244.1.2:32863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118916s
	[INFO] 10.244.1.2:57228 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091602s
	[INFO] 10.244.1.2:56855 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087559s
	[INFO] 10.244.1.2:36669 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097908s
	[INFO] 10.244.0.3:49612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169473s
	[INFO] 10.244.0.3:46927 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000741s
	[INFO] 10.244.0.3:38176 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060157s
	[INFO] 10.244.0.3:44079 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099302s
	[INFO] 10.244.1.2:38153 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195374s
	[INFO] 10.244.1.2:33161 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000122803s
	[INFO] 10.244.1.2:54373 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097172s
	[INFO] 10.244.1.2:42054 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000248363s
	[INFO] 10.244.0.3:49980 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140039s
	[INFO] 10.244.0.3:51889 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000074447s
	[INFO] 10.244.0.3:40815 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082516s
	[INFO] 10.244.0.3:50422 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00006287s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-184661
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-184661
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=multinode-184661
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_03T23_33_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:33:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-184661
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:42:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:39:14 +0000   Wed, 03 Jul 2024 23:33:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:39:14 +0000   Wed, 03 Jul 2024 23:33:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:39:14 +0000   Wed, 03 Jul 2024 23:33:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:39:14 +0000   Wed, 03 Jul 2024 23:33:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.57
	  Hostname:    multinode-184661
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a4bbb9aa92f246c490df3bdc3e5ca646
	  System UUID:                a4bbb9aa-92f2-46c4-90df-3bdc3e5ca646
	  Boot ID:                    42f5dd8c-0341-4ae6-8329-019bbb2ea5a7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vxz7l                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
	  kube-system                 coredns-7db6d8ff4d-cq58d                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m42s
	  kube-system                 etcd-multinode-184661                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m55s
	  kube-system                 kindnet-p8ckf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m42s
	  kube-system                 kube-apiserver-multinode-184661             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-controller-manager-multinode-184661    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-proxy-ppwdr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	  kube-system                 kube-scheduler-multinode-184661             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m40s                  kube-proxy       
	  Normal  Starting                 3m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node multinode-184661 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node multinode-184661 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node multinode-184661 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m56s                  kubelet          Node multinode-184661 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  9m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    9m56s                  kubelet          Node multinode-184661 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m56s                  kubelet          Node multinode-184661 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m56s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m43s                  node-controller  Node multinode-184661 event: Registered Node multinode-184661 in Controller
	  Normal  NodeReady                9m40s                  kubelet          Node multinode-184661 status is now: NodeReady
	  Normal  Starting                 3m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m51s (x8 over 3m51s)  kubelet          Node multinode-184661 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x8 over 3m51s)  kubelet          Node multinode-184661 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x7 over 3m51s)  kubelet          Node multinode-184661 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m35s                  node-controller  Node multinode-184661 event: Registered Node multinode-184661 in Controller
	
	
	Name:               multinode-184661-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-184661-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=multinode-184661
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_03T23_39_54_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:39:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-184661-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:40:34 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 03 Jul 2024 23:40:24 +0000   Wed, 03 Jul 2024 23:41:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 03 Jul 2024 23:40:24 +0000   Wed, 03 Jul 2024 23:41:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 03 Jul 2024 23:40:24 +0000   Wed, 03 Jul 2024 23:41:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 03 Jul 2024 23:40:24 +0000   Wed, 03 Jul 2024 23:41:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.115
	  Hostname:    multinode-184661-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 597cfdbb4d4244ff86d4a496dc6c1d59
	  System UUID:                597cfdbb-4d42-44ff-86d4-a496dc6c1d59
	  Boot ID:                    5af469cb-7eaf-4a81-b705-0076bd404a30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fgmqc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                 kindnet-k29rj              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m11s
	  kube-system                 kube-proxy-jqxqn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m4s                   kube-proxy       
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m11s (x2 over 9m11s)  kubelet          Node multinode-184661-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m11s (x2 over 9m11s)  kubelet          Node multinode-184661-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m11s (x2 over 9m11s)  kubelet          Node multinode-184661-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m2s                   kubelet          Node multinode-184661-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m9s (x2 over 3m9s)    kubelet          Node multinode-184661-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m9s (x2 over 3m9s)    kubelet          Node multinode-184661-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m9s (x2 over 3m9s)    kubelet          Node multinode-184661-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m9s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m                     kubelet          Node multinode-184661-m02 status is now: NodeReady
	  Normal  NodeNotReady             105s                   node-controller  Node multinode-184661-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.078031] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.168008] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.147281] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.295824] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.278232] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.061879] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.610496] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[Jul 3 23:33] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.615710] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.069084] kauditd_printk_skb: 41 callbacks suppressed
	[ +14.152291] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.156219] kauditd_printk_skb: 21 callbacks suppressed
	[Jul 3 23:34] kauditd_printk_skb: 84 callbacks suppressed
	[Jul 3 23:39] systemd-fstab-generator[2767]: Ignoring "noauto" option for root device
	[  +0.156275] systemd-fstab-generator[2779]: Ignoring "noauto" option for root device
	[  +0.206199] systemd-fstab-generator[2794]: Ignoring "noauto" option for root device
	[  +0.143771] systemd-fstab-generator[2806]: Ignoring "noauto" option for root device
	[  +0.288969] systemd-fstab-generator[2834]: Ignoring "noauto" option for root device
	[  +2.034742] systemd-fstab-generator[2938]: Ignoring "noauto" option for root device
	[  +3.150901] systemd-fstab-generator[3063]: Ignoring "noauto" option for root device
	[  +0.084841] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.011637] kauditd_printk_skb: 82 callbacks suppressed
	[ +11.175458] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.834145] systemd-fstab-generator[3873]: Ignoring "noauto" option for root device
	[ +20.366696] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [d2ab9ac7bf2fab170766c6ebd2e09978a01e95b5439fb5a10321dd0bcfbeb824] <==
	{"level":"info","ts":"2024-07-03T23:33:56.494933Z","caller":"traceutil/trace.go:171","msg":"trace[676682260] transaction","detail":"{read_only:false; response_revision:513; number_of_response:1; }","duration":"213.555707ms","start":"2024-07-03T23:33:56.28137Z","end":"2024-07-03T23:33:56.494926Z","steps":["trace[676682260] 'process raft request'  (duration: 213.184664ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:33:56.495098Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.614757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-07-03T23:33:56.495205Z","caller":"traceutil/trace.go:171","msg":"trace[1948048317] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:514; }","duration":"155.746396ms","start":"2024-07-03T23:33:56.339446Z","end":"2024-07-03T23:33:56.495192Z","steps":["trace[1948048317] 'agreement among raft nodes before linearized reading'  (duration: 155.511754ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T23:33:56.495108Z","caller":"traceutil/trace.go:171","msg":"trace[313011572] transaction","detail":"{read_only:false; response_revision:514; number_of_response:1; }","duration":"195.670851ms","start":"2024-07-03T23:33:56.29943Z","end":"2024-07-03T23:33:56.495101Z","steps":["trace[313011572] 'process raft request'  (duration: 195.300959ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:33:56.786881Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.027427ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17815554558663876303 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:506 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-03T23:33:56.787738Z","caller":"traceutil/trace.go:171","msg":"trace[1896032271] linearizableReadLoop","detail":"{readStateIndex:534; appliedIndex:533; }","duration":"159.076729ms","start":"2024-07-03T23:33:56.628648Z","end":"2024-07-03T23:33:56.787725Z","steps":["trace[1896032271] 'read index received'  (duration: 32.935159ms)","trace[1896032271] 'applied index is now lower than readState.Index'  (duration: 126.139789ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-03T23:33:56.787834Z","caller":"traceutil/trace.go:171","msg":"trace[325940741] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"284.460847ms","start":"2024-07-03T23:33:56.503314Z","end":"2024-07-03T23:33:56.787774Z","steps":["trace[325940741] 'process raft request'  (duration: 158.302655ms)","trace[325940741] 'compare'  (duration: 124.433085ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-03T23:33:56.787909Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.265585ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-184661-m02\" ","response":"range_response_count:1 size:3023"}
	{"level":"info","ts":"2024-07-03T23:33:56.787955Z","caller":"traceutil/trace.go:171","msg":"trace[1356973007] range","detail":"{range_begin:/registry/minions/multinode-184661-m02; range_end:; response_count:1; response_revision:515; }","duration":"159.340832ms","start":"2024-07-03T23:33:56.628608Z","end":"2024-07-03T23:33:56.787949Z","steps":["trace[1356973007] 'agreement among raft nodes before linearized reading'  (duration: 159.251464ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:34:38.537641Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.249954ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17815554558663876616 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-184661-m03.17ded811b4da1088\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-184661-m03.17ded811b4da1088\" value_size:646 lease:8592182521809100596 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-03T23:34:38.538223Z","caller":"traceutil/trace.go:171","msg":"trace[749894674] linearizableReadLoop","detail":"{readStateIndex:631; appliedIndex:630; }","duration":"204.227552ms","start":"2024-07-03T23:34:38.333975Z","end":"2024-07-03T23:34:38.538202Z","steps":["trace[749894674] 'read index received'  (duration: 82.305208ms)","trace[749894674] 'applied index is now lower than readState.Index'  (duration: 121.921063ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-03T23:34:38.538404Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.414878ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-184661-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-03T23:34:38.538449Z","caller":"traceutil/trace.go:171","msg":"trace[1481507871] range","detail":"{range_begin:/registry/minions/multinode-184661-m03; range_end:; response_count:0; response_revision:603; }","duration":"204.492999ms","start":"2024-07-03T23:34:38.33395Z","end":"2024-07-03T23:34:38.538443Z","steps":["trace[1481507871] 'agreement among raft nodes before linearized reading'  (duration: 204.382393ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T23:34:38.538278Z","caller":"traceutil/trace.go:171","msg":"trace[519313792] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"256.793341ms","start":"2024-07-03T23:34:38.281462Z","end":"2024-07-03T23:34:38.538255Z","steps":["trace[519313792] 'process raft request'  (duration: 134.863562ms)","trace[519313792] 'compare'  (duration: 121.089035ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-03T23:34:38.541628Z","caller":"traceutil/trace.go:171","msg":"trace[1705781359] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"187.845623ms","start":"2024-07-03T23:34:38.353769Z","end":"2024-07-03T23:34:38.541614Z","steps":["trace[1705781359] 'process raft request'  (duration: 187.499862ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T23:37:33.516842Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-03T23:37:33.516986Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-184661","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.57:2380"],"advertise-client-urls":["https://192.168.39.57:2379"]}
	{"level":"warn","ts":"2024-07-03T23:37:33.517112Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-03T23:37:33.517257Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-03T23:37:33.59746Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.57:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-03T23:37:33.597496Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.57:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-03T23:37:33.598922Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"79ee2fa200dbf73d","current-leader-member-id":"79ee2fa200dbf73d"}
	{"level":"info","ts":"2024-07-03T23:37:33.60137Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-07-03T23:37:33.60149Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-07-03T23:37:33.601499Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-184661","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.57:2380"],"advertise-client-urls":["https://192.168.39.57:2379"]}
	
	
	==> etcd [fcebe5779fdd23d4747cb284baa2ed6ca455d61ea7f612fca0c321bdf877804e] <==
	{"level":"info","ts":"2024-07-03T23:39:12.276683Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-03T23:39:12.278282Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"79ee2fa200dbf73d","initial-advertise-peer-urls":["https://192.168.39.57:2380"],"listen-peer-urls":["https://192.168.39.57:2380"],"advertise-client-urls":["https://192.168.39.57:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.57:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-03T23:39:12.280853Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-03T23:39:12.276722Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-07-03T23:39:12.28101Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-07-03T23:39:12.282002Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"79ee2fa200dbf73d","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-07-03T23:39:12.282098Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-03T23:39:12.282978Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-03T23:39:12.283027Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-03T23:39:12.282392Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cdb6bc6ece496785","local-member-id":"79ee2fa200dbf73d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-03T23:39:12.283131Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-03T23:39:13.210407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-03T23:39:13.210528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-03T23:39:13.210697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d received MsgPreVoteResp from 79ee2fa200dbf73d at term 2"}
	{"level":"info","ts":"2024-07-03T23:39:13.21078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became candidate at term 3"}
	{"level":"info","ts":"2024-07-03T23:39:13.21087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d received MsgVoteResp from 79ee2fa200dbf73d at term 3"}
	{"level":"info","ts":"2024-07-03T23:39:13.210909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became leader at term 3"}
	{"level":"info","ts":"2024-07-03T23:39:13.211006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 79ee2fa200dbf73d elected leader 79ee2fa200dbf73d at term 3"}
	{"level":"info","ts":"2024-07-03T23:39:13.216897Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"79ee2fa200dbf73d","local-member-attributes":"{Name:multinode-184661 ClientURLs:[https://192.168.39.57:2379]}","request-path":"/0/members/79ee2fa200dbf73d/attributes","cluster-id":"cdb6bc6ece496785","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-03T23:39:13.218845Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-03T23:39:13.219351Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-03T23:39:13.224294Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.57:2379"}
	{"level":"info","ts":"2024-07-03T23:39:13.224397Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-03T23:39:13.225941Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-03T23:39:13.225756Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 23:43:02 up 10 min,  0 users,  load average: 0.19, 0.21, 0.11
	Linux multinode-184661 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [891161ab87387b8d3a8d3cc6ea9038853d506de76ea361ccd02b75b2a8926926] <==
	I0703 23:41:56.863342       1 main.go:250] Node multinode-184661-m02 has CIDR [10.244.1.0/24] 
	I0703 23:42:06.876676       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0703 23:42:06.876719       1 main.go:227] handling current node
	I0703 23:42:06.876735       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0703 23:42:06.876740       1 main.go:250] Node multinode-184661-m02 has CIDR [10.244.1.0/24] 
	I0703 23:42:16.882618       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0703 23:42:16.882679       1 main.go:227] handling current node
	I0703 23:42:16.882707       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0703 23:42:16.882712       1 main.go:250] Node multinode-184661-m02 has CIDR [10.244.1.0/24] 
	I0703 23:42:26.894903       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0703 23:42:26.894945       1 main.go:227] handling current node
	I0703 23:42:26.894958       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0703 23:42:26.894964       1 main.go:250] Node multinode-184661-m02 has CIDR [10.244.1.0/24] 
	I0703 23:42:36.899496       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0703 23:42:36.899532       1 main.go:227] handling current node
	I0703 23:42:36.899543       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0703 23:42:36.899548       1 main.go:250] Node multinode-184661-m02 has CIDR [10.244.1.0/24] 
	I0703 23:42:46.911433       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0703 23:42:46.911472       1 main.go:227] handling current node
	I0703 23:42:46.911482       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0703 23:42:46.911487       1 main.go:250] Node multinode-184661-m02 has CIDR [10.244.1.0/24] 
	I0703 23:42:56.916271       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0703 23:42:56.916369       1 main.go:227] handling current node
	I0703 23:42:56.916403       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0703 23:42:56.916422       1 main.go:250] Node multinode-184661-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [f90a18a533fad33f1399c7340fb4993be137dec52290c4c684b07492780a7fb8] <==
	I0703 23:36:52.735205       1 main.go:250] Node multinode-184661-m03 has CIDR [10.244.3.0/24] 
	I0703 23:37:02.747747       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0703 23:37:02.747855       1 main.go:227] handling current node
	I0703 23:37:02.747868       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0703 23:37:02.747873       1 main.go:250] Node multinode-184661-m02 has CIDR [10.244.1.0/24] 
	I0703 23:37:02.748131       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0703 23:37:02.748166       1 main.go:250] Node multinode-184661-m03 has CIDR [10.244.3.0/24] 
	I0703 23:37:12.760601       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0703 23:37:12.760649       1 main.go:227] handling current node
	I0703 23:37:12.760660       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0703 23:37:12.760665       1 main.go:250] Node multinode-184661-m02 has CIDR [10.244.1.0/24] 
	I0703 23:37:12.760767       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0703 23:37:12.760919       1 main.go:250] Node multinode-184661-m03 has CIDR [10.244.3.0/24] 
	I0703 23:37:22.765462       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0703 23:37:22.825920       1 main.go:227] handling current node
	I0703 23:37:22.826056       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0703 23:37:22.826084       1 main.go:250] Node multinode-184661-m02 has CIDR [10.244.1.0/24] 
	I0703 23:37:22.826231       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0703 23:37:22.826255       1 main.go:250] Node multinode-184661-m03 has CIDR [10.244.3.0/24] 
	I0703 23:37:32.839726       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0703 23:37:32.839751       1 main.go:227] handling current node
	I0703 23:37:32.839762       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0703 23:37:32.839766       1 main.go:250] Node multinode-184661-m02 has CIDR [10.244.1.0/24] 
	I0703 23:37:32.840081       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0703 23:37:32.840091       1 main.go:250] Node multinode-184661-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c7439ac3eb6239673de5aae5f7d5e6a3b8fdf51932e38d531f3b298ba52eeed1] <==
	E0703 23:39:14.678417       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0703 23:39:14.701134       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0703 23:39:14.701285       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0703 23:39:14.701320       1 policy_source.go:224] refreshing policies
	I0703 23:39:14.742597       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0703 23:39:14.742912       1 shared_informer.go:320] Caches are synced for configmaps
	I0703 23:39:14.743016       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0703 23:39:14.747981       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0703 23:39:14.749241       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0703 23:39:14.749274       1 aggregator.go:165] initial CRD sync complete...
	I0703 23:39:14.749288       1 autoregister_controller.go:141] Starting autoregister controller
	I0703 23:39:14.749295       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0703 23:39:14.749302       1 cache.go:39] Caches are synced for autoregister controller
	I0703 23:39:14.751192       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0703 23:39:14.751774       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0703 23:39:14.751873       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0703 23:39:14.752844       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0703 23:39:15.587942       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0703 23:39:17.038271       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0703 23:39:17.181215       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0703 23:39:17.201154       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0703 23:39:17.276199       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0703 23:39:17.290459       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0703 23:39:26.982566       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0703 23:39:27.233133       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [ee987852f4c38991f05b95124ddc7975bad40e1c917593d5486b9df017fc399e] <==
	W0703 23:37:33.539294       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551039       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551195       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551257       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551312       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551369       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551404       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551484       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551536       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551590       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551643       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551705       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.551756       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.553107       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.554015       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.554081       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.554136       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.554203       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0703 23:37:33.554261       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0703 23:37:33.554670       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0703 23:37:33.555023       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0703 23:37:33.555209       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0703 23:37:33.555359       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0703 23:37:33.555542       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0703 23:37:33.555765       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-controller-manager [daf43bf6dcaeaabe4341a0ac1083631041bb05c2e480dd6ff1c7a07084a592be] <==
	I0703 23:39:53.685393       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-184661-m02\" does not exist"
	I0703 23:39:53.699765       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-184661-m02" podCIDRs=["10.244.1.0/24"]
	I0703 23:39:55.576281       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.638µs"
	I0703 23:39:55.585561       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.434µs"
	I0703 23:39:55.634130       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.868µs"
	I0703 23:39:55.642936       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.632µs"
	I0703 23:39:55.647871       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="115.057µs"
	I0703 23:39:57.862828       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="584.811µs"
	I0703 23:40:02.712077       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:40:02.740982       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.662µs"
	I0703 23:40:02.774576       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.194µs"
	I0703 23:40:06.777183       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.292007ms"
	I0703 23:40:06.777313       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.95µs"
	I0703 23:40:21.486534       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:40:22.600287       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-184661-m03\" does not exist"
	I0703 23:40:22.600648       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:40:22.623866       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-184661-m03" podCIDRs=["10.244.2.0/24"]
	I0703 23:40:31.710341       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:40:37.279286       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:41:17.205407       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.849441ms"
	I0703 23:41:17.205507       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.051µs"
	I0703 23:41:47.039026       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-z9csj"
	I0703 23:41:47.066378       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-z9csj"
	I0703 23:41:47.066465       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-hcctk"
	I0703 23:41:47.087190       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-hcctk"
	
	
	==> kube-controller-manager [ee8579204aa24c1177675a1819ee14b490f0763cc4e5b53882a54efffaca1f95] <==
	I0703 23:33:51.392433       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-184661-m02\" does not exist"
	I0703 23:33:51.411928       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-184661-m02" podCIDRs=["10.244.1.0/24"]
	I0703 23:33:54.824467       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-184661-m02"
	I0703 23:34:00.917058       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:34:03.242916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.776925ms"
	I0703 23:34:03.276086       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.773954ms"
	I0703 23:34:03.296986       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.83842ms"
	I0703 23:34:03.297342       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="107.568µs"
	I0703 23:34:07.212622       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.037004ms"
	I0703 23:34:07.220373       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.039352ms"
	I0703 23:34:07.220511       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.588µs"
	I0703 23:34:07.220574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.448µs"
	I0703 23:34:38.546673       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:34:38.547394       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-184661-m03\" does not exist"
	I0703 23:34:38.556463       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-184661-m03" podCIDRs=["10.244.2.0/24"]
	I0703 23:34:39.850667       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-184661-m03"
	I0703 23:34:49.089289       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:35:18.344650       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:35:19.530660       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:35:19.531278       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-184661-m03\" does not exist"
	I0703 23:35:19.554619       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-184661-m03" podCIDRs=["10.244.3.0/24"]
	I0703 23:35:27.574039       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m02"
	I0703 23:36:04.904321       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-184661-m03"
	I0703 23:36:04.960697       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.489012ms"
	I0703 23:36:04.960838       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.437µs"
	
	
	==> kube-proxy [06bcfe51368b9bd5dfe4e8fe3d5355dc1bd74ed4738d47ad8abc52817d1e67f0] <==
	I0703 23:39:15.967689       1 server_linux.go:69] "Using iptables proxy"
	I0703 23:39:15.985715       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.57"]
	I0703 23:39:16.082747       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0703 23:39:16.082902       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0703 23:39:16.082920       1 server_linux.go:165] "Using iptables Proxier"
	I0703 23:39:16.088563       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0703 23:39:16.088849       1 server.go:872] "Version info" version="v1.30.2"
	I0703 23:39:16.088879       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:39:16.091534       1 config.go:192] "Starting service config controller"
	I0703 23:39:16.091565       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0703 23:39:16.091594       1 config.go:101] "Starting endpoint slice config controller"
	I0703 23:39:16.091598       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0703 23:39:16.092512       1 config.go:319] "Starting node config controller"
	I0703 23:39:16.092545       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0703 23:39:16.192438       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0703 23:39:16.192508       1 shared_informer.go:320] Caches are synced for service config
	I0703 23:39:16.192767       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [10e806db9236b2467c48e7dd21dd18ba8d0c3c7ae5e49d70d9d2c66e626ab40b] <==
	I0703 23:33:21.607835       1 server_linux.go:69] "Using iptables proxy"
	I0703 23:33:21.621356       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.57"]
	I0703 23:33:21.694959       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0703 23:33:21.695023       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0703 23:33:21.695041       1 server_linux.go:165] "Using iptables Proxier"
	I0703 23:33:21.698693       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0703 23:33:21.698983       1 server.go:872] "Version info" version="v1.30.2"
	I0703 23:33:21.699017       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:33:21.700220       1 config.go:192] "Starting service config controller"
	I0703 23:33:21.700234       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0703 23:33:21.700256       1 config.go:101] "Starting endpoint slice config controller"
	I0703 23:33:21.700260       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0703 23:33:21.702638       1 config.go:319] "Starting node config controller"
	I0703 23:33:21.702652       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0703 23:33:21.801988       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0703 23:33:21.804423       1 shared_informer.go:320] Caches are synced for node config
	I0703 23:33:21.802023       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [26bedd6174dc75e02d07747d7da18b0c7d14dc35e8dc50d05ebbb394800a588a] <==
	I0703 23:39:12.901514       1 serving.go:380] Generated self-signed cert in-memory
	W0703 23:39:14.654550       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0703 23:39:14.654642       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0703 23:39:14.654708       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0703 23:39:14.654716       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0703 23:39:14.679450       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0703 23:39:14.679498       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:39:14.681756       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0703 23:39:14.682003       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0703 23:39:14.682032       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0703 23:39:14.682052       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0703 23:39:14.786953       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a0183156fe7718a9c5da5d808f8acf30a08adb55a0a6e970038a1ed3425ff831] <==
	E0703 23:33:04.399878       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0703 23:33:04.399884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0703 23:33:04.399891       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0703 23:33:04.399898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0703 23:33:04.400198       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0703 23:33:04.399200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0703 23:33:04.401190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0703 23:33:05.238774       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0703 23:33:05.238898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0703 23:33:05.282087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0703 23:33:05.282182       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0703 23:33:05.448944       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0703 23:33:05.449048       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0703 23:33:05.625182       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0703 23:33:05.625280       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0703 23:33:05.639972       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0703 23:33:05.640174       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0703 23:33:05.755142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0703 23:33:05.756232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0703 23:33:05.760358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0703 23:33:05.760468       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0703 23:33:05.956066       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0703 23:33:05.956275       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0703 23:33:09.177922       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0703 23:37:33.514438       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 03 23:39:14 multinode-184661 kubelet[3070]: I0703 23:39:14.949406    3070 topology_manager.go:215] "Topology Admit Handler" podUID="08127733-cc97-4e47-b45f-623a612229c3" podNamespace="kube-system" podName="storage-provisioner"
	Jul 03 23:39:14 multinode-184661 kubelet[3070]: I0703 23:39:14.949466    3070 topology_manager.go:215] "Topology Admit Handler" podUID="4bdcc46c-cbaa-4168-9496-4b9b393dc05d" podNamespace="default" podName="busybox-fc5497c4f-vxz7l"
	Jul 03 23:39:14 multinode-184661 kubelet[3070]: I0703 23:39:14.973609    3070 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 03 23:39:15 multinode-184661 kubelet[3070]: I0703 23:39:15.041656    3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3eda932b-d2db-481c-894d-6c0ed215c9dd-xtables-lock\") pod \"kube-proxy-ppwdr\" (UID: \"3eda932b-d2db-481c-894d-6c0ed215c9dd\") " pod="kube-system/kube-proxy-ppwdr"
	Jul 03 23:39:15 multinode-184661 kubelet[3070]: I0703 23:39:15.042166    3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3eda932b-d2db-481c-894d-6c0ed215c9dd-lib-modules\") pod \"kube-proxy-ppwdr\" (UID: \"3eda932b-d2db-481c-894d-6c0ed215c9dd\") " pod="kube-system/kube-proxy-ppwdr"
	Jul 03 23:39:15 multinode-184661 kubelet[3070]: I0703 23:39:15.042298    3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28591303-b860-4d2b-9c34-3fb77062ec2d-xtables-lock\") pod \"kindnet-p8ckf\" (UID: \"28591303-b860-4d2b-9c34-3fb77062ec2d\") " pod="kube-system/kindnet-p8ckf"
	Jul 03 23:39:15 multinode-184661 kubelet[3070]: I0703 23:39:15.042701    3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/08127733-cc97-4e47-b45f-623a612229c3-tmp\") pod \"storage-provisioner\" (UID: \"08127733-cc97-4e47-b45f-623a612229c3\") " pod="kube-system/storage-provisioner"
	Jul 03 23:39:15 multinode-184661 kubelet[3070]: I0703 23:39:15.042865    3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/28591303-b860-4d2b-9c34-3fb77062ec2d-cni-cfg\") pod \"kindnet-p8ckf\" (UID: \"28591303-b860-4d2b-9c34-3fb77062ec2d\") " pod="kube-system/kindnet-p8ckf"
	Jul 03 23:39:15 multinode-184661 kubelet[3070]: I0703 23:39:15.042931    3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28591303-b860-4d2b-9c34-3fb77062ec2d-lib-modules\") pod \"kindnet-p8ckf\" (UID: \"28591303-b860-4d2b-9c34-3fb77062ec2d\") " pod="kube-system/kindnet-p8ckf"
	Jul 03 23:39:18 multinode-184661 kubelet[3070]: I0703 23:39:18.998201    3070 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 03 23:40:11 multinode-184661 kubelet[3070]: E0703 23:40:11.030226    3070 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:40:11 multinode-184661 kubelet[3070]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:40:11 multinode-184661 kubelet[3070]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:40:11 multinode-184661 kubelet[3070]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:40:11 multinode-184661 kubelet[3070]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 23:41:11 multinode-184661 kubelet[3070]: E0703 23:41:11.028579    3070 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:41:11 multinode-184661 kubelet[3070]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:41:11 multinode-184661 kubelet[3070]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:41:11 multinode-184661 kubelet[3070]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:41:11 multinode-184661 kubelet[3070]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 23:42:11 multinode-184661 kubelet[3070]: E0703 23:42:11.035724    3070 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 23:42:11 multinode-184661 kubelet[3070]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 23:42:11 multinode-184661 kubelet[3070]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 23:42:11 multinode-184661 kubelet[3070]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 23:42:11 multinode-184661 kubelet[3070]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 23:43:01.435521   47064 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18998-9396/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
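For context on the "bufio.Scanner: token too long" error in the stderr block above: Go's bufio.Scanner rejects any single token larger than its 64 KiB default limit, which a very long line in lastStart.txt can exceed. Below is a minimal illustrative sketch of reading such a file with a larger per-line limit; the helper name, file name, and the 10 MiB cap are assumptions for illustration, not minikube's actual logs.go code.

package main

import (
	"bufio"
	"fmt"
	"os"
)

// readLongLines reads a file line by line while allowing lines far larger
// than bufio.Scanner's 64 KiB default token limit (the cause of the
// "token too long" error above). The 10 MiB cap is an arbitrary example.
func readLongLines(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	var lines []string
	for sc.Scan() {
		lines = append(lines, sc.Text())
	}
	return lines, sc.Err()
}

func main() {
	lines, err := readLongLines("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("read %d lines\n", len(lines))
}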
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-184661 -n multinode-184661
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-184661 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.90s)
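The recurring kubelet "Could not set up iptables canary" messages in the post-mortem above stem from ip6tables being unable to find the nat table, which typically means the ip6table_nat kernel module is not loaded (or not built) in the guest kernel. A hedged diagnostic sketch that checks /proc/modules for the module follows; it is hypothetical tooling, not part of the test suite.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// moduleLoaded reports whether the named kernel module appears in /proc/modules.
// Built-in (compiled-in) modules are not listed there, so false is only a hint.
func moduleLoaded(name string) (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) > 0 && fields[0] == name {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := moduleLoaded("ip6table_nat")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("ip6table_nat loaded: %v\n", ok)
}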

                                                
                                    
TestPreload (331.83s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-854380 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0703 23:48:57.357874   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-854380 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m9.050916226s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-854380 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-854380 image pull gcr.io/k8s-minikube/busybox: (2.876613339s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-854380
E0703 23:51:00.101907   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0703 23:51:17.055523   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-854380: exit status 82 (2m0.467720649s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-854380"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-854380 failed: exit status 82
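Exit status 82 (GUEST_STOP_TIMEOUT) means the stop command gave up while the VM still reported state "Running". As a rough sketch only, a caller could poll the host state with a bounded timeout using the same `status --format={{.Host}}` invocation the post-mortem below runs; the binary path, profile name, and timings here are taken from this log, and the retry logic is an assumption, not minikube's own.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

// waitForStopped polls `minikube status --format={{.Host}}` until the host
// reports "Stopped" or the deadline passes. Illustrative sketch only; the
// non-zero exit codes minikube returns for stopped hosts are ignored here.
func waitForStopped(binary, profile string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command(binary, "status", "--format={{.Host}}", "-p", profile).Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("%s still not stopped after %s", profile, timeout)
}

func main() {
	if err := waitForStopped("out/minikube-linux-amd64", "test-preload-854380", 2*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("VM stopped")
}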
panic.go:626: *** TestPreload FAILED at 2024-07-03 23:51:59.340520963 +0000 UTC m=+3912.273758717
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-854380 -n test-preload-854380
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-854380 -n test-preload-854380: exit status 3 (18.515290697s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 23:52:17.852256   50170 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.26:22: connect: no route to host
	E0703 23:52:17.852278   50170 status.go:131] status error: NewSession: new client: new client: dial tcp 192.168.39.26:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-854380" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-854380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-854380
--- FAIL: TestPreload (331.83s)

                                                
                                    
TestKubernetesUpgrade (439.39s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-652205 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-652205 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m34.414959155s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-652205] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18998
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-652205" primary control-plane node in "kubernetes-upgrade-652205" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 23:54:13.184936   51264 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:54:13.185075   51264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:54:13.185085   51264 out.go:304] Setting ErrFile to fd 2...
	I0703 23:54:13.185090   51264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:54:13.185374   51264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 23:54:13.186112   51264 out.go:298] Setting JSON to false
	I0703 23:54:13.187183   51264 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5793,"bootTime":1720045060,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:54:13.187267   51264 start.go:139] virtualization: kvm guest
	I0703 23:54:13.189790   51264 out.go:177] * [kubernetes-upgrade-652205] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 23:54:13.192382   51264 notify.go:220] Checking for updates...
	I0703 23:54:13.193197   51264 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 23:54:13.196149   51264 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:54:13.197783   51264 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:54:13.201058   51264 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:54:13.203744   51264 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 23:54:13.205181   51264 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 23:54:13.207054   51264 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:54:13.244084   51264 out.go:177] * Using the kvm2 driver based on user configuration
	I0703 23:54:13.245430   51264 start.go:297] selected driver: kvm2
	I0703 23:54:13.245442   51264 start.go:901] validating driver "kvm2" against <nil>
	I0703 23:54:13.245453   51264 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 23:54:13.246142   51264 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:54:13.246239   51264 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 23:54:13.262580   51264 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 23:54:13.262653   51264 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0703 23:54:13.262878   51264 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0703 23:54:13.262906   51264 cni.go:84] Creating CNI manager for ""
	I0703 23:54:13.262914   51264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 23:54:13.262925   51264 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0703 23:54:13.263007   51264 start.go:340] cluster config:
	{Name:kubernetes-upgrade-652205 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-652205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:54:13.263131   51264 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:54:13.264809   51264 out.go:177] * Starting "kubernetes-upgrade-652205" primary control-plane node in "kubernetes-upgrade-652205" cluster
	I0703 23:54:13.266146   51264 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0703 23:54:13.266179   51264 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0703 23:54:13.266195   51264 cache.go:56] Caching tarball of preloaded images
	I0703 23:54:13.266282   51264 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:54:13.266295   51264 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0703 23:54:13.266726   51264 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/config.json ...
	I0703 23:54:13.266754   51264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/config.json: {Name:mk083c036f59f4b02afc4fc090d5369f1524a331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:54:13.266907   51264 start.go:360] acquireMachinesLock for kubernetes-upgrade-652205: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 23:54:13.266951   51264 start.go:364] duration metric: took 24.819µs to acquireMachinesLock for "kubernetes-upgrade-652205"
	I0703 23:54:13.266973   51264 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-652205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-652205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:54:13.267042   51264 start.go:125] createHost starting for "" (driver="kvm2")
	I0703 23:54:13.268585   51264 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0703 23:54:13.268721   51264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:54:13.268761   51264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:54:13.285111   51264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39939
	I0703 23:54:13.285537   51264 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:54:13.286167   51264 main.go:141] libmachine: Using API Version  1
	I0703 23:54:13.286192   51264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:54:13.286568   51264 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:54:13.286818   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetMachineName
	I0703 23:54:13.286982   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .DriverName
	I0703 23:54:13.287173   51264 start.go:159] libmachine.API.Create for "kubernetes-upgrade-652205" (driver="kvm2")
	I0703 23:54:13.287204   51264 client.go:168] LocalClient.Create starting
	I0703 23:54:13.287238   51264 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem
	I0703 23:54:13.287294   51264 main.go:141] libmachine: Decoding PEM data...
	I0703 23:54:13.287316   51264 main.go:141] libmachine: Parsing certificate...
	I0703 23:54:13.287388   51264 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem
	I0703 23:54:13.287414   51264 main.go:141] libmachine: Decoding PEM data...
	I0703 23:54:13.287438   51264 main.go:141] libmachine: Parsing certificate...
	I0703 23:54:13.287464   51264 main.go:141] libmachine: Running pre-create checks...
	I0703 23:54:13.287483   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .PreCreateCheck
	I0703 23:54:13.287981   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetConfigRaw
	I0703 23:54:13.288402   51264 main.go:141] libmachine: Creating machine...
	I0703 23:54:13.288420   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .Create
	I0703 23:54:13.288545   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Creating KVM machine...
	I0703 23:54:13.289683   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found existing default KVM network
	I0703 23:54:13.290517   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | I0703 23:54:13.290357   51321 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001ad960}
	I0703 23:54:13.290570   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | created network xml: 
	I0703 23:54:13.290584   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | <network>
	I0703 23:54:13.290595   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG |   <name>mk-kubernetes-upgrade-652205</name>
	I0703 23:54:13.290610   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG |   <dns enable='no'/>
	I0703 23:54:13.290620   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG |   
	I0703 23:54:13.290631   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0703 23:54:13.290641   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG |     <dhcp>
	I0703 23:54:13.290651   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0703 23:54:13.290661   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG |     </dhcp>
	I0703 23:54:13.290677   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG |   </ip>
	I0703 23:54:13.290687   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG |   
	I0703 23:54:13.290699   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | </network>
	I0703 23:54:13.290714   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | 
	I0703 23:54:13.296054   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | trying to create private KVM network mk-kubernetes-upgrade-652205 192.168.39.0/24...
	I0703 23:54:13.368725   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | private KVM network mk-kubernetes-upgrade-652205 192.168.39.0/24 created
	I0703 23:54:13.368748   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Setting up store path in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205 ...
	I0703 23:54:13.368757   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | I0703 23:54:13.368702   51321 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:54:13.368790   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Building disk image from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0703 23:54:13.368979   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Downloading /home/jenkins/minikube-integration/18998-9396/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso...
	I0703 23:54:13.597883   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | I0703 23:54:13.597775   51321 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205/id_rsa...
	I0703 23:54:13.763635   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | I0703 23:54:13.763516   51321 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205/kubernetes-upgrade-652205.rawdisk...
	I0703 23:54:13.763664   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | Writing magic tar header
	I0703 23:54:13.763693   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | Writing SSH key tar header
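Just above, the driver creates the machine's SSH key and raw disk image inside the store path. A minimal, self-contained sketch of the key-pair step, assuming the standard library plus golang.org/x/crypto/ssh (output file names are illustrative):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// 2048-bit RSA key, in the classic docker-machine id_rsa style.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	// 0600: ssh refuses private keys with looser permissions (note the -rw------- in the log).
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		log.Fatal(err)
	}

	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		log.Fatal(err)
	}
}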
	I0703 23:54:13.763737   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | I0703 23:54:13.763652   51321 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205 ...
	I0703 23:54:13.763762   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205
	I0703 23:54:13.763795   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205 (perms=drwx------)
	I0703 23:54:13.763807   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines
	I0703 23:54:13.763814   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines (perms=drwxr-xr-x)
	I0703 23:54:13.763821   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:54:13.763831   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396
	I0703 23:54:13.763839   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0703 23:54:13.763847   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | Checking permissions on dir: /home/jenkins
	I0703 23:54:13.763854   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | Checking permissions on dir: /home
	I0703 23:54:13.763866   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | Skipping /home - not owner
	I0703 23:54:13.763901   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube (perms=drwxr-xr-x)
	I0703 23:54:13.763916   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396 (perms=drwxrwxr-x)
	I0703 23:54:13.763927   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0703 23:54:13.763939   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0703 23:54:13.763952   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Creating domain...
	I0703 23:54:13.764992   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) define libvirt domain using xml: 
	I0703 23:54:13.765017   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) <domain type='kvm'>
	I0703 23:54:13.765045   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)   <name>kubernetes-upgrade-652205</name>
	I0703 23:54:13.765071   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)   <memory unit='MiB'>2200</memory>
	I0703 23:54:13.765079   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)   <vcpu>2</vcpu>
	I0703 23:54:13.765085   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)   <features>
	I0703 23:54:13.765091   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     <acpi/>
	I0703 23:54:13.765096   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     <apic/>
	I0703 23:54:13.765101   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     <pae/>
	I0703 23:54:13.765107   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     
	I0703 23:54:13.765114   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)   </features>
	I0703 23:54:13.765119   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)   <cpu mode='host-passthrough'>
	I0703 23:54:13.765125   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)   
	I0703 23:54:13.765129   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)   </cpu>
	I0703 23:54:13.765138   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)   <os>
	I0703 23:54:13.765143   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     <type>hvm</type>
	I0703 23:54:13.765148   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     <boot dev='cdrom'/>
	I0703 23:54:13.765161   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     <boot dev='hd'/>
	I0703 23:54:13.765169   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     <bootmenu enable='no'/>
	I0703 23:54:13.765174   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)   </os>
	I0703 23:54:13.765193   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)   <devices>
	I0703 23:54:13.765203   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     <disk type='file' device='cdrom'>
	I0703 23:54:13.765212   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205/boot2docker.iso'/>
	I0703 23:54:13.765220   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)       <target dev='hdc' bus='scsi'/>
	I0703 23:54:13.765227   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)       <readonly/>
	I0703 23:54:13.765232   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     </disk>
	I0703 23:54:13.765244   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     <disk type='file' device='disk'>
	I0703 23:54:13.765253   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0703 23:54:13.765297   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205/kubernetes-upgrade-652205.rawdisk'/>
	I0703 23:54:13.765324   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)       <target dev='hda' bus='virtio'/>
	I0703 23:54:13.765337   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     </disk>
	I0703 23:54:13.765345   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     <interface type='network'>
	I0703 23:54:13.765358   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)       <source network='mk-kubernetes-upgrade-652205'/>
	I0703 23:54:13.765369   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)       <model type='virtio'/>
	I0703 23:54:13.765378   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     </interface>
	I0703 23:54:13.765387   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     <interface type='network'>
	I0703 23:54:13.765404   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)       <source network='default'/>
	I0703 23:54:13.765416   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)       <model type='virtio'/>
	I0703 23:54:13.765426   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     </interface>
	I0703 23:54:13.765434   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     <serial type='pty'>
	I0703 23:54:13.765445   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)       <target port='0'/>
	I0703 23:54:13.765452   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     </serial>
	I0703 23:54:13.765461   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     <console type='pty'>
	I0703 23:54:13.765470   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)       <target type='serial' port='0'/>
	I0703 23:54:13.765479   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     </console>
	I0703 23:54:13.765486   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     <rng model='virtio'>
	I0703 23:54:13.765493   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)       <backend model='random'>/dev/random</backend>
	I0703 23:54:13.765499   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     </rng>
	I0703 23:54:13.765517   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     
	I0703 23:54:13.765525   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)     
	I0703 23:54:13.765533   51264 main.go:141] libmachine: (kubernetes-upgrade-652205)   </devices>
	I0703 23:54:13.765540   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) </domain>
	I0703 23:54:13.765549   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) 
	I0703 23:54:13.769791   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:42:97:a4 in network default
	I0703 23:54:13.770307   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Ensuring networks are active...
	I0703 23:54:13.770319   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:13.770965   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Ensuring network default is active
	I0703 23:54:13.771233   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Ensuring network mk-kubernetes-upgrade-652205 is active
	I0703 23:54:13.771719   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Getting domain xml...
	I0703 23:54:13.772409   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Creating domain...
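The domain XML printed above is defined and then started ("Creating domain...") once the networks are confirmed active. A hedged sketch of that sequence with the libvirt Go bindings; the <devices> section is abbreviated here to keep the example short:

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

// Abbreviated: the full <devices> block (cdrom, raw disk, two virtio NICs,
// serial console, rng) is the XML printed in the log above.
const domainXML = `<domain type='kvm'>
  <name>kubernetes-upgrade-652205</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices></devices>
</domain>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// "Ensuring networks are active": start the private network if it is not running yet.
	nw, err := conn.LookupNetworkByName("mk-kubernetes-upgrade-652205")
	if err != nil {
		log.Fatal(err)
	}
	defer nw.Free()
	if active, _ := nw.IsActive(); !active {
		if err := nw.Create(); err != nil {
			log.Fatal(err)
		}
	}

	// Define the persistent domain from the XML, then start it ("Creating domain...").
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
}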
	I0703 23:54:15.009793   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Waiting to get IP...
	I0703 23:54:15.010707   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:15.011119   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | unable to find current IP address of domain kubernetes-upgrade-652205 in network mk-kubernetes-upgrade-652205
	I0703 23:54:15.011144   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | I0703 23:54:15.011091   51321 retry.go:31] will retry after 272.655643ms: waiting for machine to come up
	I0703 23:54:15.285287   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:15.285718   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | unable to find current IP address of domain kubernetes-upgrade-652205 in network mk-kubernetes-upgrade-652205
	I0703 23:54:15.285758   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | I0703 23:54:15.285678   51321 retry.go:31] will retry after 336.617058ms: waiting for machine to come up
	I0703 23:54:15.624376   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:15.624835   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | unable to find current IP address of domain kubernetes-upgrade-652205 in network mk-kubernetes-upgrade-652205
	I0703 23:54:15.624878   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | I0703 23:54:15.624826   51321 retry.go:31] will retry after 468.07228ms: waiting for machine to come up
	I0703 23:54:16.094745   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:16.095339   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | unable to find current IP address of domain kubernetes-upgrade-652205 in network mk-kubernetes-upgrade-652205
	I0703 23:54:16.095367   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | I0703 23:54:16.095297   51321 retry.go:31] will retry after 511.342154ms: waiting for machine to come up
	I0703 23:54:16.607887   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:16.608274   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | unable to find current IP address of domain kubernetes-upgrade-652205 in network mk-kubernetes-upgrade-652205
	I0703 23:54:16.608302   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | I0703 23:54:16.608222   51321 retry.go:31] will retry after 493.384405ms: waiting for machine to come up
	I0703 23:54:17.102827   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:17.103255   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | unable to find current IP address of domain kubernetes-upgrade-652205 in network mk-kubernetes-upgrade-652205
	I0703 23:54:17.103285   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | I0703 23:54:17.103195   51321 retry.go:31] will retry after 811.874425ms: waiting for machine to come up
	I0703 23:54:17.916412   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:17.916856   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | unable to find current IP address of domain kubernetes-upgrade-652205 in network mk-kubernetes-upgrade-652205
	I0703 23:54:17.916893   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | I0703 23:54:17.916808   51321 retry.go:31] will retry after 1.180835371s: waiting for machine to come up
	I0703 23:54:19.099362   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:19.099797   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | unable to find current IP address of domain kubernetes-upgrade-652205 in network mk-kubernetes-upgrade-652205
	I0703 23:54:19.099834   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | I0703 23:54:19.099767   51321 retry.go:31] will retry after 1.051639463s: waiting for machine to come up
	I0703 23:54:20.152914   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:20.153347   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | unable to find current IP address of domain kubernetes-upgrade-652205 in network mk-kubernetes-upgrade-652205
	I0703 23:54:20.153374   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | I0703 23:54:20.153297   51321 retry.go:31] will retry after 1.477405377s: waiting for machine to come up
	I0703 23:54:21.632974   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:21.633430   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | unable to find current IP address of domain kubernetes-upgrade-652205 in network mk-kubernetes-upgrade-652205
	I0703 23:54:21.633452   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | I0703 23:54:21.633379   51321 retry.go:31] will retry after 1.705863094s: waiting for machine to come up
	I0703 23:54:23.341235   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:23.341597   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | unable to find current IP address of domain kubernetes-upgrade-652205 in network mk-kubernetes-upgrade-652205
	I0703 23:54:23.341623   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | I0703 23:54:23.341553   51321 retry.go:31] will retry after 2.61380974s: waiting for machine to come up
	I0703 23:54:25.957054   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:25.957449   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | unable to find current IP address of domain kubernetes-upgrade-652205 in network mk-kubernetes-upgrade-652205
	I0703 23:54:25.957475   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | I0703 23:54:25.957416   51321 retry.go:31] will retry after 2.534862829s: waiting for machine to come up
	I0703 23:54:28.494329   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:28.494762   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | unable to find current IP address of domain kubernetes-upgrade-652205 in network mk-kubernetes-upgrade-652205
	I0703 23:54:28.494790   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | I0703 23:54:28.494713   51321 retry.go:31] will retry after 2.966940248s: waiting for machine to come up
	I0703 23:54:31.464721   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:31.465142   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | unable to find current IP address of domain kubernetes-upgrade-652205 in network mk-kubernetes-upgrade-652205
	I0703 23:54:31.465171   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | I0703 23:54:31.465077   51321 retry.go:31] will retry after 4.204473181s: waiting for machine to come up
	I0703 23:54:35.670788   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:35.671258   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Found IP for machine: 192.168.39.204
	I0703 23:54:35.671282   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Reserving static IP address...
	I0703 23:54:35.671293   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has current primary IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:35.671629   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-652205", mac: "52:54:00:32:75:17", ip: "192.168.39.204"} in network mk-kubernetes-upgrade-652205
	I0703 23:54:35.754065   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | Getting to WaitForSSH function...
	I0703 23:54:35.754089   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Reserved static IP address: 192.168.39.204
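The "Waiting to get IP" retries above poll the private network's DHCP leases for the domain's MAC address until a lease appears, then reserve it. A rough sketch of such a polling loop, assuming libvirt.org/go/libvirt (the backoff values are illustrative, not minikube's exact schedule):

package main

import (
	"fmt"
	"log"
	"strings"
	"time"

	libvirt "libvirt.org/go/libvirt"
)

// waitForIP polls the network's DHCP leases until the given MAC address has one,
// roughly mirroring the "will retry after ..." loop in the log.
func waitForIP(conn *libvirt.Connect, networkName, mac string, timeout time.Duration) (string, error) {
	nw, err := conn.LookupNetworkByName(networkName)
	if err != nil {
		return "", err
	}
	defer nw.Free()

	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		leases, err := nw.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, l := range leases {
			if strings.EqualFold(l.Mac, mac) && l.IPaddr != "" {
				return l.IPaddr, nil // e.g. 192.168.39.204
			}
		}
		time.Sleep(backoff)
		if backoff < 5*time.Second {
			backoff *= 2 // growing retry intervals, like the log's retry.go lines
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s in %s after %s", mac, networkName, timeout)
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ip, err := waitForIP(conn, "mk-kubernetes-upgrade-652205", "52:54:00:32:75:17", 2*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("found IP:", ip)
}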
	I0703 23:54:35.754101   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Waiting for SSH to be available...
	I0703 23:54:35.757082   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:35.757409   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205
	I0703 23:54:35.757438   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-652205 interface with MAC address 52:54:00:32:75:17
	I0703 23:54:35.757590   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | Using SSH client type: external
	I0703 23:54:35.757618   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205/id_rsa (-rw-------)
	I0703 23:54:35.757652   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 23:54:35.757673   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | About to run SSH command:
	I0703 23:54:35.757689   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | exit 0
	I0703 23:54:35.761748   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | SSH cmd err, output: exit status 255: 
	I0703 23:54:35.761770   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0703 23:54:35.761782   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | command : exit 0
	I0703 23:54:35.761794   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | err     : exit status 255
	I0703 23:54:35.761807   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | output  : 
	I0703 23:54:38.762613   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | Getting to WaitForSSH function...
	I0703 23:54:38.765283   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:38.765669   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:54:27 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0703 23:54:38.765702   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:38.765782   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | Using SSH client type: external
	I0703 23:54:38.765808   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205/id_rsa (-rw-------)
	I0703 23:54:38.765853   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.204 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 23:54:38.765867   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | About to run SSH command:
	I0703 23:54:38.765894   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | exit 0
	I0703 23:54:38.892228   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | SSH cmd err, output: <nil>: 
	I0703 23:54:38.892453   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) KVM machine creation complete!
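WaitForSSH above shells out to the system ssh client and keeps running `exit 0` against the new machine until it stops failing with status 255. A simplified stand-alone version of that probe (retry count and sleep interval are assumptions):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
	"time"
)

// waitForSSH runs `ssh ... exit 0` with the machine key until it succeeds.
func waitForSSH(ip, keyPath string, attempts int) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("ssh", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("ssh not ready: %v (%s)", err, bytes.TrimSpace(out))
		time.Sleep(3 * time.Second) // the log retries roughly every 3 seconds
	}
	return lastErr
}

func main() {
	key := "/home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205/id_rsa"
	if err := waitForSSH("192.168.39.204", key, 20); err != nil {
		log.Fatal(err)
	}
	fmt.Println("SSH is available")
}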
	I0703 23:54:38.892758   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetConfigRaw
	I0703 23:54:38.893361   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .DriverName
	I0703 23:54:38.893549   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .DriverName
	I0703 23:54:38.893692   51264 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0703 23:54:38.893707   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetState
	I0703 23:54:38.894900   51264 main.go:141] libmachine: Detecting operating system of created instance...
	I0703 23:54:38.894914   51264 main.go:141] libmachine: Waiting for SSH to be available...
	I0703 23:54:38.894919   51264 main.go:141] libmachine: Getting to WaitForSSH function...
	I0703 23:54:38.894927   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0703 23:54:38.897248   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:38.897592   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:54:27 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0703 23:54:38.897620   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:38.897789   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHPort
	I0703 23:54:38.897950   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0703 23:54:38.898106   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0703 23:54:38.898226   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHUsername
	I0703 23:54:38.898398   51264 main.go:141] libmachine: Using SSH client type: native
	I0703 23:54:38.898636   51264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0703 23:54:38.898649   51264 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0703 23:54:39.003639   51264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:54:39.003663   51264 main.go:141] libmachine: Detecting the provisioner...
	I0703 23:54:39.003671   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0703 23:54:39.006536   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:39.006890   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:54:27 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0703 23:54:39.006917   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:39.007071   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHPort
	I0703 23:54:39.007310   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0703 23:54:39.007511   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0703 23:54:39.007695   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHUsername
	I0703 23:54:39.007868   51264 main.go:141] libmachine: Using SSH client type: native
	I0703 23:54:39.008060   51264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0703 23:54:39.008074   51264 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0703 23:54:39.116770   51264 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0703 23:54:39.116835   51264 main.go:141] libmachine: found compatible host: buildroot
	I0703 23:54:39.116845   51264 main.go:141] libmachine: Provisioning with buildroot...
	I0703 23:54:39.116857   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetMachineName
	I0703 23:54:39.117126   51264 buildroot.go:166] provisioning hostname "kubernetes-upgrade-652205"
	I0703 23:54:39.117155   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetMachineName
	I0703 23:54:39.117351   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0703 23:54:39.119515   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:39.119833   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:54:27 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0703 23:54:39.119887   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:39.120013   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHPort
	I0703 23:54:39.120166   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0703 23:54:39.120351   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0703 23:54:39.120457   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHUsername
	I0703 23:54:39.120622   51264 main.go:141] libmachine: Using SSH client type: native
	I0703 23:54:39.120801   51264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0703 23:54:39.120817   51264 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-652205 && echo "kubernetes-upgrade-652205" | sudo tee /etc/hostname
	I0703 23:54:39.239158   51264 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-652205
	
	I0703 23:54:39.239188   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0703 23:54:39.241793   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:39.242171   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:54:27 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0703 23:54:39.242204   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:39.242332   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHPort
	I0703 23:54:39.242546   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0703 23:54:39.242678   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0703 23:54:39.242797   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHUsername
	I0703 23:54:39.242918   51264 main.go:141] libmachine: Using SSH client type: native
	I0703 23:54:39.243117   51264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0703 23:54:39.243143   51264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-652205' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-652205/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-652205' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 23:54:39.357449   51264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
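The provisioning steps above (setting the hostname, patching /etc/hosts) are shell snippets executed over SSH on the new machine. A small sketch of running one such command with a Go SSH client, assuming golang.org/x/crypto/ssh and the key path shown in the log:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.39.204:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// One of the provisioning commands from the log, run remotely with sudo.
	out, err := sess.CombinedOutput(`sudo hostname kubernetes-upgrade-652205 && echo "kubernetes-upgrade-652205" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatalf("provision hostname: %v", err)
	}
	fmt.Printf("%s", out)
}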
	I0703 23:54:39.357478   51264 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0703 23:54:39.357513   51264 buildroot.go:174] setting up certificates
	I0703 23:54:39.357527   51264 provision.go:84] configureAuth start
	I0703 23:54:39.357544   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetMachineName
	I0703 23:54:39.357847   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetIP
	I0703 23:54:39.360619   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:39.361014   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:54:27 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0703 23:54:39.361060   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:39.361261   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0703 23:54:39.363800   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:39.364151   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:54:27 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0703 23:54:39.364177   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:39.364309   51264 provision.go:143] copyHostCerts
	I0703 23:54:39.364388   51264 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0703 23:54:39.364401   51264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:54:39.364481   51264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0703 23:54:39.364625   51264 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0703 23:54:39.364636   51264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:54:39.364674   51264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0703 23:54:39.364769   51264 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0703 23:54:39.364780   51264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:54:39.364820   51264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0703 23:54:39.364906   51264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-652205 san=[127.0.0.1 192.168.39.204 kubernetes-upgrade-652205 localhost minikube]
	I0703 23:54:39.736393   51264 provision.go:177] copyRemoteCerts
	I0703 23:54:39.736450   51264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:39.736482   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0703 23:54:39.739274   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:39.739615   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:54:27 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0703 23:54:39.739651   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:39.739785   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHPort
	I0703 23:54:39.740024   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0703 23:54:39.740198   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHUsername
	I0703 23:54:39.740346   51264 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205/id_rsa Username:docker}
	I0703 23:54:39.827069   51264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0703 23:54:39.859093   51264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0703 23:54:39.884638   51264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0703 23:54:39.909194   51264 provision.go:87] duration metric: took 551.652473ms to configureAuth
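configureAuth above copies the host certificates and generates a server certificate whose SANs are 127.0.0.1, 192.168.39.204, kubernetes-upgrade-652205, localhost and minikube. A sketch of issuing such a certificate with crypto/x509, assuming an RSA PKCS#1 CA key and abbreviated paths:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func mustReadPEM(path string) *pem.Block {
	b, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(b)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return block
}

func main() {
	caCert, err := x509.ParseCertificate(mustReadPEM(".minikube/certs/ca.pem").Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Assumes an RSA PKCS#1 CA key; other key formats would need ParsePKCS8PrivateKey.
	caKey, err := x509.ParsePKCS1PrivateKey(mustReadPEM(".minikube/certs/ca-key.pem").Bytes)
	if err != nil {
		log.Fatal(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-652205"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: 127.0.0.1 192.168.39.204 kubernetes-upgrade-652205 localhost minikube
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.204")},
		DNSNames:    []string{"kubernetes-upgrade-652205", "localhost", "minikube"},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
}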
	I0703 23:54:39.909220   51264 buildroot.go:189] setting minikube options for container-runtime
	I0703 23:54:39.909400   51264 config.go:182] Loaded profile config "kubernetes-upgrade-652205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0703 23:54:39.909487   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0703 23:54:39.912387   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:39.912767   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:54:27 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0703 23:54:39.912794   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:39.912964   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHPort
	I0703 23:54:39.913190   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0703 23:54:39.913349   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0703 23:54:39.913484   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHUsername
	I0703 23:54:39.913669   51264 main.go:141] libmachine: Using SSH client type: native
	I0703 23:54:39.913823   51264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0703 23:54:39.913839   51264 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0703 23:54:40.188697   51264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0703 23:54:40.188746   51264 main.go:141] libmachine: Checking connection to Docker...
	I0703 23:54:40.188761   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetURL
	I0703 23:54:40.190164   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | Using libvirt version 6000000
	I0703 23:54:40.192623   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:40.193053   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:54:27 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0703 23:54:40.193085   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:40.193245   51264 main.go:141] libmachine: Docker is up and running!
	I0703 23:54:40.193259   51264 main.go:141] libmachine: Reticulating splines...
	I0703 23:54:40.193268   51264 client.go:171] duration metric: took 26.906052307s to LocalClient.Create
	I0703 23:54:40.193297   51264 start.go:167] duration metric: took 26.906124158s to libmachine.API.Create "kubernetes-upgrade-652205"
	I0703 23:54:40.193310   51264 start.go:293] postStartSetup for "kubernetes-upgrade-652205" (driver="kvm2")
	I0703 23:54:40.193324   51264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 23:54:40.193349   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .DriverName
	I0703 23:54:40.193594   51264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 23:54:40.193618   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0703 23:54:40.196171   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:40.196543   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:54:27 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0703 23:54:40.196573   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:40.196731   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHPort
	I0703 23:54:40.196942   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0703 23:54:40.197119   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHUsername
	I0703 23:54:40.197276   51264 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205/id_rsa Username:docker}
	I0703 23:54:40.283714   51264 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 23:54:40.288537   51264 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 23:54:40.288571   51264 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0703 23:54:40.288651   51264 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0703 23:54:40.288723   51264 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0703 23:54:40.288807   51264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0703 23:54:40.299681   51264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:54:40.325018   51264 start.go:296] duration metric: took 131.692337ms for postStartSetup
	I0703 23:54:40.325078   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetConfigRaw
	I0703 23:54:40.325666   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetIP
	I0703 23:54:40.328606   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:40.328941   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:54:27 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0703 23:54:40.328974   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:40.329173   51264 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/config.json ...
	I0703 23:54:40.329383   51264 start.go:128] duration metric: took 27.062331093s to createHost
	I0703 23:54:40.329407   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0703 23:54:40.331512   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:40.331832   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:54:27 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0703 23:54:40.331859   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:40.332064   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHPort
	I0703 23:54:40.332256   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0703 23:54:40.332414   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0703 23:54:40.332600   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHUsername
	I0703 23:54:40.332747   51264 main.go:141] libmachine: Using SSH client type: native
	I0703 23:54:40.332902   51264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0703 23:54:40.332910   51264 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0703 23:54:40.436873   51264 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720050880.393922258
	
	I0703 23:54:40.436895   51264 fix.go:216] guest clock: 1720050880.393922258
	I0703 23:54:40.436902   51264 fix.go:229] Guest: 2024-07-03 23:54:40.393922258 +0000 UTC Remote: 2024-07-03 23:54:40.329395497 +0000 UTC m=+27.185417656 (delta=64.526761ms)
	I0703 23:54:40.436938   51264 fix.go:200] guest clock delta is within tolerance: 64.526761ms
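The guest-clock check above compares the VM's `date +%s.%N` output against the host clock and accepts the machine when the delta (here 64.526761ms) is within tolerance. The arithmetic, reduced to a small sketch (the tolerance constant is illustrative, not minikube's configured value):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output into a time.Time.
func parseGuestClock(s string) time.Time {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		if len(frac) > 9 {
			frac = frac[:9]
		} else {
			// right-pad to 9 digits so "3939" means 393900000ns, not 3939ns
			frac += strings.Repeat("0", 9-len(frac))
		}
		nsec, _ = strconv.ParseInt(frac, 10, 64)
	}
	return time.Unix(sec, nsec)
}

func main() {
	guest := parseGuestClock("1720050880.393922258") // guest output from the log
	host := time.Date(2024, 7, 3, 23, 54, 40, 329395497, time.UTC) // host timestamp from the log

	delta := guest.Sub(host)
	const tolerance = 1 * time.Second // illustrative threshold
	fmt.Printf("delta=%v within tolerance: %v\n", delta, math.Abs(delta.Seconds()) < tolerance.Seconds())
}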
	I0703 23:54:40.436949   51264 start.go:83] releasing machines lock for "kubernetes-upgrade-652205", held for 27.169987253s
	I0703 23:54:40.436993   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .DriverName
	I0703 23:54:40.437310   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetIP
	I0703 23:54:40.440258   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:40.440703   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:54:27 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0703 23:54:40.440733   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:40.440940   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .DriverName
	I0703 23:54:40.441510   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .DriverName
	I0703 23:54:40.441702   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .DriverName
	I0703 23:54:40.441782   51264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 23:54:40.441825   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0703 23:54:40.442138   51264 ssh_runner.go:195] Run: cat /version.json
	I0703 23:54:40.442161   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0703 23:54:40.445001   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:40.445029   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:40.445418   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:54:27 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0703 23:54:40.445447   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:40.445482   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:54:27 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0703 23:54:40.445501   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:40.445584   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHPort
	I0703 23:54:40.445788   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0703 23:54:40.445797   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHPort
	I0703 23:54:40.445937   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0703 23:54:40.446004   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHUsername
	I0703 23:54:40.446085   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHUsername
	I0703 23:54:40.446157   51264 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205/id_rsa Username:docker}
	I0703 23:54:40.446234   51264 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205/id_rsa Username:docker}
	I0703 23:54:40.557125   51264 ssh_runner.go:195] Run: systemctl --version
	I0703 23:54:40.564069   51264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0703 23:54:40.742318   51264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0703 23:54:40.750393   51264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 23:54:40.750474   51264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 23:54:40.772140   51264 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0703 23:54:40.772164   51264 start.go:494] detecting cgroup driver to use...
	I0703 23:54:40.772233   51264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 23:54:40.796654   51264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 23:54:40.814648   51264 docker.go:217] disabling cri-docker service (if available) ...
	I0703 23:54:40.814700   51264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0703 23:54:40.831937   51264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0703 23:54:40.848795   51264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0703 23:54:40.995548   51264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0703 23:54:41.175484   51264 docker.go:233] disabling docker service ...
	I0703 23:54:41.175576   51264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0703 23:54:41.190545   51264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0703 23:54:41.204849   51264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0703 23:54:41.319213   51264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0703 23:54:41.437383   51264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0703 23:54:41.453743   51264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 23:54:41.473972   51264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0703 23:54:41.474043   51264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:54:41.485362   51264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0703 23:54:41.485427   51264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:54:41.497305   51264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:54:41.508934   51264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:54:41.520307   51264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 23:54:41.532151   51264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 23:54:41.542696   51264 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0703 23:54:41.542767   51264 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0703 23:54:41.557836   51264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0703 23:54:41.573090   51264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:54:41.700975   51264 ssh_runner.go:195] Run: sudo systemctl restart crio
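
The commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch cri-o to the cgroupfs driver, load br_netfilter, enable IP forwarding, and restart cri-o. A minimal Go sketch of that same sequence, assuming the commands are simply printed rather than executed over SSH as in the log:

package main

import "fmt"

// crioPrepCommands lists the runtime-preparation commands seen in the log, in order.
func crioPrepCommands(pauseImage, cgroupManager string) []string {
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, cgroupManager),
		"sudo modprobe br_netfilter",
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	// In the run above these execute on the guest over SSH; printing them here
	// is enough to show the order in which they are applied.
	for _, c := range crioPrepCommands("registry.k8s.io/pause:3.2", "cgroupfs") {
		fmt.Println(c)
	}
}
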
	I0703 23:54:41.845362   51264 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0703 23:54:41.845441   51264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0703 23:54:41.850708   51264 start.go:562] Will wait 60s for crictl version
	I0703 23:54:41.850773   51264 ssh_runner.go:195] Run: which crictl
	I0703 23:54:41.854934   51264 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 23:54:41.899980   51264 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0703 23:54:41.900071   51264 ssh_runner.go:195] Run: crio --version
	I0703 23:54:41.931837   51264 ssh_runner.go:195] Run: crio --version
	I0703 23:54:41.976094   51264 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0703 23:54:41.977574   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetIP
	I0703 23:54:41.980845   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:41.981341   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:54:27 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0703 23:54:41.981377   51264 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0703 23:54:41.981630   51264 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0703 23:54:41.986364   51264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:54:42.001119   51264 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-652205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-652205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0703 23:54:42.001239   51264 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0703 23:54:42.001301   51264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:54:42.040960   51264 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0703 23:54:42.041039   51264 ssh_runner.go:195] Run: which lz4
	I0703 23:54:42.047514   51264 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0703 23:54:42.052128   51264 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0703 23:54:42.052168   51264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0703 23:54:43.888038   51264 crio.go:462] duration metric: took 1.840567712s to copy over tarball
	I0703 23:54:43.888117   51264 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0703 23:54:46.557558   51264 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.669406956s)
	I0703 23:54:46.557596   51264 crio.go:469] duration metric: took 2.669529274s to extract the tarball
	I0703 23:54:46.557605   51264 ssh_runner.go:146] rm: /preloaded.tar.lz4
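
The preload tarball of cached images is copied to /preloaded.tar.lz4 (it was absent, per the failed stat above), unpacked into /var with security.capability xattrs preserved, and then removed. A small Go sketch of that check-then-extract flow, assuming a local stand-in for the SSH-executed commands:

package main

import (
	"fmt"
	"os"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	// Mirrors the log's "stat -c ..." existence check: if the tarball is absent
	// it has to be copied to the node before extraction.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Printf("preload missing, copy it first: %v\n", err)
	}
	// Extraction keeps security.capability xattrs so unpacked binaries retain
	// their file capabilities under /var.
	fmt.Println("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + tarball)
	// Afterwards the tarball is removed, as in the log.
	fmt.Println("rm " + tarball)
}
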
	I0703 23:54:46.601982   51264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:54:46.650910   51264 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0703 23:54:46.650933   51264 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0703 23:54:46.650973   51264 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0703 23:54:46.651007   51264 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0703 23:54:46.651036   51264 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0703 23:54:46.651069   51264 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0703 23:54:46.651100   51264 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0703 23:54:46.651142   51264 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0703 23:54:46.651181   51264 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0703 23:54:46.651197   51264 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0703 23:54:46.652490   51264 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0703 23:54:46.652490   51264 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0703 23:54:46.652511   51264 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0703 23:54:46.652514   51264 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0703 23:54:46.652515   51264 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0703 23:54:46.652515   51264 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0703 23:54:46.652548   51264 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0703 23:54:46.652863   51264 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0703 23:54:46.817922   51264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0703 23:54:46.832026   51264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0703 23:54:46.844141   51264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0703 23:54:46.865217   51264 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0703 23:54:46.865256   51264 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0703 23:54:46.865295   51264 ssh_runner.go:195] Run: which crictl
	I0703 23:54:46.904135   51264 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0703 23:54:46.904177   51264 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0703 23:54:46.904245   51264 ssh_runner.go:195] Run: which crictl
	I0703 23:54:46.923782   51264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0703 23:54:46.923786   51264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0703 23:54:46.923938   51264 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0703 23:54:46.923981   51264 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0703 23:54:46.924021   51264 ssh_runner.go:195] Run: which crictl
	I0703 23:54:46.941424   51264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0703 23:54:46.981378   51264 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0703 23:54:46.982728   51264 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0703 23:54:46.982838   51264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0703 23:54:47.015588   51264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0703 23:54:47.018258   51264 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0703 23:54:47.018305   51264 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0703 23:54:47.018363   51264 ssh_runner.go:195] Run: which crictl
	I0703 23:54:47.020061   51264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0703 23:54:47.028509   51264 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0703 23:54:47.076612   51264 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0703 23:54:47.076704   51264 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0703 23:54:47.076741   51264 ssh_runner.go:195] Run: which crictl
	I0703 23:54:47.076662   51264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0703 23:54:47.082695   51264 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0703 23:54:47.082736   51264 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0703 23:54:47.082779   51264 ssh_runner.go:195] Run: which crictl
	I0703 23:54:47.086398   51264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0703 23:54:47.090698   51264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0703 23:54:47.128306   51264 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0703 23:54:47.128347   51264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0703 23:54:47.165975   51264 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0703 23:54:47.184094   51264 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0703 23:54:47.184142   51264 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0703 23:54:47.184197   51264 ssh_runner.go:195] Run: which crictl
	I0703 23:54:47.187014   51264 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0703 23:54:47.189002   51264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0703 23:54:47.226473   51264 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0703 23:54:47.598735   51264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0703 23:54:47.742108   51264 cache_images.go:92] duration metric: took 1.091153981s to LoadCachedImages
	W0703 23:54:47.742198   51264 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
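
Every v1.20.0 image is reported as needing transfer, and the load then fails because the expected files under .minikube/cache/images/amd64 do not exist on the build host; the start continues and kubeadm pulls images itself. A Go sketch, assuming the underscore-for-colon cache naming visible in the paths above, that scans for the same cache misses:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Paths follow the cache layout printed in the log above.
	cacheDir := "/home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64"
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/kube-controller-manager:v1.20.0",
		"registry.k8s.io/kube-scheduler:v1.20.0",
		"registry.k8s.io/kube-proxy:v1.20.0",
		"registry.k8s.io/etcd:3.4.13-0",
		"registry.k8s.io/coredns:1.7.0",
		"registry.k8s.io/pause:3.2",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	for _, img := range images {
		// Cache files replace the ":" before the tag with "_", as in
		// kube-controller-manager_v1.20.0 in the error above.
		p := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
		if _, err := os.Stat(p); err != nil {
			fmt.Println("cache miss:", p)
		}
	}
}
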
	I0703 23:54:47.742214   51264 kubeadm.go:928] updating node { 192.168.39.204 8443 v1.20.0 crio true true} ...
	I0703 23:54:47.742340   51264 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-652205 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.204
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-652205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
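
The kubelet unit fragment above is what gets written a few lines later as the systemd drop-in 10-kubeadm.conf (433 bytes). A Go sketch, purely illustrative, that assembles those same ExecStart flags from the node name, IP, and Kubernetes version:

package main

import (
	"fmt"
	"strings"
)

// kubeletDropIn assembles the ExecStart flags echoed in the log above; the result
// is what lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
func kubeletDropIn(version, nodeName, nodeIP string) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--container-runtime=remote",
		"--container-runtime-endpoint=unix:///var/run/crio/crio.sock",
		"--hostname-override=" + nodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--network-plugin=cni",
		"--node-ip=" + nodeIP,
	}
	return fmt.Sprintf(
		"[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/%s/kubelet %s\n\n[Install]\n",
		version, strings.Join(flags, " "),
	)
}

func main() {
	fmt.Print(kubeletDropIn("v1.20.0", "kubernetes-upgrade-652205", "192.168.39.204"))
}
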
	I0703 23:54:47.742426   51264 ssh_runner.go:195] Run: crio config
	I0703 23:54:47.790689   51264 cni.go:84] Creating CNI manager for ""
	I0703 23:54:47.790721   51264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 23:54:47.790737   51264 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0703 23:54:47.790760   51264 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.204 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-652205 NodeName:kubernetes-upgrade-652205 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.204"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0703 23:54:47.790897   51264 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.204
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-652205"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.204
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.204"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0703 23:54:47.790956   51264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0703 23:54:47.801653   51264 binaries.go:44] Found k8s binaries, skipping transfer
	I0703 23:54:47.801731   51264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0703 23:54:47.811911   51264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0703 23:54:47.830319   51264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 23:54:47.849087   51264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
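
The kubeadm configuration echoed above is written to /var/tmp/minikube/kubeadm.yaml.new and later fed to kubeadm init (see the command further down). A Go sketch, with an abridged ignore-preflight-errors list taken from that command, showing how the invocation is composed:

package main

import (
	"fmt"
	"strings"
)

// kubeadmInitCmd composes the init invocation used later in the log; the ignore
// list here is abridged, the real one also skips several FileAvailable-- and
// DirAvailable-- checks.
func kubeadmInitCmd(binDir, configPath string, ignore []string) string {
	return fmt.Sprintf(
		`sudo env PATH="%s:$PATH" kubeadm init --config %s --ignore-preflight-errors=%s`,
		binDir, configPath, strings.Join(ignore, ","),
	)
}

func main() {
	fmt.Println(kubeadmInitCmd(
		"/var/lib/minikube/binaries/v1.20.0",
		"/var/tmp/minikube/kubeadm.yaml",
		[]string{"Port-10250", "Swap", "NumCPU", "Mem"},
	))
}
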
	I0703 23:54:47.869471   51264 ssh_runner.go:195] Run: grep 192.168.39.204	control-plane.minikube.internal$ /etc/hosts
	I0703 23:54:47.873976   51264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.204	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:54:47.887780   51264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:54:48.019626   51264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:54:48.037616   51264 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205 for IP: 192.168.39.204
	I0703 23:54:48.037643   51264 certs.go:194] generating shared ca certs ...
	I0703 23:54:48.037663   51264 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:54:48.037837   51264 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0703 23:54:48.037908   51264 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0703 23:54:48.037922   51264 certs.go:256] generating profile certs ...
	I0703 23:54:48.037995   51264 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/client.key
	I0703 23:54:48.038021   51264 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/client.crt with IP's: []
	I0703 23:54:48.236766   51264 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/client.crt ...
	I0703 23:54:48.236795   51264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/client.crt: {Name:mk2f9bf7f3f415d3c6f2bd01a3391249a8688ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:54:48.237007   51264 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/client.key ...
	I0703 23:54:48.237027   51264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/client.key: {Name:mk02a47ea9e8bf9765701ffb4f19355ec7850a4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:54:48.237142   51264 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/apiserver.key.fbd595f4
	I0703 23:54:48.237168   51264 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/apiserver.crt.fbd595f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.204]
	I0703 23:54:48.433130   51264 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/apiserver.crt.fbd595f4 ...
	I0703 23:54:48.433161   51264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/apiserver.crt.fbd595f4: {Name:mk5448ab08ba453260e54e8bb2c0d8d4d97a5ec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:54:48.433347   51264 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/apiserver.key.fbd595f4 ...
	I0703 23:54:48.433365   51264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/apiserver.key.fbd595f4: {Name:mk468b7d5601b5fb67379402809319ab1eeb11d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:54:48.433464   51264 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/apiserver.crt.fbd595f4 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/apiserver.crt
	I0703 23:54:48.433536   51264 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/apiserver.key.fbd595f4 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/apiserver.key
	I0703 23:54:48.433588   51264 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/proxy-client.key
	I0703 23:54:48.433603   51264 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/proxy-client.crt with IP's: []
	I0703 23:54:48.564166   51264 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/proxy-client.crt ...
	I0703 23:54:48.564196   51264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/proxy-client.crt: {Name:mk6b493b6e7be002dc7fd6aa60a01165dec3704d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:54:48.564367   51264 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/proxy-client.key ...
	I0703 23:54:48.564383   51264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/proxy-client.key: {Name:mkda0fc7629900ef517b6d87dfbcd63887c0bee6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:54:48.564574   51264 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0703 23:54:48.564612   51264 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0703 23:54:48.564622   51264 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0703 23:54:48.564641   51264 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0703 23:54:48.564662   51264 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0703 23:54:48.564681   51264 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0703 23:54:48.564718   51264 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:54:48.565240   51264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 23:54:48.593864   51264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 23:54:48.621798   51264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 23:54:48.649814   51264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0703 23:54:48.677506   51264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0703 23:54:48.706075   51264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0703 23:54:48.734185   51264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 23:54:48.760509   51264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0703 23:54:48.788252   51264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 23:54:48.816579   51264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0703 23:54:48.844332   51264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0703 23:54:48.880383   51264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0703 23:54:48.899609   51264 ssh_runner.go:195] Run: openssl version
	I0703 23:54:48.906646   51264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 23:54:48.925013   51264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:54:48.930521   51264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:54:48.930598   51264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:54:48.937306   51264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0703 23:54:48.950236   51264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0703 23:54:48.964374   51264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0703 23:54:48.970098   51264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0703 23:54:48.970189   51264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0703 23:54:48.980263   51264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0703 23:54:48.992614   51264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0703 23:54:49.006158   51264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0703 23:54:49.011474   51264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0703 23:54:49.011536   51264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0703 23:54:49.017529   51264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
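
Each CA is hashed with openssl x509 -hash and then symlinked into /etc/ssl/certs under that hash (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients on the node locate the certificates. A Go sketch of those two commands, with HASH as a placeholder for the value printed by the first one:

package main

import "fmt"

// installCACommands shows the hash-and-symlink pair from the log above.
func installCACommands(src, installed string) []string {
	return []string{
		// Print the subject hash that OpenSSL expects the symlink to be named after.
		fmt.Sprintf("openssl x509 -hash -noout -in %s", src),
		// In the log the hash for minikubeCA.pem is b5213941, giving
		// /etc/ssl/certs/b5213941.0; HASH below stands in for that value.
		fmt.Sprintf("sudo ln -fs %s /etc/ssl/certs/HASH.0", installed),
	}
}

func main() {
	for _, c := range installCACommands(
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/etc/ssl/certs/minikubeCA.pem",
	) {
		fmt.Println(c)
	}
}
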
	I0703 23:54:49.028987   51264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 23:54:49.033413   51264 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0703 23:54:49.033468   51264 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-652205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-652205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:54:49.033531   51264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0703 23:54:49.033603   51264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0703 23:54:49.073914   51264 cri.go:89] found id: ""
	I0703 23:54:49.073983   51264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0703 23:54:49.084362   51264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0703 23:54:49.094628   51264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0703 23:54:49.104909   51264 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0703 23:54:49.104927   51264 kubeadm.go:156] found existing configuration files:
	
	I0703 23:54:49.104984   51264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0703 23:54:49.114536   51264 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0703 23:54:49.114603   51264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0703 23:54:49.124508   51264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0703 23:54:49.134108   51264 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0703 23:54:49.134179   51264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0703 23:54:49.145082   51264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0703 23:54:49.154681   51264 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0703 23:54:49.154748   51264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0703 23:54:49.165320   51264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0703 23:54:49.175606   51264 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0703 23:54:49.175679   51264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0703 23:54:49.186841   51264 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0703 23:54:49.305911   51264 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0703 23:54:49.305990   51264 kubeadm.go:309] [preflight] Running pre-flight checks
	I0703 23:54:49.472992   51264 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0703 23:54:49.473153   51264 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0703 23:54:49.473303   51264 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0703 23:54:49.670249   51264 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0703 23:54:49.672358   51264 out.go:204]   - Generating certificates and keys ...
	I0703 23:54:49.672484   51264 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0703 23:54:49.672603   51264 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0703 23:54:49.929393   51264 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0703 23:54:50.339130   51264 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0703 23:54:50.770479   51264 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0703 23:54:50.923960   51264 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0703 23:54:50.999789   51264 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0703 23:54:51.000038   51264 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-652205 localhost] and IPs [192.168.39.204 127.0.0.1 ::1]
	I0703 23:54:51.266699   51264 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0703 23:54:51.266885   51264 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-652205 localhost] and IPs [192.168.39.204 127.0.0.1 ::1]
	I0703 23:54:51.423571   51264 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0703 23:54:51.636962   51264 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0703 23:54:51.828712   51264 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0703 23:54:51.828982   51264 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0703 23:54:51.960944   51264 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0703 23:54:52.503785   51264 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0703 23:54:52.754081   51264 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0703 23:54:52.881060   51264 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0703 23:54:52.900300   51264 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0703 23:54:52.901955   51264 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0703 23:54:52.902049   51264 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0703 23:54:53.027433   51264 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0703 23:54:53.029200   51264 out.go:204]   - Booting up control plane ...
	I0703 23:54:53.029296   51264 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0703 23:54:53.031141   51264 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0703 23:54:53.033041   51264 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0703 23:54:53.033958   51264 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0703 23:54:53.038149   51264 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0703 23:55:33.000128   51264 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0703 23:55:33.000849   51264 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0703 23:55:33.001079   51264 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0703 23:55:38.000485   51264 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0703 23:55:38.000765   51264 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0703 23:55:47.999988   51264 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0703 23:55:48.000240   51264 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0703 23:56:08.000158   51264 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0703 23:56:08.000449   51264 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0703 23:56:47.999243   51264 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0703 23:56:47.999450   51264 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0703 23:56:47.999469   51264 kubeadm.go:309] 
	I0703 23:56:47.999507   51264 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0703 23:56:47.999540   51264 kubeadm.go:309] 		timed out waiting for the condition
	I0703 23:56:47.999544   51264 kubeadm.go:309] 
	I0703 23:56:47.999574   51264 kubeadm.go:309] 	This error is likely caused by:
	I0703 23:56:47.999601   51264 kubeadm.go:309] 		- The kubelet is not running
	I0703 23:56:47.999757   51264 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0703 23:56:47.999773   51264 kubeadm.go:309] 
	I0703 23:56:47.999932   51264 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0703 23:56:47.999963   51264 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0703 23:56:47.999991   51264 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0703 23:56:47.999998   51264 kubeadm.go:309] 
	I0703 23:56:48.000166   51264 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0703 23:56:48.000302   51264 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0703 23:56:48.000318   51264 kubeadm.go:309] 
	I0703 23:56:48.000452   51264 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0703 23:56:48.000580   51264 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0703 23:56:48.000684   51264 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0703 23:56:48.000778   51264 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0703 23:56:48.000790   51264 kubeadm.go:309] 
	I0703 23:56:48.000964   51264 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0703 23:56:48.001080   51264 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0703 23:56:48.001186   51264 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
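
kubeadm's wait-control-plane phase repeatedly probes the kubelet's health endpoint at http://localhost:10248/healthz, and every probe above is refused, so the init run eventually gives up with "timed out waiting for the condition". A Go sketch of that probe, which could be run on the node to confirm whether the kubelet is listening at all:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// "connection refused" here means the kubelet process is not listening,
	// not merely reporting itself unhealthy.
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		fmt.Println("kubelet healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %s %s\n", resp.Status, body)
}
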
	W0703 23:56:48.001363   51264 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-652205 localhost] and IPs [192.168.39.204 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-652205 localhost] and IPs [192.168.39.204 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-652205 localhost] and IPs [192.168.39.204 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-652205 localhost] and IPs [192.168.39.204 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0703 23:56:48.001425   51264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0703 23:56:50.177329   51264 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.175874264s)
	I0703 23:56:50.177412   51264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:56:50.194796   51264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0703 23:56:50.207409   51264 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0703 23:56:50.207435   51264 kubeadm.go:156] found existing configuration files:
	
	I0703 23:56:50.207482   51264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0703 23:56:50.217975   51264 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0703 23:56:50.218071   51264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0703 23:56:50.228864   51264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0703 23:56:50.238561   51264 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0703 23:56:50.238630   51264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0703 23:56:50.248764   51264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0703 23:56:50.258645   51264 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0703 23:56:50.258709   51264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0703 23:56:50.269017   51264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0703 23:56:50.280507   51264 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0703 23:56:50.280585   51264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0703 23:56:50.293396   51264 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0703 23:56:50.375702   51264 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0703 23:56:50.375776   51264 kubeadm.go:309] [preflight] Running pre-flight checks
	I0703 23:56:50.532138   51264 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0703 23:56:50.532300   51264 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0703 23:56:50.532436   51264 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0703 23:56:50.736218   51264 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0703 23:56:50.738243   51264 out.go:204]   - Generating certificates and keys ...
	I0703 23:56:50.738353   51264 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0703 23:56:50.738452   51264 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0703 23:56:50.738573   51264 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0703 23:56:50.738667   51264 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0703 23:56:50.738767   51264 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0703 23:56:50.738864   51264 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0703 23:56:50.738952   51264 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0703 23:56:50.739053   51264 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0703 23:56:50.739181   51264 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0703 23:56:50.739317   51264 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0703 23:56:50.739385   51264 kubeadm.go:309] [certs] Using the existing "sa" key
	I0703 23:56:50.739471   51264 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0703 23:56:50.910869   51264 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0703 23:56:51.122399   51264 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0703 23:56:51.327346   51264 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0703 23:56:51.673481   51264 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0703 23:56:51.690476   51264 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0703 23:56:51.691701   51264 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0703 23:56:51.691780   51264 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0703 23:56:51.844614   51264 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0703 23:56:51.846867   51264 out.go:204]   - Booting up control plane ...
	I0703 23:56:51.847000   51264 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0703 23:56:51.858955   51264 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0703 23:56:51.861510   51264 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0703 23:56:51.862787   51264 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0703 23:56:51.866038   51264 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0703 23:57:31.867982   51264 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0703 23:57:31.868111   51264 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0703 23:57:31.868317   51264 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0703 23:57:36.869363   51264 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0703 23:57:36.869687   51264 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0703 23:57:46.870282   51264 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0703 23:57:46.870514   51264 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0703 23:58:06.872101   51264 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0703 23:58:06.872383   51264 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0703 23:58:46.871343   51264 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0703 23:58:46.871652   51264 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0703 23:58:46.871674   51264 kubeadm.go:309] 
	I0703 23:58:46.871729   51264 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0703 23:58:46.871774   51264 kubeadm.go:309] 		timed out waiting for the condition
	I0703 23:58:46.871780   51264 kubeadm.go:309] 
	I0703 23:58:46.871827   51264 kubeadm.go:309] 	This error is likely caused by:
	I0703 23:58:46.871869   51264 kubeadm.go:309] 		- The kubelet is not running
	I0703 23:58:46.872063   51264 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0703 23:58:46.872086   51264 kubeadm.go:309] 
	I0703 23:58:46.872244   51264 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0703 23:58:46.872292   51264 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0703 23:58:46.872343   51264 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0703 23:58:46.872353   51264 kubeadm.go:309] 
	I0703 23:58:46.872505   51264 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0703 23:58:46.872615   51264 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0703 23:58:46.872625   51264 kubeadm.go:309] 
	I0703 23:58:46.872769   51264 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0703 23:58:46.872880   51264 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0703 23:58:46.872983   51264 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0703 23:58:46.873073   51264 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0703 23:58:46.873088   51264 kubeadm.go:309] 
	I0703 23:58:46.874207   51264 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0703 23:58:46.874321   51264 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0703 23:58:46.874433   51264 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0703 23:58:46.874541   51264 kubeadm.go:393] duration metric: took 3m57.841075289s to StartCluster
	I0703 23:58:46.874593   51264 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0703 23:58:46.874662   51264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0703 23:58:46.935731   51264 cri.go:89] found id: ""
	I0703 23:58:46.935759   51264 logs.go:276] 0 containers: []
	W0703 23:58:46.935770   51264 logs.go:278] No container was found matching "kube-apiserver"
	I0703 23:58:46.935778   51264 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0703 23:58:46.935841   51264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0703 23:58:46.978002   51264 cri.go:89] found id: ""
	I0703 23:58:46.978031   51264 logs.go:276] 0 containers: []
	W0703 23:58:46.978041   51264 logs.go:278] No container was found matching "etcd"
	I0703 23:58:46.978049   51264 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0703 23:58:46.978109   51264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0703 23:58:47.015123   51264 cri.go:89] found id: ""
	I0703 23:58:47.015154   51264 logs.go:276] 0 containers: []
	W0703 23:58:47.015166   51264 logs.go:278] No container was found matching "coredns"
	I0703 23:58:47.015173   51264 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0703 23:58:47.015237   51264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0703 23:58:47.054354   51264 cri.go:89] found id: ""
	I0703 23:58:47.054383   51264 logs.go:276] 0 containers: []
	W0703 23:58:47.054395   51264 logs.go:278] No container was found matching "kube-scheduler"
	I0703 23:58:47.054402   51264 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0703 23:58:47.054470   51264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0703 23:58:47.098070   51264 cri.go:89] found id: ""
	I0703 23:58:47.098103   51264 logs.go:276] 0 containers: []
	W0703 23:58:47.098114   51264 logs.go:278] No container was found matching "kube-proxy"
	I0703 23:58:47.098127   51264 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0703 23:58:47.098194   51264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0703 23:58:47.148568   51264 cri.go:89] found id: ""
	I0703 23:58:47.148602   51264 logs.go:276] 0 containers: []
	W0703 23:58:47.148613   51264 logs.go:278] No container was found matching "kube-controller-manager"
	I0703 23:58:47.148621   51264 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0703 23:58:47.148689   51264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0703 23:58:47.192237   51264 cri.go:89] found id: ""
	I0703 23:58:47.192267   51264 logs.go:276] 0 containers: []
	W0703 23:58:47.192277   51264 logs.go:278] No container was found matching "kindnet"
	I0703 23:58:47.192288   51264 logs.go:123] Gathering logs for dmesg ...
	I0703 23:58:47.192304   51264 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0703 23:58:47.208752   51264 logs.go:123] Gathering logs for describe nodes ...
	I0703 23:58:47.208781   51264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0703 23:58:47.336222   51264 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0703 23:58:47.336247   51264 logs.go:123] Gathering logs for CRI-O ...
	I0703 23:58:47.336263   51264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0703 23:58:47.439079   51264 logs.go:123] Gathering logs for container status ...
	I0703 23:58:47.439119   51264 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0703 23:58:47.486358   51264 logs.go:123] Gathering logs for kubelet ...
	I0703 23:58:47.486388   51264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0703 23:58:47.544269   51264 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0703 23:58:47.544317   51264 out.go:239] * 
	* 
	W0703 23:58:47.544374   51264 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0703 23:58:47.544396   51264 out.go:239] * 
	* 
	W0703 23:58:47.545209   51264 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0703 23:58:47.548339   51264 out.go:177] 
	W0703 23:58:47.549481   51264 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0703 23:58:47.549526   51264 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0703 23:58:47.549557   51264 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0703 23:58:47.550857   51264 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-652205 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
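For reference, the remediation suggested in the captured output above ("try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start") would look roughly as follows when combined with the flags this test already uses; this is a hedged sketch for readers reproducing the failure, not a command that was executed during this run:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-652205 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

Whether this clears the wait-control-plane timeout depends on the node's actual cgroup configuration; the related issue linked in the log (https://github.com/kubernetes/minikube/issues/4172) tracks the same symptom.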
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-652205
I0703 23:58:48.175034   16574 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0703 23:58:48.175156   16574 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0703 23:58:48.203281   16574 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0703 23:58:48.203316   16574 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0703 23:58:48.203390   16574 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0703 23:58:48.203428   16574 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3431246110/002/docker-machine-driver-kvm2
I0703 23:58:48.259699   16574 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3431246110/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x437dfc0 0x437dfc0 0x437dfc0 0x437dfc0 0x437dfc0 0x437dfc0 0x437dfc0] Decompressors:map[bz2:0xc0004cf300 gz:0xc0004cf308 tar:0xc0004cf2b0 tar.bz2:0xc0004cf2c0 tar.gz:0xc0004cf2d0 tar.xz:0xc0004cf2e0 tar.zst:0xc0004cf2f0 tbz2:0xc0004cf2c0 tgz:0xc0004cf2d0 txz:0xc0004cf2e0 tzst:0xc0004cf2f0 xz:0xc0004cf310 zip:0xc0004cf320 zst:0xc0004cf318] Getters:map[file:0xc0006328b0 http:0xc000b84320 https:0xc000b84370] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0703 23:58:48.259750   16574 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3431246110/002/docker-machine-driver-kvm2
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-652205: (2.52623249s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-652205 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-652205 status --format={{.Host}}: exit status 7 (76.464517ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-652205 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0703 23:58:57.357542   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-652205 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m11.304775015s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-652205 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-652205 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-652205 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (78.909956ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-652205] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18998
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-652205
	    minikube start -p kubernetes-upgrade-652205 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6522052 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.2, by running:
	    
	    minikube start -p kubernetes-upgrade-652205 --kubernetes-version=v1.30.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-652205 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-652205 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m26.790111139s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-04 00:01:28.44453311 +0000 UTC m=+4481.377770874
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-652205 -n kubernetes-upgrade-652205
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-652205 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-652205 logs -n 25: (2.144639556s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-676605 sudo                  | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo cat              | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo cat              | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                  | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                  | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                  | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo find             | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo crio             | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-676605                       | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC | 03 Jul 24 23:58 UTC |
	| start   | -p cert-expiration-979438              | cert-expiration-979438    | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC | 03 Jul 24 23:59 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-175902            | force-systemd-env-175902  | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC | 03 Jul 24 23:58 UTC |
	| stop    | -p kubernetes-upgrade-652205           | kubernetes-upgrade-652205 | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC | 03 Jul 24 23:58 UTC |
	| start   | -p force-systemd-flag-163167           | force-systemd-flag-163167 | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC | 03 Jul 24 23:59 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-652205           | kubernetes-upgrade-652205 | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC | 04 Jul 24 00:00 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p pause-672261                        | pause-672261              | jenkins | v1.33.1 | 03 Jul 24 23:59 UTC | 03 Jul 24 23:59 UTC |
	| start   | -p cert-options-768841                 | cert-options-768841       | jenkins | v1.33.1 | 03 Jul 24 23:59 UTC | 04 Jul 24 00:00 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-163167 ssh cat      | force-systemd-flag-163167 | jenkins | v1.33.1 | 03 Jul 24 23:59 UTC | 03 Jul 24 23:59 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-163167           | force-systemd-flag-163167 | jenkins | v1.33.1 | 03 Jul 24 23:59 UTC | 03 Jul 24 23:59 UTC |
	| start   | -p old-k8s-version-979033              | old-k8s-version-979033    | jenkins | v1.33.1 | 03 Jul 24 23:59 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-652205           | kubernetes-upgrade-652205 | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-652205           | kubernetes-upgrade-652205 | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:01 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-768841 ssh                | cert-options-768841       | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:00 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-768841 -- sudo         | cert-options-768841       | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:00 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-768841                 | cert-options-768841       | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:00 UTC |
	| start   | -p no-preload-317739                   | no-preload-317739         | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2           |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/04 00:00:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0704 00:00:34.401630   59355 out.go:291] Setting OutFile to fd 1 ...
	I0704 00:00:34.401889   59355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:00:34.401901   59355 out.go:304] Setting ErrFile to fd 2...
	I0704 00:00:34.401907   59355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:00:34.402156   59355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0704 00:00:34.402768   59355 out.go:298] Setting JSON to false
	I0704 00:00:34.403764   59355 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6174,"bootTime":1720045060,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0704 00:00:34.403833   59355 start.go:139] virtualization: kvm guest
	I0704 00:00:34.406524   59355 out.go:177] * [no-preload-317739] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0704 00:00:34.408016   59355 out.go:177]   - MINIKUBE_LOCATION=18998
	I0704 00:00:34.408017   59355 notify.go:220] Checking for updates...
	I0704 00:00:34.409552   59355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0704 00:00:34.411075   59355 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:00:34.412606   59355 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0704 00:00:34.414116   59355 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0704 00:00:34.415636   59355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0704 00:00:34.417507   59355 config.go:182] Loaded profile config "cert-expiration-979438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:00:34.417601   59355 config.go:182] Loaded profile config "kubernetes-upgrade-652205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:00:34.417684   59355 config.go:182] Loaded profile config "old-k8s-version-979033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0704 00:00:34.417767   59355 driver.go:392] Setting default libvirt URI to qemu:///system
	I0704 00:00:34.458545   59355 out.go:177] * Using the kvm2 driver based on user configuration
	I0704 00:00:34.460121   59355 start.go:297] selected driver: kvm2
	I0704 00:00:34.460145   59355 start.go:901] validating driver "kvm2" against <nil>
	I0704 00:00:34.460162   59355 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0704 00:00:34.461215   59355 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:00:34.461326   59355 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0704 00:00:34.479127   59355 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0704 00:00:34.479175   59355 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0704 00:00:34.479405   59355 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:00:34.479478   59355 cni.go:84] Creating CNI manager for ""
	I0704 00:00:34.479496   59355 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:00:34.479511   59355 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0704 00:00:34.479605   59355 start.go:340] cluster config:
	{Name:no-preload-317739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:00:34.479711   59355 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:00:34.481670   59355 out.go:177] * Starting "no-preload-317739" primary control-plane node in "no-preload-317739" cluster
	I0704 00:00:35.935999   58854 start.go:364] duration metric: took 34.135918161s to acquireMachinesLock for "kubernetes-upgrade-652205"
	I0704 00:00:35.936052   58854 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:00:35.936074   58854 fix.go:54] fixHost starting: 
	I0704 00:00:35.936514   58854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:00:35.936561   58854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:00:35.957492   58854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32815
	I0704 00:00:35.957927   58854 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:00:35.958882   58854 main.go:141] libmachine: Using API Version  1
	I0704 00:00:35.959039   58854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:00:35.959618   58854 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:00:35.959993   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .DriverName
	I0704 00:00:35.960158   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetState
	I0704 00:00:35.962012   58854 fix.go:112] recreateIfNeeded on kubernetes-upgrade-652205: state=Running err=<nil>
	W0704 00:00:35.962056   58854 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:00:35.964661   58854 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-652205" VM ...
	I0704 00:00:34.193082   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.193582   58668 main.go:141] libmachine: (old-k8s-version-979033) Found IP for machine: 192.168.72.59
	I0704 00:00:34.193603   58668 main.go:141] libmachine: (old-k8s-version-979033) Reserving static IP address...
	I0704 00:00:34.193615   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has current primary IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.194129   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-979033", mac: "52:54:00:af:98:c8", ip: "192.168.72.59"} in network mk-old-k8s-version-979033
	I0704 00:00:34.287211   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | Getting to WaitForSSH function...
	I0704 00:00:34.287243   58668 main.go:141] libmachine: (old-k8s-version-979033) Reserved static IP address: 192.168.72.59
	I0704 00:00:34.287256   58668 main.go:141] libmachine: (old-k8s-version-979033) Waiting for SSH to be available...
	I0704 00:00:34.290658   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.291202   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:minikube Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:34.291237   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.291488   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using SSH client type: external
	I0704 00:00:34.291517   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa (-rw-------)
	I0704 00:00:34.291544   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:00:34.291558   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | About to run SSH command:
	I0704 00:00:34.291576   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | exit 0
	I0704 00:00:34.424824   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | SSH cmd err, output: <nil>: 
	I0704 00:00:34.425131   58668 main.go:141] libmachine: (old-k8s-version-979033) KVM machine creation complete!
	I0704 00:00:34.425493   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetConfigRaw
	I0704 00:00:34.426009   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:00:34.426260   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:00:34.426445   58668 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0704 00:00:34.426474   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetState
	I0704 00:00:34.428029   58668 main.go:141] libmachine: Detecting operating system of created instance...
	I0704 00:00:34.428044   58668 main.go:141] libmachine: Waiting for SSH to be available...
	I0704 00:00:34.428052   58668 main.go:141] libmachine: Getting to WaitForSSH function...
	I0704 00:00:34.428059   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:34.430691   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.431087   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:34.431132   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.431296   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:34.431491   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:34.431650   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:34.431805   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:34.432028   58668 main.go:141] libmachine: Using SSH client type: native
	I0704 00:00:34.432321   58668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:00:34.432338   58668 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0704 00:00:34.539462   58668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:00:34.539484   58668 main.go:141] libmachine: Detecting the provisioner...
	I0704 00:00:34.539495   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:34.542732   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.543162   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:34.543203   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.543442   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:34.543670   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:34.543844   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:34.544009   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:34.544159   58668 main.go:141] libmachine: Using SSH client type: native
	I0704 00:00:34.544368   58668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:00:34.544383   58668 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0704 00:00:34.649075   58668 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0704 00:00:34.649229   58668 main.go:141] libmachine: found compatible host: buildroot
	I0704 00:00:34.649277   58668 main.go:141] libmachine: Provisioning with buildroot...
	I0704 00:00:34.649296   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:00:34.649574   58668 buildroot.go:166] provisioning hostname "old-k8s-version-979033"
	I0704 00:00:34.649616   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:00:34.649828   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:34.653644   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.654057   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:34.654087   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.654298   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:34.654528   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:34.654687   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:34.654837   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:34.655006   58668 main.go:141] libmachine: Using SSH client type: native
	I0704 00:00:34.655196   58668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:00:34.655211   58668 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-979033 && echo "old-k8s-version-979033" | sudo tee /etc/hostname
	I0704 00:00:34.774694   58668 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-979033
	
	I0704 00:00:34.774730   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:34.778383   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.778792   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:34.778819   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.779144   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:34.779431   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:34.779665   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:34.779835   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:34.780051   58668 main.go:141] libmachine: Using SSH client type: native
	I0704 00:00:34.780283   58668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:00:34.780311   58668 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-979033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-979033/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-979033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:00:34.896690   58668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:00:34.896722   58668 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:00:34.896770   58668 buildroot.go:174] setting up certificates
	I0704 00:00:34.896782   58668 provision.go:84] configureAuth start
	I0704 00:00:34.896798   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:00:34.897094   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:00:34.900289   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.900680   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:34.900722   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.900922   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:34.903648   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.904043   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:34.904074   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.904259   58668 provision.go:143] copyHostCerts
	I0704 00:00:34.904321   58668 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:00:34.904333   58668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:00:34.904390   58668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:00:34.904493   58668 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:00:34.904502   58668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:00:34.904522   58668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:00:34.904593   58668 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:00:34.904601   58668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:00:34.904617   58668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:00:34.904689   58668 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-979033 san=[127.0.0.1 192.168.72.59 localhost minikube old-k8s-version-979033]
	I0704 00:00:35.181466   58668 provision.go:177] copyRemoteCerts
	I0704 00:00:35.181532   58668 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:00:35.181563   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:35.184683   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.185081   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:35.185111   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.185300   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:35.185530   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:35.185673   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:35.185805   58668 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:00:35.271503   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:00:35.299933   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0704 00:00:35.330063   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0704 00:00:35.357363   58668 provision.go:87] duration metric: took 460.563889ms to configureAuth
	I0704 00:00:35.357393   58668 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:00:35.357589   58668 config.go:182] Loaded profile config "old-k8s-version-979033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0704 00:00:35.357657   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:35.360333   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.360775   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:35.360809   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.361013   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:35.361262   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:35.361428   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:35.361577   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:35.361749   58668 main.go:141] libmachine: Using SSH client type: native
	I0704 00:00:35.361929   58668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:00:35.361950   58668 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:00:35.652917   58668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:00:35.652950   58668 main.go:141] libmachine: Checking connection to Docker...
	I0704 00:00:35.652961   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetURL
	I0704 00:00:35.654259   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using libvirt version 6000000
	I0704 00:00:35.656886   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.657471   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:35.657514   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.657660   58668 main.go:141] libmachine: Docker is up and running!
	I0704 00:00:35.657680   58668 main.go:141] libmachine: Reticulating splines...
	I0704 00:00:35.657689   58668 client.go:171] duration metric: took 24.596057721s to LocalClient.Create
	I0704 00:00:35.657718   58668 start.go:167] duration metric: took 24.596150696s to libmachine.API.Create "old-k8s-version-979033"
	I0704 00:00:35.657729   58668 start.go:293] postStartSetup for "old-k8s-version-979033" (driver="kvm2")
	I0704 00:00:35.657741   58668 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:00:35.657759   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:00:35.658068   58668 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:00:35.658096   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:35.660695   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.661057   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:35.661090   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.661228   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:35.661464   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:35.661673   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:35.661914   58668 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:00:35.743345   58668 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:00:35.748645   58668 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:00:35.748676   58668 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:00:35.748765   58668 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:00:35.748855   58668 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:00:35.748962   58668 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:00:35.761598   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:00:35.798851   58668 start.go:296] duration metric: took 141.105745ms for postStartSetup
	I0704 00:00:35.798934   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetConfigRaw
	I0704 00:00:35.799748   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:00:35.803424   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.803835   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:35.803866   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.804157   58668 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/config.json ...
	I0704 00:00:35.804400   58668 start.go:128] duration metric: took 24.774599729s to createHost
	I0704 00:00:35.804426   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:35.807787   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.808505   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:35.808530   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.808863   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:35.809112   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:35.809306   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:35.809479   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:35.809710   58668 main.go:141] libmachine: Using SSH client type: native
	I0704 00:00:35.809942   58668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:00:35.809975   58668 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:00:35.935777   58668 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051235.907716412
	
	I0704 00:00:35.935804   58668 fix.go:216] guest clock: 1720051235.907716412
	I0704 00:00:35.935813   58668 fix.go:229] Guest: 2024-07-04 00:00:35.907716412 +0000 UTC Remote: 2024-07-04 00:00:35.804412433 +0000 UTC m=+49.322768963 (delta=103.303979ms)
	I0704 00:00:35.935859   58668 fix.go:200] guest clock delta is within tolerance: 103.303979ms
	I0704 00:00:35.935865   58668 start.go:83] releasing machines lock for "old-k8s-version-979033", held for 24.906272196s
	I0704 00:00:35.935966   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:00:35.936814   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:00:35.941084   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.941480   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:35.941520   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.941865   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:00:35.942837   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:00:35.943050   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:00:35.943137   58668 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:00:35.943177   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:35.943748   58668 ssh_runner.go:195] Run: cat /version.json
	I0704 00:00:35.943811   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:35.947102   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.947522   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:35.947552   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.947673   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:35.947830   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:35.947980   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:35.948093   58668 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:00:35.948380   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.949076   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:35.949120   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.949080   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:35.949301   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:35.949506   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:35.949673   58668 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:00:36.052844   58668 ssh_runner.go:195] Run: systemctl --version
	I0704 00:00:36.061480   58668 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:00:36.258318   58668 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:00:36.264963   58668 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:00:36.265044   58668 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:00:36.288798   58668 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:00:36.288828   58668 start.go:494] detecting cgroup driver to use...
	I0704 00:00:36.288957   58668 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:00:36.312074   58668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:00:36.333588   58668 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:00:36.333654   58668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:00:36.350147   58668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:00:36.366618   58668 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:00:36.522633   58668 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:00:35.966598   58854 machine.go:94] provisionDockerMachine start ...
	I0704 00:00:35.966630   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .DriverName
	I0704 00:00:35.966920   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0704 00:00:35.970413   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:35.970985   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:59:35 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0704 00:00:35.971015   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:35.971232   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHPort
	I0704 00:00:35.972799   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0704 00:00:35.973155   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0704 00:00:35.973319   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHUsername
	I0704 00:00:35.973475   58854 main.go:141] libmachine: Using SSH client type: native
	I0704 00:00:35.973698   58854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0704 00:00:35.973713   58854 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:00:36.107920   58854 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-652205
	
	I0704 00:00:36.107952   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetMachineName
	I0704 00:00:36.108195   58854 buildroot.go:166] provisioning hostname "kubernetes-upgrade-652205"
	I0704 00:00:36.108232   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetMachineName
	I0704 00:00:36.108839   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0704 00:00:36.113205   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:36.113659   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:59:35 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0704 00:00:36.113704   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:36.114143   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHPort
	I0704 00:00:36.114338   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0704 00:00:36.114576   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0704 00:00:36.114913   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHUsername
	I0704 00:00:36.115184   58854 main.go:141] libmachine: Using SSH client type: native
	I0704 00:00:36.115391   58854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0704 00:00:36.115406   58854 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-652205 && echo "kubernetes-upgrade-652205" | sudo tee /etc/hostname
	I0704 00:00:36.282815   58854 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-652205
	
	I0704 00:00:36.282852   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0704 00:00:36.287017   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:36.287586   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:59:35 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0704 00:00:36.287629   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:36.288050   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHPort
	I0704 00:00:36.288261   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0704 00:00:36.288461   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0704 00:00:36.288646   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHUsername
	I0704 00:00:36.288874   58854 main.go:141] libmachine: Using SSH client type: native
	I0704 00:00:36.289084   58854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0704 00:00:36.289101   58854 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-652205' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-652205/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-652205' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:00:36.418059   58854 main.go:141] libmachine: SSH cmd err, output: <nil>: 
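The hostname provisioning above is a short SSH round-trip: set the kernel hostname, persist it in /etc/hostname, then patch /etc/hosts. A minimal Go sketch of the first part using golang.org/x/crypto/ssh; the key path, address and helper are placeholders for illustration, not minikube's provisioner code.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// setHostname runs the same kind of command the provisioner issues:
// set the kernel hostname and persist it in /etc/hostname.
func setHostname(client *ssh.Client, name string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	cmd := fmt.Sprintf("sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname", name)
	out, err := sess.CombinedOutput(cmd)
	if err != nil {
		return fmt.Errorf("%s: %w", out, err)
	}
	return nil
}

func main() {
	// Hypothetical key path and address, for illustration only.
	key, err := os.ReadFile("/home/jenkins/.minikube/machines/example/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.204:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	if err := setHostname(client, "kubernetes-upgrade-652205"); err != nil {
		log.Fatal(err)
	}
}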
	I0704 00:00:36.418095   58854 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:00:36.418119   58854 buildroot.go:174] setting up certificates
	I0704 00:00:36.418133   58854 provision.go:84] configureAuth start
	I0704 00:00:36.418145   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetMachineName
	I0704 00:00:36.418435   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetIP
	I0704 00:00:36.421653   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:36.422067   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:59:35 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0704 00:00:36.422113   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:36.422242   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0704 00:00:36.425042   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:36.425506   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:59:35 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0704 00:00:36.425542   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:36.425727   58854 provision.go:143] copyHostCerts
	I0704 00:00:36.425811   58854 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:00:36.425823   58854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:00:36.425898   58854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:00:36.426047   58854 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:00:36.426061   58854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:00:36.426092   58854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:00:36.426205   58854 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:00:36.426218   58854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:00:36.426247   58854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
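copyHostCerts above is a plain refresh of ca.pem, cert.pem and key.pem under the .minikube root: any stale copy is removed and the file from the certs directory written in its place. A small sketch of that found/removing/cp pattern; the directories are placeholders, not the exact minikube layout.

package main

import (
	"log"
	"os"
	"path/filepath"
)

// copyHostCert removes any stale copy at dst and writes src's contents
// in its place, mirroring the found/removing/cp sequence in the log.
func copyHostCert(src, dst string, mode os.FileMode) error {
	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	return os.WriteFile(dst, data, mode)
}

func main() {
	// Placeholder directories for illustration.
	certsDir := "/home/jenkins/.minikube/certs"
	outDir := "/home/jenkins/.minikube"
	for name, mode := range map[string]os.FileMode{
		"ca.pem":   0644,
		"cert.pem": 0644,
		"key.pem":  0600, // private key stays owner-readable only
	} {
		src := filepath.Join(certsDir, name)
		dst := filepath.Join(outDir, name)
		if err := copyHostCert(src, dst, mode); err != nil {
			log.Fatalf("copy %s: %v", name, err)
		}
	}
}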
	I0704 00:00:36.426347   58854 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-652205 san=[127.0.0.1 192.168.39.204 kubernetes-upgrade-652205 localhost minikube]
	I0704 00:00:36.550424   58854 provision.go:177] copyRemoteCerts
	I0704 00:00:36.550485   58854 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:00:36.550511   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0704 00:00:36.553664   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:36.554076   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:59:35 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0704 00:00:36.554107   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:36.554350   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHPort
	I0704 00:00:36.554573   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0704 00:00:36.554758   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHUsername
	I0704 00:00:36.554955   58854 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205/id_rsa Username:docker}
	I0704 00:00:36.651401   58854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0704 00:00:36.682846   58854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0704 00:00:36.698779   58668 docker.go:233] disabling docker service ...
	I0704 00:00:36.698852   58668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:00:36.714705   58668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:00:36.730437   58668 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:00:36.880225   58668 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:00:37.003093   58668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:00:37.019184   58668 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:00:37.041860   58668 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0704 00:00:37.041942   58668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:00:37.054186   58668 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:00:37.054266   58668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:00:37.066575   58668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:00:37.078385   58668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:00:37.090041   58668 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
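The pause-image and cgroup-driver settings above are in-place edits of /etc/crio/crio.conf.d/02-crio.conf done with sed over SSH. A sketch of the equivalent rewrite done locally in Go with regexp; the path and values come from the log, the helper itself is illustrative.

package main

import (
	"log"
	"os"
	"regexp"
)

// rewriteCrioConf sets pause_image and cgroup_manager the same way the
// sed commands in the log do, operating on the whole file contents.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "`+cgroupManager+`"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.2", "cgroupfs"); err != nil {
		log.Fatal(err)
	}
}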
	I0704 00:00:37.101810   58668 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:00:37.112061   58668 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:00:37.112123   58668 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:00:37.126731   58668 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
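The netfilter handling above is probe-then-fallback: if the bridge sysctl file is missing (the status-255 case), load br_netfilter, and in any case make sure IPv4 forwarding is on. A sketch of that sequence, assuming it runs as root; error handling is simplified.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// If the sysctl file is absent the bridge module is not loaded yet,
	// which is what the "couldn't verify netfilter" warning tolerates.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		log.Fatal(err)
	}
}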
	I0704 00:00:37.137332   58668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:00:37.253842   58668 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:00:37.399126   58668 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:00:37.399202   58668 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:00:37.405147   58668 start.go:562] Will wait 60s for crictl version
	I0704 00:00:37.405228   58668 ssh_runner.go:195] Run: which crictl
	I0704 00:00:37.410118   58668 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:00:37.454702   58668 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:00:37.454799   58668 ssh_runner.go:195] Run: crio --version
	I0704 00:00:37.486440   58668 ssh_runner.go:195] Run: crio --version
	I0704 00:00:37.523250   58668 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
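The two "Will wait 60s" lines above are polling loops: stat the CRI socket until it appears, then keep asking crictl for a version until it answers or the deadline passes. A minimal sketch of that wait pattern; the socket path and commands are taken from the log, the polling interval is illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitFor polls check every 500ms until it succeeds or timeout elapses.
func waitFor(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s: %w", timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	sock := "/var/run/crio/crio.sock"
	if err := waitFor(60*time.Second, func() error {
		_, err := os.Stat(sock)
		return err
	}); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := waitFor(60*time.Second, func() error {
		return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
	}); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}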
	I0704 00:00:34.483276   59355 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:00:34.483404   59355 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/config.json ...
	I0704 00:00:34.483441   59355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/config.json: {Name:mkdee58bb5b335a2c368114da67c1be33a5a0ea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:00:34.483552   59355 cache.go:107] acquiring lock: {Name:mk49815f40defe04a66a3bf6e7d0884f78fbbc0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:00:34.483587   59355 cache.go:107] acquiring lock: {Name:mk85db2648a1428e4fc1945f836cf5ed14c1d14f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:00:34.483656   59355 cache.go:115] /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0704 00:00:34.483652   59355 cache.go:107] acquiring lock: {Name:mk6b0d4625605c3e01198f692103232589061a33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:00:34.483682   59355 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 135.949µs
	I0704 00:00:34.483678   59355 cache.go:107] acquiring lock: {Name:mk961a888edfa5a4fcccec3e164a68f3d889a567 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:00:34.483712   59355 cache.go:107] acquiring lock: {Name:mke6647311b93399ad86b1c82dc3b3189716238d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:00:34.483745   59355 cache.go:107] acquiring lock: {Name:mk5feaf2f19d5cd7716d26da430a7e5956e4e94e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:00:34.483617   59355 cache.go:107] acquiring lock: {Name:mk64418df43f5e9ac8dfef0e00e43ebb296f4dca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:00:34.483747   59355 cache.go:107] acquiring lock: {Name:mkab2e27e469b7ef53d6328457476de633df4e40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:00:34.483796   59355 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:00:34.483825   59355 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:00:34.483697   59355 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0704 00:00:34.483859   59355 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:00:34.483943   59355 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:00:34.483972   59355 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0704 00:00:34.483983   59355 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:00:34.483943   59355 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:00:34.484233   59355 start.go:360] acquireMachinesLock for no-preload-317739: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0704 00:00:34.485261   59355 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:00:34.485290   59355 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:00:34.485288   59355 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:00:34.485288   59355 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:00:34.485294   59355 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:00:34.485262   59355 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0704 00:00:34.485362   59355 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:00:34.634905   59355 cache.go:162] opening:  /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0704 00:00:34.647635   59355 cache.go:162] opening:  /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2
	I0704 00:00:34.649468   59355 cache.go:162] opening:  /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2
	I0704 00:00:34.649653   59355 cache.go:162] opening:  /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0704 00:00:34.650534   59355 cache.go:162] opening:  /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2
	I0704 00:00:34.713243   59355 cache.go:162] opening:  /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0704 00:00:34.716424   59355 cache.go:162] opening:  /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2
	I0704 00:00:34.726227   59355 cache.go:157] /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0704 00:00:34.726257   59355 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 242.557661ms
	I0704 00:00:34.726268   59355 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0704 00:00:34.982736   59355 cache.go:157] /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2 exists
	I0704 00:00:34.982764   59355 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.2" -> "/home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2" took 499.189133ms
	I0704 00:00:34.982775   59355 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.2 -> /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2 succeeded
	I0704 00:00:36.060935   59355 cache.go:157] /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2 exists
	I0704 00:00:36.060960   59355 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.2" -> "/home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2" took 1.577303319s
	I0704 00:00:36.060969   59355 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.2 -> /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2 succeeded
	I0704 00:00:36.130055   59355 cache.go:157] /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0704 00:00:36.130088   59355 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 1.646410728s
	I0704 00:00:36.130106   59355 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0704 00:00:36.188518   59355 cache.go:157] /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 exists
	I0704 00:00:36.188548   59355 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0" took 1.704845873s
	I0704 00:00:36.188559   59355 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0704 00:00:36.232920   59355 cache.go:157] /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2 exists
	I0704 00:00:36.232946   59355 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.2" -> "/home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2" took 1.749381733s
	I0704 00:00:36.232957   59355 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.2 -> /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2 succeeded
	I0704 00:00:36.323203   59355 cache.go:157] /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2 exists
	I0704 00:00:36.323232   59355 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.2" -> "/home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2" took 1.839608643s
	I0704 00:00:36.323243   59355 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.2 -> /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2 succeeded
	I0704 00:00:36.323260   59355 cache.go:87] Successfully saved all images to host disk.
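The no-preload run above fans the image cache out per image: each gets its own lock, the tarball path is checked, and the duration is reported before the final "Successfully saved all images" line. A cut-down sketch of that fan-out; cacheImage here is a stand-in that only checks the fast "exists" path, and the cache directory is a placeholder.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"
)

// cacheImage is a stand-in: it only checks whether the tarball already
// exists, which is the "exists ... took ... succeeded" fast path in the log.
func cacheImage(cacheDir, image string) error {
	tar := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
	if _, err := os.Stat(tar); err != nil {
		return fmt.Errorf("%s not cached: %w", image, err)
	}
	return nil
}

func main() {
	cacheDir := "/home/jenkins/.minikube/cache/images/amd64" // placeholder
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.30.2",
		"registry.k8s.io/kube-controller-manager:v1.30.2",
		"registry.k8s.io/kube-scheduler:v1.30.2",
		"registry.k8s.io/kube-proxy:v1.30.2",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
	}
	var wg sync.WaitGroup
	for _, img := range images {
		wg.Add(1)
		go func(img string) {
			defer wg.Done()
			start := time.Now()
			if err := cacheImage(cacheDir, img); err != nil {
				fmt.Println(err)
				return
			}
			fmt.Printf("cache image %q took %s\n", img, time.Since(start))
		}(img)
	}
	wg.Wait()
	fmt.Println("Successfully saved all images to host disk.")
}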
	I0704 00:00:37.524674   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:00:37.528321   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:37.528773   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:37.528806   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:37.529074   58668 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0704 00:00:37.533829   58668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:00:37.548193   58668 kubeadm.go:877] updating cluster {Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:00:37.548296   58668 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0704 00:00:37.548341   58668 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:00:37.581326   58668 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0704 00:00:37.581394   58668 ssh_runner.go:195] Run: which lz4
	I0704 00:00:37.585683   58668 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0704 00:00:37.590254   58668 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:00:37.590295   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0704 00:00:39.382825   58668 crio.go:462] duration metric: took 1.797183726s to copy over tarball
	I0704 00:00:39.382905   58668 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
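The old-k8s-version run skips per-image caching because a preload tarball exists: the ~473 MB archive is scp'd to the VM and unpacked into /var with lz4. A sketch of that extraction as a timed exec call; the tar flags match the log, the duration reporting is simplified.

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same flags as the log: preserve security.capability xattrs and
	// decompress with lz4 while extracting into /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract preload: %v: %s", err, out)
	}
	log.Printf("duration metric: took %s to extract the tarball", time.Since(start))
}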
	I0704 00:00:36.717610   58854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:00:36.748510   58854 provision.go:87] duration metric: took 330.362652ms to configureAuth
	I0704 00:00:36.748543   58854 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:00:36.748816   58854 config.go:182] Loaded profile config "kubernetes-upgrade-652205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:00:36.748941   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0704 00:00:36.752015   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:36.752444   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:59:35 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0704 00:00:36.752482   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:36.752736   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHPort
	I0704 00:00:36.752993   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0704 00:00:36.753237   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0704 00:00:36.753438   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHUsername
	I0704 00:00:36.753662   58854 main.go:141] libmachine: Using SSH client type: native
	I0704 00:00:36.753901   58854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0704 00:00:36.753937   58854 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:00:44.197512   59355 start.go:364] duration metric: took 9.713216454s to acquireMachinesLock for "no-preload-317739"
	I0704 00:00:44.197582   59355 start.go:93] Provisioning new machine with config: &{Name:no-preload-317739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:00:44.197687   59355 start.go:125] createHost starting for "" (driver="kvm2")
	I0704 00:00:44.382095   59355 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0704 00:00:44.382330   59355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:00:44.382381   59355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:00:42.010567   58668 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.627639356s)
	I0704 00:00:42.010590   58668 crio.go:469] duration metric: took 2.62773709s to extract the tarball
	I0704 00:00:42.010596   58668 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:00:42.055377   58668 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:00:42.109077   58668 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0704 00:00:42.109105   58668 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0704 00:00:42.109171   58668 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:00:42.109200   58668 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:00:42.109215   58668 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:00:42.109243   58668 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0704 00:00:42.109245   58668 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:00:42.109180   58668 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:00:42.109192   58668 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:00:42.109172   58668 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0704 00:00:42.110688   58668 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:00:42.110781   58668 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0704 00:00:42.110802   58668 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0704 00:00:42.110804   58668 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:00:42.110688   58668 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:00:42.110688   58668 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:00:42.111193   58668 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:00:42.111193   58668 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:00:42.245608   58668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:00:42.245692   58668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:00:42.271622   58668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0704 00:00:42.275561   58668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0704 00:00:42.282195   58668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0704 00:00:42.287075   58668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:00:42.294094   58668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:00:42.357424   58668 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0704 00:00:42.357477   58668 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:00:42.357526   58668 ssh_runner.go:195] Run: which crictl
	I0704 00:00:42.357426   58668 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0704 00:00:42.357564   58668 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:00:42.357612   58668 ssh_runner.go:195] Run: which crictl
	I0704 00:00:42.469161   58668 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0704 00:00:42.469224   58668 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:00:42.469235   58668 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0704 00:00:42.469276   58668 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0704 00:00:42.469291   58668 ssh_runner.go:195] Run: which crictl
	I0704 00:00:42.469314   58668 ssh_runner.go:195] Run: which crictl
	I0704 00:00:42.475133   58668 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0704 00:00:42.475176   58668 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:00:42.475133   58668 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0704 00:00:42.475199   58668 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0704 00:00:42.475273   58668 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:00:42.475306   58668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:00:42.475223   58668 ssh_runner.go:195] Run: which crictl
	I0704 00:00:42.475229   58668 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0704 00:00:42.475359   58668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:00:42.475365   58668 ssh_runner.go:195] Run: which crictl
	I0704 00:00:42.475317   58668 ssh_runner.go:195] Run: which crictl
	I0704 00:00:42.478790   58668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0704 00:00:42.478845   58668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0704 00:00:42.494284   58668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:00:42.597531   58668 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0704 00:00:42.597555   58668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:00:42.597641   58668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0704 00:00:42.597698   58668 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0704 00:00:42.608375   58668 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0704 00:00:42.622388   58668 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0704 00:00:42.622417   58668 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0704 00:00:42.660625   58668 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0704 00:00:42.660706   58668 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0704 00:00:43.054517   58668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:00:43.202873   58668 cache_images.go:92] duration metric: took 1.09374743s to LoadCachedImages
	W0704 00:00:43.202972   58668 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
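Each "needs transfer" line above comes from probing the container runtime with podman image inspect and comparing against the expected hash; missing images are removed with crictl and then loaded from the cache tarballs, and when a tarball itself is absent the step ends with the "Unable to load cached images" warning. A simplified sketch of the probe-and-decide part; the podman invocation matches the log, but the helper around it is illustrative and only checks presence, not the hash.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether image is absent from the container runtime,
// using the same `podman image inspect --format {{.Id}}` probe as the log.
func needsTransfer(image string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		// inspect fails when the image is not present at all
		return true
	}
	return strings.TrimSpace(string(out)) == ""
}

func main() {
	for _, img := range []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/etcd:3.4.13-0",
		"registry.k8s.io/pause:3.2",
	} {
		if needsTransfer(img) {
			fmt.Printf("%q needs transfer: loading from cache tarball\n", img)
			// Real flow: crictl rmi <img>, then load the cached tar; if the
			// tar is missing, fall back with the "Unable to load cached
			// images" warning seen above.
		}
	}
}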
	I0704 00:00:43.202986   58668 kubeadm.go:928] updating node { 192.168.72.59 8443 v1.20.0 crio true true} ...
	I0704 00:00:43.203135   58668 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-979033 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
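The kubelet drop-in above is rendered from the node config that follows it: the Kubernetes version picks the binary directory, and the hostname-override/node-ip flags come from the node entry. A sketch rendering the same unit text with text/template; the struct and field names are illustrative, not minikube's actual types.

package main

import (
	"os"
	"text/template"
)

// nodeConfig carries just the fields the ExecStart line needs.
type nodeConfig struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	cfg := nodeConfig{
		KubernetesVersion: "v1.20.0",
		NodeName:          "old-k8s-version-979033",
		NodeIP:            "192.168.72.59",
	}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}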
	I0704 00:00:43.203223   58668 ssh_runner.go:195] Run: crio config
	I0704 00:00:43.253953   58668 cni.go:84] Creating CNI manager for ""
	I0704 00:00:43.253977   58668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:00:43.253991   58668 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:00:43.254008   58668 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.59 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-979033 NodeName:old-k8s-version-979033 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0704 00:00:43.254130   58668 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-979033"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.59"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:00:43.254190   58668 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0704 00:00:43.265103   58668 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:00:43.265192   58668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:00:43.276514   58668 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0704 00:00:43.296018   58668 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:00:43.315316   58668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
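The kubeadm config above is generated from the same options struct and then written to /var/tmp/minikube/kubeadm.yaml.new. A trimmed sketch that renders just the InitConfiguration/ClusterConfiguration head from the values seen in the log; only a subset of the real fields is modeled.

package main

import (
	"os"
	"text/template"
)

type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	NodeIP            string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:8443
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.72.59",
		BindPort:          8443,
		NodeName:          "old-k8s-version-979033",
		NodeIP:            "192.168.72.59",
		KubernetesVersion: "v1.20.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}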
	I0704 00:00:43.336710   58668 ssh_runner.go:195] Run: grep 192.168.72.59	control-plane.minikube.internal$ /etc/hosts
	I0704 00:00:43.341342   58668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.59	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:00:43.355964   58668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:00:43.500843   58668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:00:43.524631   58668 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033 for IP: 192.168.72.59
	I0704 00:00:43.524652   58668 certs.go:194] generating shared ca certs ...
	I0704 00:00:43.524671   58668 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:00:43.524848   58668 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:00:43.524902   58668 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:00:43.524912   58668 certs.go:256] generating profile certs ...
	I0704 00:00:43.524973   58668 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.key
	I0704 00:00:43.524990   58668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.crt with IP's: []
	I0704 00:00:43.619765   58668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.crt ...
	I0704 00:00:43.619806   58668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.crt: {Name:mk13943ef89de34563b29919cad0616fe1b722cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:00:43.620047   58668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.key ...
	I0704 00:00:43.620067   58668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.key: {Name:mkc6d8ee950b14185bbf145e473cc770da0d0701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:00:43.620172   58668 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key.03500654
	I0704 00:00:43.620197   58668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt.03500654 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.59]
	I0704 00:00:43.891835   58668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt.03500654 ...
	I0704 00:00:43.891893   58668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt.03500654: {Name:mkfe335fd2a0295f5a178250d5c91bcad947a780 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:00:43.892118   58668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key.03500654 ...
	I0704 00:00:43.892135   58668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key.03500654: {Name:mk06732d285a768ca53c049e45b3db597235096e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:00:43.892249   58668 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt.03500654 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt
	I0704 00:00:43.892354   58668 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key.03500654 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key
	I0704 00:00:43.892430   58668 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key
	I0704 00:00:43.892452   58668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.crt with IP's: []
	I0704 00:00:44.099004   58668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.crt ...
	I0704 00:00:44.099034   58668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.crt: {Name:mk36429fdd458e014e892ad0f7c7c76835412fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:00:44.099234   58668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key ...
	I0704 00:00:44.099250   58668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key: {Name:mke3a67ecf41e139d9d452f615a814e40df9a677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
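The profile certs generated above (client, apiserver, aggregator proxy-client) are all signed by the shared minikube CA, with the apiserver cert carrying SANs for the service ClusterIP, localhost, the cluster-internal address, and the node IP. For illustration only, an openssl equivalent of that apiserver cert would look roughly like this (minikube does this in Go via crypto.go; file names, subject, and key size here are assumptions):

    # Illustrative openssl sketch of a CA-signed apiserver cert with the SAN IPs from the log above.
    # Requires bash for the <(...) process substitution; not how minikube actually generates it.
    openssl genrsa -out apiserver.key 2048
    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out apiserver.crt \
      -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.72.59")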
	I0704 00:00:44.099464   58668 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:00:44.099511   58668 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:00:44.099525   58668 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:00:44.099557   58668 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:00:44.099586   58668 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:00:44.099619   58668 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:00:44.099669   58668 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:00:44.100311   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:00:44.131421   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:00:44.160085   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:00:44.189585   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:00:44.221528   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0704 00:00:44.252058   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:00:44.317570   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:00:44.380008   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0704 00:00:44.425503   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:00:44.458892   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:00:44.503812   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:00:44.546054   58668 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
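At this point the CA material, the profile certs, and an in-memory kubeconfig have all been copied onto the guest under /var/lib/minikube. A quick on-node sanity check (illustrative; the -ext flag needs OpenSSL 1.1.1+):

    # Confirm the copies above landed and the apiserver cert carries the expected SANs (illustrative).
    sudo ls -l /var/lib/minikube/certs /var/lib/minikube/kubeconfig
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -subject -ext subjectAltName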
	I0704 00:00:44.578740   58668 ssh_runner.go:195] Run: openssl version
	I0704 00:00:44.587478   58668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:00:44.613853   58668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:00:44.621728   58668 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:00:44.621798   58668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:00:44.630611   58668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:00:44.645688   58668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:00:44.660551   58668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:00:44.667386   58668 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:00:44.667446   58668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:00:44.674883   58668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:00:44.688293   58668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:00:44.702284   58668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:00:44.709447   58668 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:00:44.709527   58668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:00:44.718476   58668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
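The hash-named symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) follow the standard OpenSSL c_rehash convention: the link name is the certificate's subject hash plus a ".0" suffix, which is how TLS clients locate a trusted CA by subject. The pairing for minikubeCA.pem, for example, can be reproduced by hand:

    # The subject hash printed here is what names the symlink (b5213941.0 in this run).
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0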
	I0704 00:00:44.734843   58668 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:00:44.741247   58668 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0704 00:00:44.741307   58668 kubeadm.go:391] StartCluster: {Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:00:44.741403   58668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:00:44.741461   58668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:00:44.800262   58668 cri.go:89] found id: ""
	I0704 00:00:44.800349   58668 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0704 00:00:44.817664   58668 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:00:44.835091   58668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:00:44.852434   58668 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:00:44.852457   58668 kubeadm.go:156] found existing configuration files:
	
	I0704 00:00:44.852513   58668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:00:44.865473   58668 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:00:44.865546   58668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:00:44.879692   58668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:00:44.892084   58668 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:00:44.892149   58668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:00:44.905556   58668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:00:44.920377   58668 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:00:44.920449   58668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:00:44.936126   58668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:00:44.951110   58668 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:00:44.951174   58668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
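Taken together, the four grep-then-rm pairs above are a stale-config sweep: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is removed before kubeadm init runs. On this first start none of the files exist, so every check fails with status 2 and the rm calls are no-ops. Condensed into a sketch:

    # Sketch of the stale-config sweep performed above.
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done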
	I0704 00:00:44.965948   58668 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:00:45.320735   58668 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
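The Service-Kubelet preflight warning is expected here: minikube starts kubelet itself (see the systemctl start kubelet call earlier), so the unit is never enabled. On an ordinary host the warning's own suggestion would silence it:

    # Only needed outside minikube; this is the command the kubeadm warning above points at.
    sudo systemctl enable kubelet.service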
	I0704 00:00:43.915508   58854 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:00:43.915534   58854 machine.go:97] duration metric: took 7.948919165s to provisionDockerMachine
	I0704 00:00:43.915548   58854 start.go:293] postStartSetup for "kubernetes-upgrade-652205" (driver="kvm2")
	I0704 00:00:43.915560   58854 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:00:43.915584   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .DriverName
	I0704 00:00:43.915948   58854 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:00:43.915984   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0704 00:00:43.918666   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:43.919062   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:59:35 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0704 00:00:43.919104   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:43.919226   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHPort
	I0704 00:00:43.919464   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0704 00:00:43.919648   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHUsername
	I0704 00:00:43.919801   58854 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205/id_rsa Username:docker}
	I0704 00:00:44.020753   58854 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:00:44.025944   58854 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:00:44.025965   58854 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:00:44.026016   58854 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:00:44.026102   58854 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:00:44.026226   58854 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:00:44.038814   58854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:00:44.069567   58854 start.go:296] duration metric: took 154.002879ms for postStartSetup
	I0704 00:00:44.069608   58854 fix.go:56] duration metric: took 8.133545848s for fixHost
	I0704 00:00:44.069636   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0704 00:00:44.072847   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:44.073241   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:59:35 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0704 00:00:44.073272   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:44.073484   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHPort
	I0704 00:00:44.073679   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0704 00:00:44.073858   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0704 00:00:44.074011   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHUsername
	I0704 00:00:44.074209   58854 main.go:141] libmachine: Using SSH client type: native
	I0704 00:00:44.074432   58854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0704 00:00:44.074446   58854 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:00:44.197341   58854 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051244.187934328
	
	I0704 00:00:44.197362   58854 fix.go:216] guest clock: 1720051244.187934328
	I0704 00:00:44.197371   58854 fix.go:229] Guest: 2024-07-04 00:00:44.187934328 +0000 UTC Remote: 2024-07-04 00:00:44.069612898 +0000 UTC m=+42.412004810 (delta=118.32143ms)
	I0704 00:00:44.197415   58854 fix.go:200] guest clock delta is within tolerance: 118.32143ms
	I0704 00:00:44.197424   58854 start.go:83] releasing machines lock for "kubernetes-upgrade-652205", held for 8.261394098s
	I0704 00:00:44.197480   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .DriverName
	I0704 00:00:44.197849   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetIP
	I0704 00:00:44.201329   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:44.201727   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:59:35 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0704 00:00:44.201754   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:44.201965   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .DriverName
	I0704 00:00:44.202590   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .DriverName
	I0704 00:00:44.202791   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .DriverName
	I0704 00:00:44.202918   58854 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:00:44.202960   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0704 00:00:44.203055   58854 ssh_runner.go:195] Run: cat /version.json
	I0704 00:00:44.203085   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHHostname
	I0704 00:00:44.206221   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:44.206258   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:44.206693   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:59:35 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0704 00:00:44.206733   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:59:35 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0704 00:00:44.206762   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:44.206780   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:44.206856   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHPort
	I0704 00:00:44.206987   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHPort
	I0704 00:00:44.207079   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0704 00:00:44.207155   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHKeyPath
	I0704 00:00:44.207307   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHUsername
	I0704 00:00:44.207482   58854 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205/id_rsa Username:docker}
	I0704 00:00:44.207497   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetSSHUsername
	I0704 00:00:44.207845   58854 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/kubernetes-upgrade-652205/id_rsa Username:docker}
	I0704 00:00:44.320829   58854 ssh_runner.go:195] Run: systemctl --version
	I0704 00:00:44.327619   58854 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:00:44.508193   58854 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:00:44.557570   58854 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:00:44.557636   58854 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:00:44.596853   58854 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0704 00:00:44.596878   58854 start.go:494] detecting cgroup driver to use...
	I0704 00:00:44.596940   58854 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:00:44.632286   58854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:00:44.649824   58854 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:00:44.649893   58854 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:00:44.673062   58854 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:00:44.693179   58854 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:00:44.946748   58854 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:00:45.223330   58854 docker.go:233] disabling docker service ...
	I0704 00:00:45.223405   58854 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:00:45.255334   58854 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:00:45.434862   58854 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:00:45.723514   58854 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:00:46.021444   58854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:00:46.168315   58854 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:00:46.290111   58854 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:00:46.290173   58854 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:00:46.341967   58854 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:00:46.342045   58854 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:00:46.425833   58854 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:00:46.468147   58854 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:00:46.632275   58854 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:00:44.403302   59355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42893
	I0704 00:00:44.403902   59355 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:00:44.404627   59355 main.go:141] libmachine: Using API Version  1
	I0704 00:00:44.404658   59355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:00:44.405091   59355 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:00:44.405413   59355 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:00:44.405597   59355 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:00:44.405838   59355 start.go:159] libmachine.API.Create for "no-preload-317739" (driver="kvm2")
	I0704 00:00:44.405870   59355 client.go:168] LocalClient.Create starting
	I0704 00:00:44.405928   59355 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem
	I0704 00:00:44.405971   59355 main.go:141] libmachine: Decoding PEM data...
	I0704 00:00:44.405991   59355 main.go:141] libmachine: Parsing certificate...
	I0704 00:00:44.406053   59355 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem
	I0704 00:00:44.406085   59355 main.go:141] libmachine: Decoding PEM data...
	I0704 00:00:44.406099   59355 main.go:141] libmachine: Parsing certificate...
	I0704 00:00:44.406123   59355 main.go:141] libmachine: Running pre-create checks...
	I0704 00:00:44.406135   59355 main.go:141] libmachine: (no-preload-317739) Calling .PreCreateCheck
	I0704 00:00:44.406511   59355 main.go:141] libmachine: (no-preload-317739) Calling .GetConfigRaw
	I0704 00:00:44.406921   59355 main.go:141] libmachine: Creating machine...
	I0704 00:00:44.406937   59355 main.go:141] libmachine: (no-preload-317739) Calling .Create
	I0704 00:00:44.407099   59355 main.go:141] libmachine: (no-preload-317739) Creating KVM machine...
	I0704 00:00:44.408683   59355 main.go:141] libmachine: (no-preload-317739) DBG | found existing default KVM network
	I0704 00:00:44.410037   59355 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:00:44.409837   59453 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fa:41:7f} reservation:<nil>}
	I0704 00:00:44.410898   59355 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:00:44.410790   59453 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f5:85:96} reservation:<nil>}
	I0704 00:00:44.412026   59355 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:00:44.411946   59453 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000304420}
	I0704 00:00:44.412055   59355 main.go:141] libmachine: (no-preload-317739) DBG | created network xml: 
	I0704 00:00:44.412063   59355 main.go:141] libmachine: (no-preload-317739) DBG | <network>
	I0704 00:00:44.412073   59355 main.go:141] libmachine: (no-preload-317739) DBG |   <name>mk-no-preload-317739</name>
	I0704 00:00:44.412083   59355 main.go:141] libmachine: (no-preload-317739) DBG |   <dns enable='no'/>
	I0704 00:00:44.412091   59355 main.go:141] libmachine: (no-preload-317739) DBG |   
	I0704 00:00:44.412101   59355 main.go:141] libmachine: (no-preload-317739) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0704 00:00:44.412113   59355 main.go:141] libmachine: (no-preload-317739) DBG |     <dhcp>
	I0704 00:00:44.412129   59355 main.go:141] libmachine: (no-preload-317739) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0704 00:00:44.412138   59355 main.go:141] libmachine: (no-preload-317739) DBG |     </dhcp>
	I0704 00:00:44.412145   59355 main.go:141] libmachine: (no-preload-317739) DBG |   </ip>
	I0704 00:00:44.412196   59355 main.go:141] libmachine: (no-preload-317739) DBG |   
	I0704 00:00:44.412206   59355 main.go:141] libmachine: (no-preload-317739) DBG | </network>
	I0704 00:00:44.412214   59355 main.go:141] libmachine: (no-preload-317739) DBG | 
	I0704 00:00:44.494456   59355 main.go:141] libmachine: (no-preload-317739) DBG | trying to create private KVM network mk-no-preload-317739 192.168.61.0/24...
	I0704 00:00:44.599971   59355 main.go:141] libmachine: (no-preload-317739) DBG | private KVM network mk-no-preload-317739 192.168.61.0/24 created
	I0704 00:00:44.600155   59355 main.go:141] libmachine: (no-preload-317739) Setting up store path in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739 ...
	I0704 00:00:44.600281   59355 main.go:141] libmachine: (no-preload-317739) Building disk image from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0704 00:00:44.600391   59355 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:00:44.600343   59453 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0704 00:00:44.600664   59355 main.go:141] libmachine: (no-preload-317739) Downloading /home/jenkins/minikube-integration/18998-9396/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso...
	I0704 00:00:44.860144   59355 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:00:44.859980   59453 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa...
	I0704 00:00:44.986398   59355 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:00:44.986296   59453 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/no-preload-317739.rawdisk...
	I0704 00:00:44.986428   59355 main.go:141] libmachine: (no-preload-317739) DBG | Writing magic tar header
	I0704 00:00:44.986445   59355 main.go:141] libmachine: (no-preload-317739) DBG | Writing SSH key tar header
	I0704 00:00:44.986539   59355 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:00:44.986470   59453 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739 ...
	I0704 00:00:44.986600   59355 main.go:141] libmachine: (no-preload-317739) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739
	I0704 00:00:44.986628   59355 main.go:141] libmachine: (no-preload-317739) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines
	I0704 00:00:44.986649   59355 main.go:141] libmachine: (no-preload-317739) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0704 00:00:44.986662   59355 main.go:141] libmachine: (no-preload-317739) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396
	I0704 00:00:44.986678   59355 main.go:141] libmachine: (no-preload-317739) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739 (perms=drwx------)
	I0704 00:00:44.986692   59355 main.go:141] libmachine: (no-preload-317739) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0704 00:00:44.986704   59355 main.go:141] libmachine: (no-preload-317739) DBG | Checking permissions on dir: /home/jenkins
	I0704 00:00:44.986713   59355 main.go:141] libmachine: (no-preload-317739) DBG | Checking permissions on dir: /home
	I0704 00:00:44.986720   59355 main.go:141] libmachine: (no-preload-317739) DBG | Skipping /home - not owner
	I0704 00:00:44.986734   59355 main.go:141] libmachine: (no-preload-317739) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines (perms=drwxr-xr-x)
	I0704 00:00:44.986748   59355 main.go:141] libmachine: (no-preload-317739) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube (perms=drwxr-xr-x)
	I0704 00:00:44.986762   59355 main.go:141] libmachine: (no-preload-317739) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396 (perms=drwxrwxr-x)
	I0704 00:00:44.986786   59355 main.go:141] libmachine: (no-preload-317739) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0704 00:00:44.986810   59355 main.go:141] libmachine: (no-preload-317739) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0704 00:00:44.986825   59355 main.go:141] libmachine: (no-preload-317739) Creating domain...
	I0704 00:00:44.988579   59355 main.go:141] libmachine: (no-preload-317739) define libvirt domain using xml: 
	I0704 00:00:44.988606   59355 main.go:141] libmachine: (no-preload-317739) <domain type='kvm'>
	I0704 00:00:44.988618   59355 main.go:141] libmachine: (no-preload-317739)   <name>no-preload-317739</name>
	I0704 00:00:44.988626   59355 main.go:141] libmachine: (no-preload-317739)   <memory unit='MiB'>2200</memory>
	I0704 00:00:44.988635   59355 main.go:141] libmachine: (no-preload-317739)   <vcpu>2</vcpu>
	I0704 00:00:44.988642   59355 main.go:141] libmachine: (no-preload-317739)   <features>
	I0704 00:00:44.988649   59355 main.go:141] libmachine: (no-preload-317739)     <acpi/>
	I0704 00:00:44.988658   59355 main.go:141] libmachine: (no-preload-317739)     <apic/>
	I0704 00:00:44.988665   59355 main.go:141] libmachine: (no-preload-317739)     <pae/>
	I0704 00:00:44.988675   59355 main.go:141] libmachine: (no-preload-317739)     
	I0704 00:00:44.988682   59355 main.go:141] libmachine: (no-preload-317739)   </features>
	I0704 00:00:44.988693   59355 main.go:141] libmachine: (no-preload-317739)   <cpu mode='host-passthrough'>
	I0704 00:00:44.988700   59355 main.go:141] libmachine: (no-preload-317739)   
	I0704 00:00:44.988710   59355 main.go:141] libmachine: (no-preload-317739)   </cpu>
	I0704 00:00:44.988718   59355 main.go:141] libmachine: (no-preload-317739)   <os>
	I0704 00:00:44.988730   59355 main.go:141] libmachine: (no-preload-317739)     <type>hvm</type>
	I0704 00:00:44.988738   59355 main.go:141] libmachine: (no-preload-317739)     <boot dev='cdrom'/>
	I0704 00:00:44.988749   59355 main.go:141] libmachine: (no-preload-317739)     <boot dev='hd'/>
	I0704 00:00:44.988757   59355 main.go:141] libmachine: (no-preload-317739)     <bootmenu enable='no'/>
	I0704 00:00:44.988767   59355 main.go:141] libmachine: (no-preload-317739)   </os>
	I0704 00:00:44.988784   59355 main.go:141] libmachine: (no-preload-317739)   <devices>
	I0704 00:00:44.988795   59355 main.go:141] libmachine: (no-preload-317739)     <disk type='file' device='cdrom'>
	I0704 00:00:44.988809   59355 main.go:141] libmachine: (no-preload-317739)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/boot2docker.iso'/>
	I0704 00:00:44.988823   59355 main.go:141] libmachine: (no-preload-317739)       <target dev='hdc' bus='scsi'/>
	I0704 00:00:44.988830   59355 main.go:141] libmachine: (no-preload-317739)       <readonly/>
	I0704 00:00:44.988838   59355 main.go:141] libmachine: (no-preload-317739)     </disk>
	I0704 00:00:44.988849   59355 main.go:141] libmachine: (no-preload-317739)     <disk type='file' device='disk'>
	I0704 00:00:44.988861   59355 main.go:141] libmachine: (no-preload-317739)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0704 00:00:44.988876   59355 main.go:141] libmachine: (no-preload-317739)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/no-preload-317739.rawdisk'/>
	I0704 00:00:44.988887   59355 main.go:141] libmachine: (no-preload-317739)       <target dev='hda' bus='virtio'/>
	I0704 00:00:44.988895   59355 main.go:141] libmachine: (no-preload-317739)     </disk>
	I0704 00:00:44.988906   59355 main.go:141] libmachine: (no-preload-317739)     <interface type='network'>
	I0704 00:00:44.988917   59355 main.go:141] libmachine: (no-preload-317739)       <source network='mk-no-preload-317739'/>
	I0704 00:00:44.988928   59355 main.go:141] libmachine: (no-preload-317739)       <model type='virtio'/>
	I0704 00:00:44.988935   59355 main.go:141] libmachine: (no-preload-317739)     </interface>
	I0704 00:00:44.988946   59355 main.go:141] libmachine: (no-preload-317739)     <interface type='network'>
	I0704 00:00:44.988958   59355 main.go:141] libmachine: (no-preload-317739)       <source network='default'/>
	I0704 00:00:44.988966   59355 main.go:141] libmachine: (no-preload-317739)       <model type='virtio'/>
	I0704 00:00:44.988975   59355 main.go:141] libmachine: (no-preload-317739)     </interface>
	I0704 00:00:44.988981   59355 main.go:141] libmachine: (no-preload-317739)     <serial type='pty'>
	I0704 00:00:44.988990   59355 main.go:141] libmachine: (no-preload-317739)       <target port='0'/>
	I0704 00:00:44.989000   59355 main.go:141] libmachine: (no-preload-317739)     </serial>
	I0704 00:00:44.989009   59355 main.go:141] libmachine: (no-preload-317739)     <console type='pty'>
	I0704 00:00:44.989021   59355 main.go:141] libmachine: (no-preload-317739)       <target type='serial' port='0'/>
	I0704 00:00:44.989029   59355 main.go:141] libmachine: (no-preload-317739)     </console>
	I0704 00:00:44.989040   59355 main.go:141] libmachine: (no-preload-317739)     <rng model='virtio'>
	I0704 00:00:44.989053   59355 main.go:141] libmachine: (no-preload-317739)       <backend model='random'>/dev/random</backend>
	I0704 00:00:44.989063   59355 main.go:141] libmachine: (no-preload-317739)     </rng>
	I0704 00:00:44.989070   59355 main.go:141] libmachine: (no-preload-317739)     
	I0704 00:00:44.989079   59355 main.go:141] libmachine: (no-preload-317739)     
	I0704 00:00:44.989093   59355 main.go:141] libmachine: (no-preload-317739)   </devices>
	I0704 00:00:44.989101   59355 main.go:141] libmachine: (no-preload-317739) </domain>
	I0704 00:00:44.989112   59355 main.go:141] libmachine: (no-preload-317739) 
	I0704 00:00:45.160897   59355 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:bb:47:b3 in network default
	I0704 00:00:45.161645   59355 main.go:141] libmachine: (no-preload-317739) Ensuring networks are active...
	I0704 00:00:45.161676   59355 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:00:45.162603   59355 main.go:141] libmachine: (no-preload-317739) Ensuring network default is active
	I0704 00:00:45.162989   59355 main.go:141] libmachine: (no-preload-317739) Ensuring network mk-no-preload-317739 is active
	I0704 00:00:45.163684   59355 main.go:141] libmachine: (no-preload-317739) Getting domain xml...
	I0704 00:00:45.164734   59355 main.go:141] libmachine: (no-preload-317739) Creating domain...
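With the domain XML above defined and the domain created, the result can be inspected from the host with virsh (illustrative; domain and network names taken from the log):

    # Show the two NICs (mk-no-preload-317739 and default) and any leased addresses.
    virsh dumpxml no-preload-317739 | grep -A2 "interface type='network'"
    virsh domifaddr no-preload-317739 --source lease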
	I0704 00:00:46.881274   59355 main.go:141] libmachine: (no-preload-317739) Waiting to get IP...
	I0704 00:00:46.882239   59355 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:00:46.882748   59355 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:00:46.882774   59355 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:00:46.882727   59453 retry.go:31] will retry after 297.478686ms: waiting for machine to come up
	I0704 00:00:47.182457   59355 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:00:47.183183   59355 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:00:47.183212   59355 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:00:47.183108   59453 retry.go:31] will retry after 317.230218ms: waiting for machine to come up
	I0704 00:00:47.501670   59355 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:00:47.502161   59355 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:00:47.502187   59355 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:00:47.502111   59453 retry.go:31] will retry after 484.932319ms: waiting for machine to come up
	I0704 00:00:47.988546   59355 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:00:47.989157   59355 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:00:47.989186   59355 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:00:47.989108   59453 retry.go:31] will retry after 418.33458ms: waiting for machine to come up
	I0704 00:00:48.408646   59355 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:00:48.409243   59355 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:00:48.409274   59355 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:00:48.409210   59453 retry.go:31] will retry after 747.13539ms: waiting for machine to come up
	I0704 00:00:49.157702   59355 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:00:49.158186   59355 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:00:49.158222   59355 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:00:49.158149   59453 retry.go:31] will retry after 898.278699ms: waiting for machine to come up
	I0704 00:00:46.701951   58854 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:00:46.773366   58854 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:00:46.805198   58854 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:00:46.837452   58854 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:00:46.856493   58854 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:00:46.871919   58854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:00:47.187939   58854 ssh_runner.go:195] Run: sudo systemctl restart crio
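The sed edits on /etc/crio/crio.conf.d/02-crio.conf in this run boil down to four settings, after which CRI-O is restarted to pick up the drop-in. Roughly (values taken from the commands above; the exact file layout may differ):

    # Net effect of the CRI-O drop-in edits above (sketch, not the literal file):
    #   pause_image     = "registry.k8s.io/pause:3.9"
    #   cgroup_manager  = "cgroupfs"
    #   conmon_cgroup   = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
    sudo systemctl daemon-reload && sudo systemctl restart crio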
	I0704 00:00:50.058103   59355 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:00:50.058579   59355 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:00:50.058617   59355 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:00:50.058526   59453 retry.go:31] will retry after 747.701679ms: waiting for machine to come up
	I0704 00:00:50.807473   59355 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:00:50.808014   59355 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:00:50.808046   59355 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:00:50.807968   59453 retry.go:31] will retry after 940.048665ms: waiting for machine to come up
	I0704 00:00:51.749677   59355 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:00:51.750313   59355 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:00:51.750339   59355 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:00:51.750265   59453 retry.go:31] will retry after 1.134763585s: waiting for machine to come up
	I0704 00:00:52.886592   59355 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:00:52.887169   59355 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:00:52.887201   59355 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:00:52.887109   59453 retry.go:31] will retry after 2.210752701s: waiting for machine to come up
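The retry lines above are libmachine polling for a DHCP lease on the freshly created domain, with an increasing backoff between attempts. Outside of minikube, the same wait can be approximated with virsh (illustrative; network and MAC taken from the log):

    # Poll the libvirt network's DHCP leases until the domain's MAC shows up.
    until virsh net-dhcp-leases mk-no-preload-317739 | grep -q '52:54:00:2a:87:12'; do
      sleep 2
    done
    virsh net-dhcp-leases mk-no-preload-317739 | grep '52:54:00:2a:87:12'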
	I0704 00:00:57.606673   58854 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.418700554s)
	I0704 00:00:57.606704   58854 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:00:57.606755   58854 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:00:57.612469   58854 start.go:562] Will wait 60s for crictl version
	I0704 00:00:57.612553   58854 ssh_runner.go:195] Run: which crictl
	I0704 00:00:57.617388   58854 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:00:57.657333   58854 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:00:57.657402   58854 ssh_runner.go:195] Run: crio --version
	I0704 00:00:57.691674   58854 ssh_runner.go:195] Run: crio --version
	I0704 00:00:57.726274   58854 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
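The runtime probe above (wait for the CRI socket, then ask crictl and crio for their versions) has straightforward manual equivalents (illustrative):

    # Manual equivalents of the CRI-O version probes above.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    crio --version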
	I0704 00:00:55.099080   59355 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:00:55.099684   59355 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:00:55.099714   59355 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:00:55.099635   59453 retry.go:31] will retry after 2.564101077s: waiting for machine to come up
	I0704 00:00:57.665064   59355 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:00:57.665551   59355 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:00:57.665581   59355 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:00:57.665483   59453 retry.go:31] will retry after 2.558774546s: waiting for machine to come up
	I0704 00:00:57.727947   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .GetIP
	I0704 00:00:57.731514   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:57.732037   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:75:17", ip: ""} in network mk-kubernetes-upgrade-652205: {Iface:virbr1 ExpiryTime:2024-07-04 00:59:35 +0000 UTC Type:0 Mac:52:54:00:32:75:17 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:kubernetes-upgrade-652205 Clientid:01:52:54:00:32:75:17}
	I0704 00:00:57.732061   58854 main.go:141] libmachine: (kubernetes-upgrade-652205) DBG | domain kubernetes-upgrade-652205 has defined IP address 192.168.39.204 and MAC address 52:54:00:32:75:17 in network mk-kubernetes-upgrade-652205
	I0704 00:00:57.732359   58854 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0704 00:00:57.737121   58854 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-652205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubernetes-upgrade-652205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:00:57.737228   58854 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:00:57.737273   58854 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:00:57.786054   58854 crio.go:514] all images are preloaded for cri-o runtime.
	I0704 00:00:57.786081   58854 crio.go:433] Images already preloaded, skipping extraction
	I0704 00:00:57.786137   58854 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:00:57.825623   58854 crio.go:514] all images are preloaded for cri-o runtime.
	I0704 00:00:57.825646   58854 cache_images.go:84] Images are preloaded, skipping loading
	I0704 00:00:57.825656   58854 kubeadm.go:928] updating node { 192.168.39.204 8443 v1.30.2 crio true true} ...
	I0704 00:00:57.825766   58854 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-652205 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.204
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:kubernetes-upgrade-652205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
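The kubelet unit fragment above is what minikube templates for this node; it is installed as a systemd drop-in for kubelet.service (the exact drop-in path is not shown in the log) and takes effect after a daemon-reload. To see the rendered unit on the node:

    # Inspect the unit plus its drop-ins, then reload so the new ExecStart takes effect (drop-in path assumed).
    systemctl cat kubelet
    sudo systemctl daemon-reload && sudo systemctl restart kubelet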
	I0704 00:00:57.825836   58854 ssh_runner.go:195] Run: crio config
	I0704 00:00:57.880377   58854 cni.go:84] Creating CNI manager for ""
	I0704 00:00:57.880401   58854 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:00:57.880414   58854 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:00:57.880433   58854 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.204 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-652205 NodeName:kubernetes-upgrade-652205 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.204"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:00:57.880560   58854 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.204
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-652205"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.204
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.204"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
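The block above is the multi-document kubeadm configuration (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube renders before copying it to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. As a reading aid only, a minimal stand-alone Go sketch like the following could list the kind of each document in that file; the path is copied from the log and the helper itself is purely illustrative, not part of minikube or of this test suite.

// kindscan.go: print the "kind:" of each document in a multi-document
// kubeadm config file such as the one rendered in the log above.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// Path taken from the log above; adjust as needed.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	doc := 1
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "---":
			// A bare "---" separates YAML documents.
			doc++
		case strings.HasPrefix(line, "kind:"):
			fmt.Printf("document %d: %s\n", doc, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}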
	
	I0704 00:00:57.880614   58854 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:00:57.891052   58854 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:00:57.891152   58854 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:00:57.901724   58854 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0704 00:00:57.921082   58854 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:00:57.940279   58854 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0704 00:00:57.958757   58854 ssh_runner.go:195] Run: grep 192.168.39.204	control-plane.minikube.internal$ /etc/hosts
	I0704 00:00:57.963340   58854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:00:58.121682   58854 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:00:58.137579   58854 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205 for IP: 192.168.39.204
	I0704 00:00:58.137605   58854 certs.go:194] generating shared ca certs ...
	I0704 00:00:58.137637   58854 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:00:58.137791   58854 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:00:58.137849   58854 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:00:58.137863   58854 certs.go:256] generating profile certs ...
	I0704 00:00:58.137961   58854 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/client.key
	I0704 00:00:58.138011   58854 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/apiserver.key.fbd595f4
	I0704 00:00:58.138042   58854 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/proxy-client.key
	I0704 00:00:58.138143   58854 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:00:58.138173   58854 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:00:58.138180   58854 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:00:58.138213   58854 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:00:58.138237   58854 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:00:58.138259   58854 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:00:58.138299   58854 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:00:58.138861   58854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:00:58.169405   58854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:00:58.200812   58854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:00:58.234763   58854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:00:58.265157   58854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0704 00:00:58.292384   58854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0704 00:00:58.319281   58854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:00:58.347924   58854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0704 00:00:58.377617   58854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:00:58.406546   58854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:00:58.435563   58854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:00:58.464895   58854 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:00:58.483483   58854 ssh_runner.go:195] Run: openssl version
	I0704 00:00:58.489893   58854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:00:58.502539   58854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:00:58.507269   58854 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:00:58.507329   58854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:00:58.513447   58854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:00:58.524693   58854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:00:58.536817   58854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:00:58.541742   58854 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:00:58.541798   58854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:00:58.548673   58854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:00:58.559990   58854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:00:58.572248   58854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:00:58.577476   58854 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:00:58.577554   58854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:00:58.583764   58854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
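The steps above install each CA certificate under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0). A minimal illustrative Go sketch of one such step, shelling out to openssl for the hash exactly as the log does; the paths are copied from the log and this is not minikube code.

// cahash.go: link a CA certificate into /etc/ssl/certs under its subject hash,
// mirroring the `openssl x509 -hash` + `ln -fs` steps shown in the log above.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> %s", link, cert)
}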
	I0704 00:00:58.594583   58854 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:00:58.601268   58854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:00:58.607823   58854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:00:58.614242   58854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:00:58.620615   58854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:00:58.627140   58854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:00:58.633499   58854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
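The series of `openssl x509 -noout -in <cert> -checkend 86400` runs above verifies that each control-plane certificate remains valid for at least another 24 hours before the upgrade proceeds. A small illustrative Go equivalent of one such check, using only crypto/x509 from the standard library; the certificate path is an assumption copied from the log.

// checkend.go: report whether a PEM certificate expires within the next 24h,
// roughly equivalent to `openssl x509 -noout -in <file> -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Path taken from the log above; any of the checked certificates works the same way.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}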
	I0704 00:00:58.640019   58854 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-652205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.2 ClusterName:kubernetes-upgrade-652205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:00:58.640124   58854 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:00:58.640180   58854 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:00:58.685815   58854 cri.go:89] found id: "7d8f3d7d540002f85ee073f317da207f9eb22230261e5f0f56f683cdc5fdbf09"
	I0704 00:00:58.685835   58854 cri.go:89] found id: "c1086a8e9122e598951fdb62ac7afb5cf2d26a5a96bf3d4a09abf4e0a7fd3ad1"
	I0704 00:00:58.685839   58854 cri.go:89] found id: "f82ee8b3f78d0791cc424ae4e8ad78d4269e3927b4f72c887c2979afec53a4a4"
	I0704 00:00:58.685842   58854 cri.go:89] found id: "4f36b4983dba20de3939cb9bfc54865cbafc0e8e165519862e5753a66b11ea73"
	I0704 00:00:58.685855   58854 cri.go:89] found id: "d2981bbac0ec41211816589af8eed23fe91a170483421092f676c42fbe43d954"
	I0704 00:00:58.685858   58854 cri.go:89] found id: "aff85418da6cb0b5b2b9aa0532ae973100f9e1112f9624ccdd1ca7e0e59e7c42"
	I0704 00:00:58.685861   58854 cri.go:89] found id: "26e0be69c532149bda9586f06f71a46697bd43549fef3bee88e905fb7bcab1b0"
	I0704 00:00:58.685863   58854 cri.go:89] found id: "4073d5ebd41932e00365e3c64012dac0de7ae9c2132f7033cd9833808fc45d1c"
	I0704 00:00:58.685865   58854 cri.go:89] found id: "c197c10d4448ef75a40ed1fd15a1f12ed688acb08bf76ab5751d256e421e40e7"
	I0704 00:00:58.685870   58854 cri.go:89] found id: "faed07d4344b55bdc366ddbb8460a6cc01c9b55e6a2aa18c245c82662aaab1b8"
	I0704 00:00:58.685873   58854 cri.go:89] found id: "13305e1aed8297cbc3ce336334115ea2402154e354f3263f1be76bc5c3b0abf1"
	I0704 00:00:58.685875   58854 cri.go:89] found id: "e045043ab9ee2e0a2ddac3a7b76ffbf25dcb53d296ce850685761c693cde659c"
	I0704 00:00:58.685878   58854 cri.go:89] found id: "538f3225b584f25a060bc8705e3076085e5813949e5238b84d1571fb731a2c2d"
	I0704 00:00:58.685880   58854 cri.go:89] found id: "dff01d9c3505b33e687a291b2c9de426f1053278f13b28d113888bcad640c0fe"
	I0704 00:00:58.685893   58854 cri.go:89] found id: "802346a9c98dcbc2d527159ca3f5495638da4ffe68db9a007143f9c177b1d46f"
	I0704 00:00:58.685896   58854 cri.go:89] found id: "011802d789286960c0d8a48fbd7452ed1b94bceb46c73502c3f4e27d123fa2b4"
	I0704 00:00:58.685899   58854 cri.go:89] found id: ""
	I0704 00:00:58.685938   58854 ssh_runner.go:195] Run: sudo runc list -f json
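The container IDs listed above come from `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, which minikube runs over SSH to enumerate existing kube-system containers before restarting the cluster. A hedged, stand-alone Go sketch of the same enumeration via os/exec (binary, flags and label are copied from the log; this would run on the node itself and is purely illustrative).

// listids.go: enumerate kube-system container IDs the same way the log above
// does, by shelling out to crictl.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	// crictl prints one container ID per line when --quiet is set.
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}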
	
	
	==> CRI-O <==
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.294104087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720051289294072787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c277ee1-a1ab-495e-945e-545ff8d698c8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.294957714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6cc086c7-ce9c-41b6-a427-ff811771b69a name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.295078280Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6cc086c7-ce9c-41b6-a427-ff811771b69a name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.295664801Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16b8ed97d500f39c233356f10b0cc64f5152e9290a41d51d188660836b7f5c60,PodSandboxId:36eff949748bffedc6ab05735563669901a32587b3dcf3a4c3dfa69fa1e4159f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051285995140775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fbbdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4249e20c-0b2d-4c51-aff7-9fdbaa83a1e9,},Annotations:map[string]string{io.kubernetes.container.hash: 17f03fcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfe1ab4de78053706f70ef55a3d4d696e861889695c3db50eb38efca05c1d45,PodSandboxId:d301a044c1be28f6181ca0dfbcdea87fb3bce1b115f3acb0826fda2ea9589f5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051285953494359,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kb2ks,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6052a7ef-a82d-488e-a5bc-14c679e1392f,},Annotations:map[string]string{io.kubernetes.container.hash: 864cf1b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505fe6a52ae0758a5aeff37e5afc0f3297a5bbbb91f56b79327644ea45ce20c1,PodSandboxId:e2e7479d15b73550eb34b9b555cab856abd08bcf9c6fdfcdba5f39941a41ac8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1720051285944150338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04c1b0c-2e48-4714-9f49-df1bfc40b986,},Annotations:map[string]string{io.kubernetes.container.hash: b028982,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8840ce19b02c813461ce3809d31b9759ccc4e4e299d7bd3469d108cf5eec3e,PodSandboxId:f321d9b124ec99e01ae93a5dae89974f58ed883cc2114c4f38576af66682654b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNIN
G,CreatedAt:1720051281910955359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e804c1969b3d4bf56fb72bef3af583cb,},Annotations:map[string]string{io.kubernetes.container.hash: e58eefda,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40019309fcdf19e69feec906dec40fd1be37fc86bc8c6357d7f219efebb1678a,PodSandboxId:243dc774cb83102f6476e3f8abb0586072f9e44b8dc9d2ccfebdbddc23273633,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,Created
At:1720051259626223561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5p26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 846287a7-b7b1-467f-9197-a761d4ad5bab,},Annotations:map[string]string{io.kubernetes.container.hash: edc4d22e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc002d7ef0cc9f9e9949f484a2777786cd5cef8ca2c97a9d3d0315dcf6c6c97c,PodSandboxId:89f13b065a60d9fbc4ea567041d2d5e0c9eaf9e76473a276ffe3d5bfccdcddd8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051259545559979,Labels
:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0dcbaee876935ec5275e89cb0b088f,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51c3223be392892293d7a689c3d05cf4942b81eb40237a152952bae62e878c7,PodSandboxId:2300f97d2a0d24d8a267a9e3d903c40c93ea738ca1ba237fe4f2bc3fd27ecfe9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051259570910510,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db5c4d8235c3b4ca982b045c458b560,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bef5853b5a4cf91457157db334df7dc11a20e60695c3727db717fc3f15c9642,PodSandboxId:e2e7479d15b73550eb34b9b555cab856abd08bcf9c6fdfcdba5f39941a41ac8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_CREATED,CreatedAt:17200
51259432292933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04c1b0c-2e48-4714-9f49-df1bfc40b986,},Annotations:map[string]string{io.kubernetes.container.hash: b028982,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d9492bf57cba2dce1760530c94a7c0aea6b27f548ec21ebf395953382c97b03,PodSandboxId:d111346045cfe81d0911276c6bc2b8bb3faefae275bda7f5d4c51447667e16ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051259454192691,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 004ad37d3384932f3db83d411e94dfeb,},Annotations:map[string]string{io.kubernetes.container.hash: 41d99f05,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:395b09738378d7272eb792107d79243d3d1254dabb93bb71488d9aebf459dd48,PodSandboxId:f321d9b124ec99e01ae93a5dae89974f58ed883cc2114c4f38576af66682654b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051259320675467,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e804c1969b3d4bf56fb72bef3af583cb,},Annotations:map[string]string{io.kubernetes.container.hash: e58eefda,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8f3d7d540002f85ee073f317da207f9eb22230261e5f0f56f683cdc5fdbf09,PodSandboxId:891dce387dfcbafdb563fc07ca0986c845d671214c1179b4de0063060f3e8194,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720051246291541778,Labels:map[string]string{io.kubernetes.container.nam
e: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kb2ks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6052a7ef-a82d-488e-a5bc-14c679e1392f,},Annotations:map[string]string{io.kubernetes.container.hash: 864cf1b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1086a8e9122e598951fdb62ac7afb5cf2d26a5a96bf3d4a09abf4e0a7fd3ad1,PodSandboxId:6de08445e49c249fb49ee9fd64deddd377373273a8eecd51ece6f915a79a1fb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720051246182536655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fbbdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4249e20c-0b2d-4c51-aff7-9fdbaa83a1e9,},Annotations:map[string]string{io.kubernetes.container.hash: 17f03fcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2981bbac0ec41211816589af8eed23fe91a170483421092f676c42fbe43d954,PodSandboxId:1f009f67688df39c053c9f07ae9de38185c1ab8f42cafe404eb77d6f80841906,Me
tadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720051245841630069,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5p26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 846287a7-b7b1-467f-9197-a761d4ad5bab,},Annotations:map[string]string{io.kubernetes.container.hash: edc4d22e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f36b4983dba20de3939cb9bfc54865cbafc0e8e165519862e5753a66b11ea73,PodSandboxId:6a251fa2f0bcd70e09d755a6817b977a557242b88d2f20a227e5643d383c28ad,Metadata:&ContainerMetadata{Name:kub
e-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720051245853729163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0dcbaee876935ec5275e89cb0b088f,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e0be69c532149bda9586f06f71a46697bd43549fef3bee88e905fb7bcab1b0,PodSandboxId:86d0d7b5cd0e822c7005f3ae1cef30dffc08458c877d2de4f03b42eba5bc9f21,Metadata:&ContainerMetadata{Name:kube-contr
oller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720051245697816966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db5c4d8235c3b4ca982b045c458b560,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4073d5ebd41932e00365e3c64012dac0de7ae9c2132f7033cd9833808fc45d1c,PodSandboxId:1484b87c9c612c10b968e13e9580e7543f762a7fa52e88b8a5ecff3a17e9d715,Metadata:&ContainerMet
adata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720051245243773197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 004ad37d3384932f3db83d411e94dfeb,},Annotations:map[string]string{io.kubernetes.container.hash: 41d99f05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6cc086c7-ce9c-41b6-a427-ff811771b69a name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.377532944Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d8344cc-e31c-460d-9df1-c821ba22e724 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.377650921Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d8344cc-e31c-460d-9df1-c821ba22e724 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.379609671Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0bf600ef-1a1f-4669-8577-67b132cd75c6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.380759031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720051289379993636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0bf600ef-1a1f-4669-8577-67b132cd75c6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.381773058Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=30b64c3b-ce5b-4909-bcbb-b566ef38fa48 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.381856834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30b64c3b-ce5b-4909-bcbb-b566ef38fa48 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.382223136Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16b8ed97d500f39c233356f10b0cc64f5152e9290a41d51d188660836b7f5c60,PodSandboxId:36eff949748bffedc6ab05735563669901a32587b3dcf3a4c3dfa69fa1e4159f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051285995140775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fbbdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4249e20c-0b2d-4c51-aff7-9fdbaa83a1e9,},Annotations:map[string]string{io.kubernetes.container.hash: 17f03fcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfe1ab4de78053706f70ef55a3d4d696e861889695c3db50eb38efca05c1d45,PodSandboxId:d301a044c1be28f6181ca0dfbcdea87fb3bce1b115f3acb0826fda2ea9589f5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051285953494359,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kb2ks,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6052a7ef-a82d-488e-a5bc-14c679e1392f,},Annotations:map[string]string{io.kubernetes.container.hash: 864cf1b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505fe6a52ae0758a5aeff37e5afc0f3297a5bbbb91f56b79327644ea45ce20c1,PodSandboxId:e2e7479d15b73550eb34b9b555cab856abd08bcf9c6fdfcdba5f39941a41ac8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1720051285944150338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04c1b0c-2e48-4714-9f49-df1bfc40b986,},Annotations:map[string]string{io.kubernetes.container.hash: b028982,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8840ce19b02c813461ce3809d31b9759ccc4e4e299d7bd3469d108cf5eec3e,PodSandboxId:f321d9b124ec99e01ae93a5dae89974f58ed883cc2114c4f38576af66682654b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNIN
G,CreatedAt:1720051281910955359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e804c1969b3d4bf56fb72bef3af583cb,},Annotations:map[string]string{io.kubernetes.container.hash: e58eefda,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40019309fcdf19e69feec906dec40fd1be37fc86bc8c6357d7f219efebb1678a,PodSandboxId:243dc774cb83102f6476e3f8abb0586072f9e44b8dc9d2ccfebdbddc23273633,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,Created
At:1720051259626223561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5p26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 846287a7-b7b1-467f-9197-a761d4ad5bab,},Annotations:map[string]string{io.kubernetes.container.hash: edc4d22e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc002d7ef0cc9f9e9949f484a2777786cd5cef8ca2c97a9d3d0315dcf6c6c97c,PodSandboxId:89f13b065a60d9fbc4ea567041d2d5e0c9eaf9e76473a276ffe3d5bfccdcddd8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051259545559979,Labels
:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0dcbaee876935ec5275e89cb0b088f,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51c3223be392892293d7a689c3d05cf4942b81eb40237a152952bae62e878c7,PodSandboxId:2300f97d2a0d24d8a267a9e3d903c40c93ea738ca1ba237fe4f2bc3fd27ecfe9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051259570910510,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db5c4d8235c3b4ca982b045c458b560,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bef5853b5a4cf91457157db334df7dc11a20e60695c3727db717fc3f15c9642,PodSandboxId:e2e7479d15b73550eb34b9b555cab856abd08bcf9c6fdfcdba5f39941a41ac8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_CREATED,CreatedAt:17200
51259432292933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04c1b0c-2e48-4714-9f49-df1bfc40b986,},Annotations:map[string]string{io.kubernetes.container.hash: b028982,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d9492bf57cba2dce1760530c94a7c0aea6b27f548ec21ebf395953382c97b03,PodSandboxId:d111346045cfe81d0911276c6bc2b8bb3faefae275bda7f5d4c51447667e16ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051259454192691,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 004ad37d3384932f3db83d411e94dfeb,},Annotations:map[string]string{io.kubernetes.container.hash: 41d99f05,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:395b09738378d7272eb792107d79243d3d1254dabb93bb71488d9aebf459dd48,PodSandboxId:f321d9b124ec99e01ae93a5dae89974f58ed883cc2114c4f38576af66682654b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051259320675467,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e804c1969b3d4bf56fb72bef3af583cb,},Annotations:map[string]string{io.kubernetes.container.hash: e58eefda,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8f3d7d540002f85ee073f317da207f9eb22230261e5f0f56f683cdc5fdbf09,PodSandboxId:891dce387dfcbafdb563fc07ca0986c845d671214c1179b4de0063060f3e8194,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720051246291541778,Labels:map[string]string{io.kubernetes.container.nam
e: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kb2ks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6052a7ef-a82d-488e-a5bc-14c679e1392f,},Annotations:map[string]string{io.kubernetes.container.hash: 864cf1b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1086a8e9122e598951fdb62ac7afb5cf2d26a5a96bf3d4a09abf4e0a7fd3ad1,PodSandboxId:6de08445e49c249fb49ee9fd64deddd377373273a8eecd51ece6f915a79a1fb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720051246182536655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fbbdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4249e20c-0b2d-4c51-aff7-9fdbaa83a1e9,},Annotations:map[string]string{io.kubernetes.container.hash: 17f03fcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2981bbac0ec41211816589af8eed23fe91a170483421092f676c42fbe43d954,PodSandboxId:1f009f67688df39c053c9f07ae9de38185c1ab8f42cafe404eb77d6f80841906,Me
tadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720051245841630069,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5p26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 846287a7-b7b1-467f-9197-a761d4ad5bab,},Annotations:map[string]string{io.kubernetes.container.hash: edc4d22e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f36b4983dba20de3939cb9bfc54865cbafc0e8e165519862e5753a66b11ea73,PodSandboxId:6a251fa2f0bcd70e09d755a6817b977a557242b88d2f20a227e5643d383c28ad,Metadata:&ContainerMetadata{Name:kub
e-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720051245853729163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0dcbaee876935ec5275e89cb0b088f,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e0be69c532149bda9586f06f71a46697bd43549fef3bee88e905fb7bcab1b0,PodSandboxId:86d0d7b5cd0e822c7005f3ae1cef30dffc08458c877d2de4f03b42eba5bc9f21,Metadata:&ContainerMetadata{Name:kube-contr
oller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720051245697816966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db5c4d8235c3b4ca982b045c458b560,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4073d5ebd41932e00365e3c64012dac0de7ae9c2132f7033cd9833808fc45d1c,PodSandboxId:1484b87c9c612c10b968e13e9580e7543f762a7fa52e88b8a5ecff3a17e9d715,Metadata:&ContainerMet
adata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720051245243773197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 004ad37d3384932f3db83d411e94dfeb,},Annotations:map[string]string{io.kubernetes.container.hash: 41d99f05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=30b64c3b-ce5b-4909-bcbb-b566ef38fa48 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.454774983Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c449485-882d-4468-a2c3-283a14b269a4 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.454889553Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c449485-882d-4468-a2c3-283a14b269a4 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.457185450Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e527db6c-d009-4331-a39d-625cb745c471 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.457850840Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720051289457808338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e527db6c-d009-4331-a39d-625cb745c471 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.459013159Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f955bf4-505f-4f6b-b104-12141186fa63 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.459101476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f955bf4-505f-4f6b-b104-12141186fa63 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.459736536Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16b8ed97d500f39c233356f10b0cc64f5152e9290a41d51d188660836b7f5c60,PodSandboxId:36eff949748bffedc6ab05735563669901a32587b3dcf3a4c3dfa69fa1e4159f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051285995140775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fbbdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4249e20c-0b2d-4c51-aff7-9fdbaa83a1e9,},Annotations:map[string]string{io.kubernetes.container.hash: 17f03fcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfe1ab4de78053706f70ef55a3d4d696e861889695c3db50eb38efca05c1d45,PodSandboxId:d301a044c1be28f6181ca0dfbcdea87fb3bce1b115f3acb0826fda2ea9589f5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051285953494359,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kb2ks,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6052a7ef-a82d-488e-a5bc-14c679e1392f,},Annotations:map[string]string{io.kubernetes.container.hash: 864cf1b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505fe6a52ae0758a5aeff37e5afc0f3297a5bbbb91f56b79327644ea45ce20c1,PodSandboxId:e2e7479d15b73550eb34b9b555cab856abd08bcf9c6fdfcdba5f39941a41ac8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1720051285944150338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04c1b0c-2e48-4714-9f49-df1bfc40b986,},Annotations:map[string]string{io.kubernetes.container.hash: b028982,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8840ce19b02c813461ce3809d31b9759ccc4e4e299d7bd3469d108cf5eec3e,PodSandboxId:f321d9b124ec99e01ae93a5dae89974f58ed883cc2114c4f38576af66682654b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNIN
G,CreatedAt:1720051281910955359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e804c1969b3d4bf56fb72bef3af583cb,},Annotations:map[string]string{io.kubernetes.container.hash: e58eefda,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40019309fcdf19e69feec906dec40fd1be37fc86bc8c6357d7f219efebb1678a,PodSandboxId:243dc774cb83102f6476e3f8abb0586072f9e44b8dc9d2ccfebdbddc23273633,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,Created
At:1720051259626223561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5p26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 846287a7-b7b1-467f-9197-a761d4ad5bab,},Annotations:map[string]string{io.kubernetes.container.hash: edc4d22e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc002d7ef0cc9f9e9949f484a2777786cd5cef8ca2c97a9d3d0315dcf6c6c97c,PodSandboxId:89f13b065a60d9fbc4ea567041d2d5e0c9eaf9e76473a276ffe3d5bfccdcddd8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051259545559979,Labels
:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0dcbaee876935ec5275e89cb0b088f,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51c3223be392892293d7a689c3d05cf4942b81eb40237a152952bae62e878c7,PodSandboxId:2300f97d2a0d24d8a267a9e3d903c40c93ea738ca1ba237fe4f2bc3fd27ecfe9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051259570910510,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db5c4d8235c3b4ca982b045c458b560,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bef5853b5a4cf91457157db334df7dc11a20e60695c3727db717fc3f15c9642,PodSandboxId:e2e7479d15b73550eb34b9b555cab856abd08bcf9c6fdfcdba5f39941a41ac8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_CREATED,CreatedAt:17200
51259432292933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04c1b0c-2e48-4714-9f49-df1bfc40b986,},Annotations:map[string]string{io.kubernetes.container.hash: b028982,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d9492bf57cba2dce1760530c94a7c0aea6b27f548ec21ebf395953382c97b03,PodSandboxId:d111346045cfe81d0911276c6bc2b8bb3faefae275bda7f5d4c51447667e16ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051259454192691,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 004ad37d3384932f3db83d411e94dfeb,},Annotations:map[string]string{io.kubernetes.container.hash: 41d99f05,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:395b09738378d7272eb792107d79243d3d1254dabb93bb71488d9aebf459dd48,PodSandboxId:f321d9b124ec99e01ae93a5dae89974f58ed883cc2114c4f38576af66682654b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051259320675467,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e804c1969b3d4bf56fb72bef3af583cb,},Annotations:map[string]string{io.kubernetes.container.hash: e58eefda,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8f3d7d540002f85ee073f317da207f9eb22230261e5f0f56f683cdc5fdbf09,PodSandboxId:891dce387dfcbafdb563fc07ca0986c845d671214c1179b4de0063060f3e8194,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720051246291541778,Labels:map[string]string{io.kubernetes.container.nam
e: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kb2ks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6052a7ef-a82d-488e-a5bc-14c679e1392f,},Annotations:map[string]string{io.kubernetes.container.hash: 864cf1b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1086a8e9122e598951fdb62ac7afb5cf2d26a5a96bf3d4a09abf4e0a7fd3ad1,PodSandboxId:6de08445e49c249fb49ee9fd64deddd377373273a8eecd51ece6f915a79a1fb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720051246182536655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fbbdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4249e20c-0b2d-4c51-aff7-9fdbaa83a1e9,},Annotations:map[string]string{io.kubernetes.container.hash: 17f03fcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2981bbac0ec41211816589af8eed23fe91a170483421092f676c42fbe43d954,PodSandboxId:1f009f67688df39c053c9f07ae9de38185c1ab8f42cafe404eb77d6f80841906,Me
tadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720051245841630069,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5p26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 846287a7-b7b1-467f-9197-a761d4ad5bab,},Annotations:map[string]string{io.kubernetes.container.hash: edc4d22e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f36b4983dba20de3939cb9bfc54865cbafc0e8e165519862e5753a66b11ea73,PodSandboxId:6a251fa2f0bcd70e09d755a6817b977a557242b88d2f20a227e5643d383c28ad,Metadata:&ContainerMetadata{Name:kub
e-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720051245853729163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0dcbaee876935ec5275e89cb0b088f,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e0be69c532149bda9586f06f71a46697bd43549fef3bee88e905fb7bcab1b0,PodSandboxId:86d0d7b5cd0e822c7005f3ae1cef30dffc08458c877d2de4f03b42eba5bc9f21,Metadata:&ContainerMetadata{Name:kube-contr
oller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720051245697816966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db5c4d8235c3b4ca982b045c458b560,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4073d5ebd41932e00365e3c64012dac0de7ae9c2132f7033cd9833808fc45d1c,PodSandboxId:1484b87c9c612c10b968e13e9580e7543f762a7fa52e88b8a5ecff3a17e9d715,Metadata:&ContainerMet
adata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720051245243773197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 004ad37d3384932f3db83d411e94dfeb,},Annotations:map[string]string{io.kubernetes.container.hash: 41d99f05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f955bf4-505f-4f6b-b104-12141186fa63 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.514208752Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2f63cfbf-9a73-4f96-b85a-202010a40b77 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.514312722Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2f63cfbf-9a73-4f96-b85a-202010a40b77 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.515612658Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78b6e849-75aa-41f9-9e33-124c5514896c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.516028737Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720051289515996057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78b6e849-75aa-41f9-9e33-124c5514896c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.516814685Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a87895b-3f7c-40ce-bad6-d998e53cd2a8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.516871926Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a87895b-3f7c-40ce-bad6-d998e53cd2a8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:01:29 kubernetes-upgrade-652205 crio[3035]: time="2024-07-04 00:01:29.517235002Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16b8ed97d500f39c233356f10b0cc64f5152e9290a41d51d188660836b7f5c60,PodSandboxId:36eff949748bffedc6ab05735563669901a32587b3dcf3a4c3dfa69fa1e4159f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051285995140775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fbbdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4249e20c-0b2d-4c51-aff7-9fdbaa83a1e9,},Annotations:map[string]string{io.kubernetes.container.hash: 17f03fcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfe1ab4de78053706f70ef55a3d4d696e861889695c3db50eb38efca05c1d45,PodSandboxId:d301a044c1be28f6181ca0dfbcdea87fb3bce1b115f3acb0826fda2ea9589f5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051285953494359,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kb2ks,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6052a7ef-a82d-488e-a5bc-14c679e1392f,},Annotations:map[string]string{io.kubernetes.container.hash: 864cf1b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505fe6a52ae0758a5aeff37e5afc0f3297a5bbbb91f56b79327644ea45ce20c1,PodSandboxId:e2e7479d15b73550eb34b9b555cab856abd08bcf9c6fdfcdba5f39941a41ac8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1720051285944150338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04c1b0c-2e48-4714-9f49-df1bfc40b986,},Annotations:map[string]string{io.kubernetes.container.hash: b028982,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8840ce19b02c813461ce3809d31b9759ccc4e4e299d7bd3469d108cf5eec3e,PodSandboxId:f321d9b124ec99e01ae93a5dae89974f58ed883cc2114c4f38576af66682654b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNIN
G,CreatedAt:1720051281910955359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e804c1969b3d4bf56fb72bef3af583cb,},Annotations:map[string]string{io.kubernetes.container.hash: e58eefda,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40019309fcdf19e69feec906dec40fd1be37fc86bc8c6357d7f219efebb1678a,PodSandboxId:243dc774cb83102f6476e3f8abb0586072f9e44b8dc9d2ccfebdbddc23273633,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,Created
At:1720051259626223561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5p26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 846287a7-b7b1-467f-9197-a761d4ad5bab,},Annotations:map[string]string{io.kubernetes.container.hash: edc4d22e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc002d7ef0cc9f9e9949f484a2777786cd5cef8ca2c97a9d3d0315dcf6c6c97c,PodSandboxId:89f13b065a60d9fbc4ea567041d2d5e0c9eaf9e76473a276ffe3d5bfccdcddd8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051259545559979,Labels
:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0dcbaee876935ec5275e89cb0b088f,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51c3223be392892293d7a689c3d05cf4942b81eb40237a152952bae62e878c7,PodSandboxId:2300f97d2a0d24d8a267a9e3d903c40c93ea738ca1ba237fe4f2bc3fd27ecfe9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051259570910510,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db5c4d8235c3b4ca982b045c458b560,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bef5853b5a4cf91457157db334df7dc11a20e60695c3727db717fc3f15c9642,PodSandboxId:e2e7479d15b73550eb34b9b555cab856abd08bcf9c6fdfcdba5f39941a41ac8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_CREATED,CreatedAt:17200
51259432292933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04c1b0c-2e48-4714-9f49-df1bfc40b986,},Annotations:map[string]string{io.kubernetes.container.hash: b028982,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d9492bf57cba2dce1760530c94a7c0aea6b27f548ec21ebf395953382c97b03,PodSandboxId:d111346045cfe81d0911276c6bc2b8bb3faefae275bda7f5d4c51447667e16ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051259454192691,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 004ad37d3384932f3db83d411e94dfeb,},Annotations:map[string]string{io.kubernetes.container.hash: 41d99f05,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:395b09738378d7272eb792107d79243d3d1254dabb93bb71488d9aebf459dd48,PodSandboxId:f321d9b124ec99e01ae93a5dae89974f58ed883cc2114c4f38576af66682654b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051259320675467,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e804c1969b3d4bf56fb72bef3af583cb,},Annotations:map[string]string{io.kubernetes.container.hash: e58eefda,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8f3d7d540002f85ee073f317da207f9eb22230261e5f0f56f683cdc5fdbf09,PodSandboxId:891dce387dfcbafdb563fc07ca0986c845d671214c1179b4de0063060f3e8194,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720051246291541778,Labels:map[string]string{io.kubernetes.container.nam
e: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kb2ks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6052a7ef-a82d-488e-a5bc-14c679e1392f,},Annotations:map[string]string{io.kubernetes.container.hash: 864cf1b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1086a8e9122e598951fdb62ac7afb5cf2d26a5a96bf3d4a09abf4e0a7fd3ad1,PodSandboxId:6de08445e49c249fb49ee9fd64deddd377373273a8eecd51ece6f915a79a1fb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720051246182536655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fbbdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4249e20c-0b2d-4c51-aff7-9fdbaa83a1e9,},Annotations:map[string]string{io.kubernetes.container.hash: 17f03fcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2981bbac0ec41211816589af8eed23fe91a170483421092f676c42fbe43d954,PodSandboxId:1f009f67688df39c053c9f07ae9de38185c1ab8f42cafe404eb77d6f80841906,Me
tadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720051245841630069,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5p26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 846287a7-b7b1-467f-9197-a761d4ad5bab,},Annotations:map[string]string{io.kubernetes.container.hash: edc4d22e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f36b4983dba20de3939cb9bfc54865cbafc0e8e165519862e5753a66b11ea73,PodSandboxId:6a251fa2f0bcd70e09d755a6817b977a557242b88d2f20a227e5643d383c28ad,Metadata:&ContainerMetadata{Name:kub
e-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720051245853729163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0dcbaee876935ec5275e89cb0b088f,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e0be69c532149bda9586f06f71a46697bd43549fef3bee88e905fb7bcab1b0,PodSandboxId:86d0d7b5cd0e822c7005f3ae1cef30dffc08458c877d2de4f03b42eba5bc9f21,Metadata:&ContainerMetadata{Name:kube-contr
oller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720051245697816966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db5c4d8235c3b4ca982b045c458b560,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4073d5ebd41932e00365e3c64012dac0de7ae9c2132f7033cd9833808fc45d1c,PodSandboxId:1484b87c9c612c10b968e13e9580e7543f762a7fa52e88b8a5ecff3a17e9d715,Metadata:&ContainerMet
adata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720051245243773197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-652205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 004ad37d3384932f3db83d411e94dfeb,},Annotations:map[string]string{io.kubernetes.container.hash: 41d99f05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a87895b-3f7c-40ce-bad6-d998e53cd2a8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	16b8ed97d500f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   36eff949748bf       coredns-7db6d8ff4d-fbbdl
	9bfe1ab4de780       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   d301a044c1be2       coredns-7db6d8ff4d-kb2ks
	505fe6a52ae07       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   e2e7479d15b73       storage-provisioner
	4d8840ce19b02       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   7 seconds ago       Running             kube-apiserver            3                   f321d9b124ec9       kube-apiserver-kubernetes-upgrade-652205
	40019309fcdf1       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   30 seconds ago      Running             kube-proxy                2                   243dc774cb831       kube-proxy-l5p26
	a51c3223be392       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   30 seconds ago      Running             kube-controller-manager   2                   2300f97d2a0d2       kube-controller-manager-kubernetes-upgrade-652205
	cc002d7ef0cc9       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   30 seconds ago      Running             kube-scheduler            2                   89f13b065a60d       kube-scheduler-kubernetes-upgrade-652205
	7d9492bf57cba       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   30 seconds ago      Running             etcd                      2                   d111346045cfe       etcd-kubernetes-upgrade-652205
	5bef5853b5a4c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   30 seconds ago      Created             storage-provisioner       2                   e2e7479d15b73       storage-provisioner
	395b09738378d       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   30 seconds ago      Exited              kube-apiserver            2                   f321d9b124ec9       kube-apiserver-kubernetes-upgrade-652205
	7d8f3d7d54000       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   43 seconds ago      Exited              coredns                   1                   891dce387dfcb       coredns-7db6d8ff4d-kb2ks
	c1086a8e9122e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   43 seconds ago      Exited              coredns                   1                   6de08445e49c2       coredns-7db6d8ff4d-fbbdl
	4f36b4983dba2       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   43 seconds ago      Exited              kube-scheduler            1                   6a251fa2f0bcd       kube-scheduler-kubernetes-upgrade-652205
	d2981bbac0ec4       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   43 seconds ago      Exited              kube-proxy                1                   1f009f67688df       kube-proxy-l5p26
	26e0be69c5321       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   43 seconds ago      Exited              kube-controller-manager   1                   86d0d7b5cd0e8       kube-controller-manager-kubernetes-upgrade-652205
	4073d5ebd4193       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   44 seconds ago      Exited              etcd                      1                   1484b87c9c612       etcd-kubernetes-upgrade-652205
	
	
	==> coredns [16b8ed97d500f39c233356f10b0cc64f5152e9290a41d51d188660836b7f5c60] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [7d8f3d7d540002f85ee073f317da207f9eb22230261e5f0f56f683cdc5fdbf09] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9bfe1ab4de78053706f70ef55a3d4d696e861889695c3db50eb38efca05c1d45] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [c1086a8e9122e598951fdb62ac7afb5cf2d26a5a96bf3d4a09abf4e0a7fd3ad1] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-652205
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-652205
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-652205
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Jul 2024 00:01:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Jul 2024 00:01:25 +0000   Wed, 03 Jul 2024 23:59:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Jul 2024 00:01:25 +0000   Wed, 03 Jul 2024 23:59:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Jul 2024 00:01:25 +0000   Wed, 03 Jul 2024 23:59:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Jul 2024 00:01:25 +0000   Wed, 03 Jul 2024 23:59:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.204
	  Hostname:    kubernetes-upgrade-652205
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 233730b2c7e54962a8d63385cf8080fc
	  System UUID:                233730b2-c7e5-4962-a8d6-3385cf8080fc
	  Boot ID:                    0af85acf-7f0d-4887-8997-77e90f011b13
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-fbbdl                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     77s
	  kube-system                 coredns-7db6d8ff4d-kb2ks                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     77s
	  kube-system                 etcd-kubernetes-upgrade-652205                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         91s
	  kube-system                 kube-apiserver-kubernetes-upgrade-652205             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-652205    200m (10%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-l5p26                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-kubernetes-upgrade-652205             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 75s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 98s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  98s (x8 over 98s)  kubelet          Node kubernetes-upgrade-652205 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s (x8 over 98s)  kubelet          Node kubernetes-upgrade-652205 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s (x7 over 98s)  kubelet          Node kubernetes-upgrade-652205 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  98s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           78s                node-controller  Node kubernetes-upgrade-652205 event: Registered Node kubernetes-upgrade-652205 in Controller
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.216787] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.071794] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.087085] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.202972] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.140563] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.329433] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +4.582027] systemd-fstab-generator[737]: Ignoring "noauto" option for root device
	[  +0.066663] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.978986] systemd-fstab-generator[861]: Ignoring "noauto" option for root device
	[  +8.932390] systemd-fstab-generator[1250]: Ignoring "noauto" option for root device
	[  +0.080703] kauditd_printk_skb: 97 callbacks suppressed
	[Jul 4 00:00] kauditd_printk_skb: 21 callbacks suppressed
	[ +31.086093] kauditd_printk_skb: 77 callbacks suppressed
	[  +0.311575] systemd-fstab-generator[2272]: Ignoring "noauto" option for root device
	[  +0.252100] systemd-fstab-generator[2301]: Ignoring "noauto" option for root device
	[  +0.536151] systemd-fstab-generator[2548]: Ignoring "noauto" option for root device
	[  +0.245206] systemd-fstab-generator[2655]: Ignoring "noauto" option for root device
	[  +1.160988] systemd-fstab-generator[3013]: Ignoring "noauto" option for root device
	[ +11.019356] systemd-fstab-generator[3344]: Ignoring "noauto" option for root device
	[  +0.099901] kauditd_printk_skb: 206 callbacks suppressed
	[Jul 4 00:01] systemd-fstab-generator[4101]: Ignoring "noauto" option for root device
	[ +19.522033] kauditd_printk_skb: 148 callbacks suppressed
	[  +6.396695] systemd-fstab-generator[4546]: Ignoring "noauto" option for root device
	
	
	==> etcd [4073d5ebd41932e00365e3c64012dac0de7ae9c2132f7033cd9833808fc45d1c] <==
	{"level":"info","ts":"2024-07-04T00:00:45.813288Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"13.826775ms"}
	{"level":"info","ts":"2024-07-04T00:00:45.841188Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-04T00:00:45.884948Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"ae97a28c245b4e6c","local-member-id":"7dd4abf80c2dae76","commit-index":414}
	{"level":"info","ts":"2024-07-04T00:00:45.885107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dd4abf80c2dae76 switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-04T00:00:45.885161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dd4abf80c2dae76 became follower at term 2"}
	{"level":"info","ts":"2024-07-04T00:00:45.885179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 7dd4abf80c2dae76 [peers: [], term: 2, commit: 414, applied: 0, lastindex: 414, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-04T00:00:45.899533Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-04T00:00:45.946603Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":400}
	{"level":"info","ts":"2024-07-04T00:00:46.008909Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-04T00:00:46.021686Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"7dd4abf80c2dae76","timeout":"7s"}
	{"level":"info","ts":"2024-07-04T00:00:46.021974Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"7dd4abf80c2dae76"}
	{"level":"info","ts":"2024-07-04T00:00:46.022027Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"7dd4abf80c2dae76","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-04T00:00:46.026681Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-04T00:00:46.026843Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-04T00:00:46.026878Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-04T00:00:46.026887Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-04T00:00:46.027159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dd4abf80c2dae76 switched to configuration voters=(9067061031648210550)"}
	{"level":"info","ts":"2024-07-04T00:00:46.027216Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ae97a28c245b4e6c","local-member-id":"7dd4abf80c2dae76","added-peer-id":"7dd4abf80c2dae76","added-peer-peer-urls":["https://192.168.39.204:2380"]}
	{"level":"info","ts":"2024-07-04T00:00:46.027304Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ae97a28c245b4e6c","local-member-id":"7dd4abf80c2dae76","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-04T00:00:46.027327Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-04T00:00:46.041216Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-04T00:00:46.041511Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7dd4abf80c2dae76","initial-advertise-peer-urls":["https://192.168.39.204:2380"],"listen-peer-urls":["https://192.168.39.204:2380"],"advertise-client-urls":["https://192.168.39.204:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.204:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-04T00:00:46.041565Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-04T00:00:46.041691Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.204:2380"}
	{"level":"info","ts":"2024-07-04T00:00:46.041717Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.204:2380"}
	
	
	==> etcd [7d9492bf57cba2dce1760530c94a7c0aea6b27f548ec21ebf395953382c97b03] <==
	{"level":"info","ts":"2024-07-04T00:01:21.940953Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ae97a28c245b4e6c","local-member-id":"7dd4abf80c2dae76","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-04T00:01:21.940983Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-04T00:01:21.953431Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-04T00:01:21.953512Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.204:2380"}
	{"level":"info","ts":"2024-07-04T00:01:21.954178Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.204:2380"}
	{"level":"info","ts":"2024-07-04T00:01:21.962731Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-04T00:01:21.962671Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7dd4abf80c2dae76","initial-advertise-peer-urls":["https://192.168.39.204:2380"],"listen-peer-urls":["https://192.168.39.204:2380"],"advertise-client-urls":["https://192.168.39.204:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.204:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-04T00:01:23.545555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dd4abf80c2dae76 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-04T00:01:23.54564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dd4abf80c2dae76 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-04T00:01:23.545684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dd4abf80c2dae76 received MsgPreVoteResp from 7dd4abf80c2dae76 at term 2"}
	{"level":"info","ts":"2024-07-04T00:01:23.545705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dd4abf80c2dae76 became candidate at term 3"}
	{"level":"info","ts":"2024-07-04T00:01:23.545713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dd4abf80c2dae76 received MsgVoteResp from 7dd4abf80c2dae76 at term 3"}
	{"level":"info","ts":"2024-07-04T00:01:23.545725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dd4abf80c2dae76 became leader at term 3"}
	{"level":"info","ts":"2024-07-04T00:01:23.545735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7dd4abf80c2dae76 elected leader 7dd4abf80c2dae76 at term 3"}
	{"level":"info","ts":"2024-07-04T00:01:23.631775Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7dd4abf80c2dae76","local-member-attributes":"{Name:kubernetes-upgrade-652205 ClientURLs:[https://192.168.39.204:2379]}","request-path":"/0/members/7dd4abf80c2dae76/attributes","cluster-id":"ae97a28c245b4e6c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-04T00:01:23.631852Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-04T00:01:23.632461Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-04T00:01:23.632763Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-04T00:01:23.632893Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-04T00:01:23.634941Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-04T00:01:23.636927Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.204:2379"}
	{"level":"info","ts":"2024-07-04T00:01:26.353655Z","caller":"traceutil/trace.go:171","msg":"trace[1482917529] linearizableReadLoop","detail":"{readStateIndex:429; appliedIndex:428; }","duration":"167.876407ms","start":"2024-07-04T00:01:26.185761Z","end":"2024-07-04T00:01:26.353637Z","steps":["trace[1482917529] 'read index received'  (duration: 166.761011ms)","trace[1482917529] 'applied index is now lower than readState.Index'  (duration: 1.114574ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-04T00:01:26.35389Z","caller":"traceutil/trace.go:171","msg":"trace[1796849196] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"170.529379ms","start":"2024-07-04T00:01:26.183337Z","end":"2024-07-04T00:01:26.353866Z","steps":["trace[1796849196] 'process raft request'  (duration: 169.227828ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-04T00:01:26.354016Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.231765ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient\" ","response":"range_response_count:1 size:706"}
	{"level":"info","ts":"2024-07-04T00:01:26.35412Z","caller":"traceutil/trace.go:171","msg":"trace[489187604] range","detail":"{range_begin:/registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient; range_end:; response_count:1; response_revision:412; }","duration":"168.363894ms","start":"2024-07-04T00:01:26.185739Z","end":"2024-07-04T00:01:26.354103Z","steps":["trace[489187604] 'agreement among raft nodes before linearized reading'  (duration: 168.133062ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:01:30 up 2 min,  0 users,  load average: 1.43, 0.54, 0.20
	Linux kubernetes-upgrade-652205 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [395b09738378d7272eb792107d79243d3d1254dabb93bb71488d9aebf459dd48] <==
	I0704 00:01:00.072161       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0704 00:01:00.560569       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0704 00:01:00.561056       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0704 00:01:00.568332       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0704 00:01:00.568424       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0704 00:01:00.568616       1 instance.go:299] Using reconciler: lease
	I0704 00:01:00.569114       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0704 00:01:00.569475       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:01:00.570062       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:01:01.561617       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:01:01.570216       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:01:01.570544       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:01:02.875228       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:01:03.175911       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:01:03.361048       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:01:05.203245       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:01:05.307648       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:01:06.351810       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:01:09.231823       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:01:10.127783       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:01:10.352029       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:01:14.973153       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:01:15.917767       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:01:17.752543       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0704 00:01:20.569823       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [4d8840ce19b02c813461ce3809d31b9759ccc4e4e299d7bd3469d108cf5eec3e] <==
	I0704 00:01:24.998463       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0704 00:01:24.998651       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0704 00:01:25.082158       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0704 00:01:25.082657       1 aggregator.go:165] initial CRD sync complete...
	I0704 00:01:25.082690       1 autoregister_controller.go:141] Starting autoregister controller
	I0704 00:01:25.082696       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0704 00:01:25.082703       1 cache.go:39] Caches are synced for autoregister controller
	I0704 00:01:25.126947       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0704 00:01:25.156282       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0704 00:01:25.156420       1 policy_source.go:224] refreshing policies
	I0704 00:01:25.157695       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0704 00:01:25.160823       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0704 00:01:25.160913       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0704 00:01:25.161765       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0704 00:01:25.162723       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0704 00:01:25.163066       1 shared_informer.go:320] Caches are synced for configmaps
	I0704 00:01:25.163389       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0704 00:01:25.218762       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0704 00:01:25.966883       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0704 00:01:26.169863       1 controller.go:615] quota admission added evaluator for: endpoints
	I0704 00:01:26.911036       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0704 00:01:26.926129       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0704 00:01:26.992043       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0704 00:01:27.092155       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0704 00:01:27.114973       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [26e0be69c532149bda9586f06f71a46697bd43549fef3bee88e905fb7bcab1b0] <==
	
	
	==> kube-controller-manager [a51c3223be392892293d7a689c3d05cf4942b81eb40237a152952bae62e878c7] <==
	I0704 00:01:27.729589       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0704 00:01:27.729610       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0704 00:01:27.729648       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0704 00:01:27.729685       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0704 00:01:27.729710       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0704 00:01:27.729726       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0704 00:01:27.729750       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0704 00:01:27.729765       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0704 00:01:27.729944       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0704 00:01:27.729970       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0704 00:01:27.729984       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0704 00:01:27.729997       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0704 00:01:27.730030       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0704 00:01:27.730051       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0704 00:01:27.730076       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0704 00:01:27.730106       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0704 00:01:27.730121       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0704 00:01:27.730153       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0704 00:01:27.730203       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0704 00:01:27.730286       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0704 00:01:27.730297       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0704 00:01:27.730313       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0704 00:01:27.772786       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0704 00:01:27.772939       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0704 00:01:27.772949       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	
	
	==> kube-proxy [40019309fcdf19e69feec906dec40fd1be37fc86bc8c6357d7f219efebb1678a] <==
	I0704 00:01:26.237861       1 server_linux.go:69] "Using iptables proxy"
	I0704 00:01:26.380550       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.204"]
	I0704 00:01:26.494284       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0704 00:01:26.494398       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0704 00:01:26.494417       1 server_linux.go:165] "Using iptables Proxier"
	I0704 00:01:26.500800       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0704 00:01:26.501118       1 server.go:872] "Version info" version="v1.30.2"
	I0704 00:01:26.501149       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0704 00:01:26.505195       1 config.go:192] "Starting service config controller"
	I0704 00:01:26.506277       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0704 00:01:26.506464       1 config.go:319] "Starting node config controller"
	I0704 00:01:26.506489       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0704 00:01:26.508687       1 config.go:101] "Starting endpoint slice config controller"
	I0704 00:01:26.508720       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0704 00:01:26.607546       1 shared_informer.go:320] Caches are synced for node config
	I0704 00:01:26.607604       1 shared_informer.go:320] Caches are synced for service config
	I0704 00:01:26.608831       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d2981bbac0ec41211816589af8eed23fe91a170483421092f676c42fbe43d954] <==
	
	
	==> kube-scheduler [4f36b4983dba20de3939cb9bfc54865cbafc0e8e165519862e5753a66b11ea73] <==
	
	
	==> kube-scheduler [cc002d7ef0cc9f9e9949f484a2777786cd5cef8ca2c97a9d3d0315dcf6c6c97c] <==
	I0704 00:01:22.553795       1 serving.go:380] Generated self-signed cert in-memory
	W0704 00:01:25.009951       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0704 00:01:25.010053       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0704 00:01:25.010085       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0704 00:01:25.010112       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0704 00:01:25.061558       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0704 00:01:25.061672       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0704 00:01:25.065971       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0704 00:01:25.067580       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0704 00:01:25.069442       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0704 00:01:25.075056       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0704 00:01:25.182123       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 04 00:01:21 kubernetes-upgrade-652205 kubelet[4108]: E0704 00:01:21.577683    4108 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.204:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.204:35670->192.168.39.204:8443: read: connection reset by peer
	Jul 04 00:01:21 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:21.724522    4108 scope.go:117] "RemoveContainer" containerID="26e0be69c532149bda9586f06f71a46697bd43549fef3bee88e905fb7bcab1b0"
	Jul 04 00:01:21 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:21.726125    4108 scope.go:117] "RemoveContainer" containerID="4f36b4983dba20de3939cb9bfc54865cbafc0e8e165519862e5753a66b11ea73"
	Jul 04 00:01:21 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:21.727762    4108 scope.go:117] "RemoveContainer" containerID="4073d5ebd41932e00365e3c64012dac0de7ae9c2132f7033cd9833808fc45d1c"
	Jul 04 00:01:21 kubernetes-upgrade-652205 kubelet[4108]: E0704 00:01:21.740923    4108 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"kubernetes-upgrade-652205\" not found"
	Jul 04 00:01:21 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:21.779156    4108 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-652205"
	Jul 04 00:01:21 kubernetes-upgrade-652205 kubelet[4108]: E0704 00:01:21.784318    4108 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.204:8443: connect: connection refused" node="kubernetes-upgrade-652205"
	Jul 04 00:01:21 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:21.879655    4108 scope.go:117] "RemoveContainer" containerID="395b09738378d7272eb792107d79243d3d1254dabb93bb71488d9aebf459dd48"
	Jul 04 00:01:21 kubernetes-upgrade-652205 kubelet[4108]: E0704 00:01:21.978505    4108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-652205?timeout=10s\": dial tcp 192.168.39.204:8443: connect: connection refused" interval="800ms"
	Jul 04 00:01:23 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:23.386593    4108 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-652205"
	Jul 04 00:01:25 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:25.217300    4108 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-652205"
	Jul 04 00:01:25 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:25.217494    4108 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-652205"
	Jul 04 00:01:25 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:25.219748    4108 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 04 00:01:25 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:25.221305    4108 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 04 00:01:25 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:25.605792    4108 apiserver.go:52] "Watching apiserver"
	Jul 04 00:01:25 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:25.609837    4108 topology_manager.go:215] "Topology Admit Handler" podUID="c04c1b0c-2e48-4714-9f49-df1bfc40b986" podNamespace="kube-system" podName="storage-provisioner"
	Jul 04 00:01:25 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:25.609996    4108 topology_manager.go:215] "Topology Admit Handler" podUID="846287a7-b7b1-467f-9197-a761d4ad5bab" podNamespace="kube-system" podName="kube-proxy-l5p26"
	Jul 04 00:01:25 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:25.610065    4108 topology_manager.go:215] "Topology Admit Handler" podUID="4249e20c-0b2d-4c51-aff7-9fdbaa83a1e9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fbbdl"
	Jul 04 00:01:25 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:25.610170    4108 topology_manager.go:215] "Topology Admit Handler" podUID="6052a7ef-a82d-488e-a5bc-14c679e1392f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kb2ks"
	Jul 04 00:01:25 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:25.615022    4108 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 04 00:01:25 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:25.635426    4108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c04c1b0c-2e48-4714-9f49-df1bfc40b986-tmp\") pod \"storage-provisioner\" (UID: \"c04c1b0c-2e48-4714-9f49-df1bfc40b986\") " pod="kube-system/storage-provisioner"
	Jul 04 00:01:25 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:25.635507    4108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/846287a7-b7b1-467f-9197-a761d4ad5bab-lib-modules\") pod \"kube-proxy-l5p26\" (UID: \"846287a7-b7b1-467f-9197-a761d4ad5bab\") " pod="kube-system/kube-proxy-l5p26"
	Jul 04 00:01:25 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:25.635548    4108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/846287a7-b7b1-467f-9197-a761d4ad5bab-xtables-lock\") pod \"kube-proxy-l5p26\" (UID: \"846287a7-b7b1-467f-9197-a761d4ad5bab\") " pod="kube-system/kube-proxy-l5p26"
	Jul 04 00:01:25 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:25.913100    4108 scope.go:117] "RemoveContainer" containerID="5bef5853b5a4cf91457157db334df7dc11a20e60695c3727db717fc3f15c9642"
	Jul 04 00:01:25 kubernetes-upgrade-652205 kubelet[4108]: I0704 00:01:25.914532    4108 scope.go:117] "RemoveContainer" containerID="d2981bbac0ec41211816589af8eed23fe91a170483421092f676c42fbe43d954"
	
	
	==> storage-provisioner [505fe6a52ae0758a5aeff37e5afc0f3297a5bbbb91f56b79327644ea45ce20c1] <==
	I0704 00:01:26.137102       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0704 00:01:26.155267       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0704 00:01:26.156458       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0704 00:01:26.360967       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0704 00:01:26.362299       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d43ad86c-e9e4-4b93-9a95-2a64364c4883", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-652205_327d417b-0f6e-412a-a1f2-45939df284d9 became leader
	I0704 00:01:26.362723       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-652205_327d417b-0f6e-412a-a1f2-45939df284d9!
	I0704 00:01:26.463511       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-652205_327d417b-0f6e-412a-a1f2-45939df284d9!
	
	
	==> storage-provisioner [5bef5853b5a4cf91457157db334df7dc11a20e60695c3727db717fc3f15c9642] <==
	

-- /stdout --
** stderr ** 
	E0704 00:01:28.826606   59857 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18998-9396/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-652205 -n kubernetes-upgrade-652205
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-652205 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-652205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-652205
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-652205: (1.182955667s)
--- FAIL: TestKubernetesUpgrade (439.39s)
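
The "bufio.Scanner: token too long" error in the stderr capture above comes from Go's bufio.Scanner, which by default refuses any single line longer than bufio.MaxScanTokenSize (64 KiB) — lastStart.txt evidently contains such a line. A minimal sketch of reading a file with very long lines by enlarging the scanner's buffer; the path is hypothetical and this is illustrative, not minikube's actual implementation:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical path; stands in for a log such as lastStart.txt.
	f, err := os.Open("/tmp/lastStart.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default cap is bufio.MaxScanTokenSize (64 KiB); raise it so very
	// long single-line entries do not fail with "token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 16*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}

An alternative with the same effect is reading via bufio.Reader.ReadString('\n'), which has no fixed per-line limit.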

x
+
TestPause/serial/SecondStartNoReconfiguration (67.18s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-672261 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-672261 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.706162001s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-672261] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18998
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-672261" primary control-plane node in "pause-672261" cluster
	* Updating the running kvm2 "pause-672261" VM ...
	* Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-672261" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0703 23:58:11.340788   54840 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:58:11.340904   54840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:58:11.340912   54840 out.go:304] Setting ErrFile to fd 2...
	I0703 23:58:11.340916   54840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:58:11.341124   54840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 23:58:11.341704   54840 out.go:298] Setting JSON to false
	I0703 23:58:11.342658   54840 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6031,"bootTime":1720045060,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:58:11.342721   54840 start.go:139] virtualization: kvm guest
	I0703 23:58:11.344873   54840 out.go:177] * [pause-672261] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 23:58:11.346370   54840 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 23:58:11.346411   54840 notify.go:220] Checking for updates...
	I0703 23:58:11.348870   54840 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:58:11.350098   54840 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:58:11.351296   54840 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:58:11.352581   54840 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 23:58:11.353853   54840 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 23:58:11.355686   54840 config.go:182] Loaded profile config "pause-672261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:58:11.356331   54840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:58:11.356445   54840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:58:11.373029   54840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43407
	I0703 23:58:11.373448   54840 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:58:11.374017   54840 main.go:141] libmachine: Using API Version  1
	I0703 23:58:11.374042   54840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:58:11.374390   54840 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:58:11.374598   54840 main.go:141] libmachine: (pause-672261) Calling .DriverName
	I0703 23:58:11.374906   54840 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:58:11.375351   54840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:58:11.375398   54840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:58:11.390985   54840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0703 23:58:11.391435   54840 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:58:11.392015   54840 main.go:141] libmachine: Using API Version  1
	I0703 23:58:11.392067   54840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:58:11.392523   54840 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:58:11.392780   54840 main.go:141] libmachine: (pause-672261) Calling .DriverName
	I0703 23:58:11.431182   54840 out.go:177] * Using the kvm2 driver based on existing profile
	I0703 23:58:11.432465   54840 start.go:297] selected driver: kvm2
	I0703 23:58:11.432480   54840 start.go:901] validating driver "kvm2" against &{Name:pause-672261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.2 ClusterName:pause-672261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:58:11.432639   54840 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 23:58:11.432985   54840 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:58:11.433065   54840 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 23:58:11.448884   54840 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 23:58:11.449914   54840 cni.go:84] Creating CNI manager for ""
	I0703 23:58:11.449938   54840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 23:58:11.450021   54840 start.go:340] cluster config:
	{Name:pause-672261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:pause-672261 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:58:11.450284   54840 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:58:11.452072   54840 out.go:177] * Starting "pause-672261" primary control-plane node in "pause-672261" cluster
	I0703 23:58:11.453274   54840 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:58:11.453322   54840 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0703 23:58:11.453335   54840 cache.go:56] Caching tarball of preloaded images
	I0703 23:58:11.453428   54840 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:58:11.453445   54840 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 23:58:11.453659   54840 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/pause-672261/config.json ...
	I0703 23:58:11.453928   54840 start.go:360] acquireMachinesLock for pause-672261: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 23:58:21.737026   54840 start.go:364] duration metric: took 10.283064455s to acquireMachinesLock for "pause-672261"
	I0703 23:58:21.737080   54840 start.go:96] Skipping create...Using existing machine configuration
	I0703 23:58:21.737101   54840 fix.go:54] fixHost starting: 
	I0703 23:58:21.737527   54840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:58:21.737578   54840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:58:21.754924   54840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36389
	I0703 23:58:21.755402   54840 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:58:21.756068   54840 main.go:141] libmachine: Using API Version  1
	I0703 23:58:21.756093   54840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:58:21.756437   54840 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:58:21.756627   54840 main.go:141] libmachine: (pause-672261) Calling .DriverName
	I0703 23:58:21.756773   54840 main.go:141] libmachine: (pause-672261) Calling .GetState
	I0703 23:58:21.758496   54840 fix.go:112] recreateIfNeeded on pause-672261: state=Running err=<nil>
	W0703 23:58:21.758521   54840 fix.go:138] unexpected machine state, will restart: <nil>
	I0703 23:58:21.760687   54840 out.go:177] * Updating the running kvm2 "pause-672261" VM ...
	I0703 23:58:21.761829   54840 machine.go:94] provisionDockerMachine start ...
	I0703 23:58:21.761851   54840 main.go:141] libmachine: (pause-672261) Calling .DriverName
	I0703 23:58:21.762091   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHHostname
	I0703 23:58:21.764811   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:21.800754   54840 main.go:141] libmachine: (pause-672261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:df:85", ip: ""} in network mk-pause-672261: {Iface:virbr3 ExpiryTime:2024-07-04 00:57:29 +0000 UTC Type:0 Mac:52:54:00:cc:df:85 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:pause-672261 Clientid:01:52:54:00:cc:df:85}
	I0703 23:58:21.800785   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined IP address 192.168.61.246 and MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:21.801043   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHPort
	I0703 23:58:21.801272   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHKeyPath
	I0703 23:58:21.801437   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHKeyPath
	I0703 23:58:21.801609   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHUsername
	I0703 23:58:21.801795   54840 main.go:141] libmachine: Using SSH client type: native
	I0703 23:58:21.802056   54840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0703 23:58:21.802070   54840 main.go:141] libmachine: About to run SSH command:
	hostname
	I0703 23:58:21.926138   54840 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-672261
	
	I0703 23:58:21.926167   54840 main.go:141] libmachine: (pause-672261) Calling .GetMachineName
	I0703 23:58:21.926531   54840 buildroot.go:166] provisioning hostname "pause-672261"
	I0703 23:58:21.926556   54840 main.go:141] libmachine: (pause-672261) Calling .GetMachineName
	I0703 23:58:21.926751   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHHostname
	I0703 23:58:22.172218   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:22.172623   54840 main.go:141] libmachine: (pause-672261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:df:85", ip: ""} in network mk-pause-672261: {Iface:virbr3 ExpiryTime:2024-07-04 00:57:29 +0000 UTC Type:0 Mac:52:54:00:cc:df:85 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:pause-672261 Clientid:01:52:54:00:cc:df:85}
	I0703 23:58:22.172668   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined IP address 192.168.61.246 and MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:22.172904   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHPort
	I0703 23:58:22.173125   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHKeyPath
	I0703 23:58:22.173295   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHKeyPath
	I0703 23:58:22.173425   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHUsername
	I0703 23:58:22.173598   54840 main.go:141] libmachine: Using SSH client type: native
	I0703 23:58:22.173784   54840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0703 23:58:22.173799   54840 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-672261 && echo "pause-672261" | sudo tee /etc/hostname
	I0703 23:58:22.309662   54840 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-672261
	
	I0703 23:58:22.309692   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHHostname
	I0703 23:58:22.312862   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:22.313321   54840 main.go:141] libmachine: (pause-672261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:df:85", ip: ""} in network mk-pause-672261: {Iface:virbr3 ExpiryTime:2024-07-04 00:57:29 +0000 UTC Type:0 Mac:52:54:00:cc:df:85 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:pause-672261 Clientid:01:52:54:00:cc:df:85}
	I0703 23:58:22.313352   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined IP address 192.168.61.246 and MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:22.313522   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHPort
	I0703 23:58:22.313707   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHKeyPath
	I0703 23:58:22.313855   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHKeyPath
	I0703 23:58:22.313966   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHUsername
	I0703 23:58:22.314125   54840 main.go:141] libmachine: Using SSH client type: native
	I0703 23:58:22.314350   54840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0703 23:58:22.314376   54840 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-672261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-672261/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-672261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 23:58:22.441722   54840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:58:22.441755   54840 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0703 23:58:22.441781   54840 buildroot.go:174] setting up certificates
	I0703 23:58:22.441792   54840 provision.go:84] configureAuth start
	I0703 23:58:22.441803   54840 main.go:141] libmachine: (pause-672261) Calling .GetMachineName
	I0703 23:58:22.442091   54840 main.go:141] libmachine: (pause-672261) Calling .GetIP
	I0703 23:58:22.445311   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:22.445770   54840 main.go:141] libmachine: (pause-672261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:df:85", ip: ""} in network mk-pause-672261: {Iface:virbr3 ExpiryTime:2024-07-04 00:57:29 +0000 UTC Type:0 Mac:52:54:00:cc:df:85 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:pause-672261 Clientid:01:52:54:00:cc:df:85}
	I0703 23:58:22.445799   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined IP address 192.168.61.246 and MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:22.445978   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHHostname
	I0703 23:58:22.448665   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:22.449121   54840 main.go:141] libmachine: (pause-672261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:df:85", ip: ""} in network mk-pause-672261: {Iface:virbr3 ExpiryTime:2024-07-04 00:57:29 +0000 UTC Type:0 Mac:52:54:00:cc:df:85 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:pause-672261 Clientid:01:52:54:00:cc:df:85}
	I0703 23:58:22.449158   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined IP address 192.168.61.246 and MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:22.449466   54840 provision.go:143] copyHostCerts
	I0703 23:58:22.449520   54840 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0703 23:58:22.449532   54840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:58:22.449596   54840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0703 23:58:22.449735   54840 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0703 23:58:22.449746   54840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:58:22.449785   54840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0703 23:58:22.449878   54840 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0703 23:58:22.449886   54840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:58:22.449909   54840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0703 23:58:22.449999   54840 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.pause-672261 san=[127.0.0.1 192.168.61.246 localhost minikube pause-672261]
	I0703 23:58:22.687545   54840 provision.go:177] copyRemoteCerts
	I0703 23:58:22.687599   54840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:58:22.687619   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHHostname
	I0703 23:58:22.690602   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:22.690965   54840 main.go:141] libmachine: (pause-672261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:df:85", ip: ""} in network mk-pause-672261: {Iface:virbr3 ExpiryTime:2024-07-04 00:57:29 +0000 UTC Type:0 Mac:52:54:00:cc:df:85 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:pause-672261 Clientid:01:52:54:00:cc:df:85}
	I0703 23:58:22.691013   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined IP address 192.168.61.246 and MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:22.691299   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHPort
	I0703 23:58:22.691584   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHKeyPath
	I0703 23:58:22.691778   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHUsername
	I0703 23:58:22.691917   54840 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/pause-672261/id_rsa Username:docker}
	I0703 23:58:22.789356   54840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0703 23:58:22.820389   54840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0703 23:58:22.849957   54840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0703 23:58:22.878466   54840 provision.go:87] duration metric: took 436.659647ms to configureAuth
	I0703 23:58:22.878498   54840 buildroot.go:189] setting minikube options for container-runtime
	I0703 23:58:22.878761   54840 config.go:182] Loaded profile config "pause-672261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:58:22.878860   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHHostname
	I0703 23:58:22.881909   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:22.882389   54840 main.go:141] libmachine: (pause-672261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:df:85", ip: ""} in network mk-pause-672261: {Iface:virbr3 ExpiryTime:2024-07-04 00:57:29 +0000 UTC Type:0 Mac:52:54:00:cc:df:85 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:pause-672261 Clientid:01:52:54:00:cc:df:85}
	I0703 23:58:22.882416   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined IP address 192.168.61.246 and MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:22.882596   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHPort
	I0703 23:58:22.882796   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHKeyPath
	I0703 23:58:22.882968   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHKeyPath
	I0703 23:58:22.883151   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHUsername
	I0703 23:58:22.883323   54840 main.go:141] libmachine: Using SSH client type: native
	I0703 23:58:22.883539   54840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0703 23:58:22.883555   54840 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0703 23:58:30.024071   54840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0703 23:58:30.024100   54840 machine.go:97] duration metric: took 8.262256646s to provisionDockerMachine
	I0703 23:58:30.024115   54840 start.go:293] postStartSetup for "pause-672261" (driver="kvm2")
	I0703 23:58:30.024128   54840 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 23:58:30.024150   54840 main.go:141] libmachine: (pause-672261) Calling .DriverName
	I0703 23:58:30.024469   54840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 23:58:30.024506   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHHostname
	I0703 23:58:30.027623   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:30.028011   54840 main.go:141] libmachine: (pause-672261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:df:85", ip: ""} in network mk-pause-672261: {Iface:virbr3 ExpiryTime:2024-07-04 00:57:29 +0000 UTC Type:0 Mac:52:54:00:cc:df:85 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:pause-672261 Clientid:01:52:54:00:cc:df:85}
	I0703 23:58:30.028046   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined IP address 192.168.61.246 and MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:30.028232   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHPort
	I0703 23:58:30.028446   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHKeyPath
	I0703 23:58:30.028623   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHUsername
	I0703 23:58:30.028761   54840 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/pause-672261/id_rsa Username:docker}
	I0703 23:58:30.121361   54840 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 23:58:30.127949   54840 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 23:58:30.127977   54840 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0703 23:58:30.128043   54840 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0703 23:58:30.128135   54840 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0703 23:58:30.128254   54840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0703 23:58:30.141888   54840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:58:30.172511   54840 start.go:296] duration metric: took 148.382992ms for postStartSetup
	I0703 23:58:30.172551   54840 fix.go:56] duration metric: took 8.435460331s for fixHost
	I0703 23:58:30.172574   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHHostname
	I0703 23:58:30.175733   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:30.176090   54840 main.go:141] libmachine: (pause-672261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:df:85", ip: ""} in network mk-pause-672261: {Iface:virbr3 ExpiryTime:2024-07-04 00:57:29 +0000 UTC Type:0 Mac:52:54:00:cc:df:85 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:pause-672261 Clientid:01:52:54:00:cc:df:85}
	I0703 23:58:30.176132   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined IP address 192.168.61.246 and MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:30.176355   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHPort
	I0703 23:58:30.176578   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHKeyPath
	I0703 23:58:30.176722   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHKeyPath
	I0703 23:58:30.176814   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHUsername
	I0703 23:58:30.176931   54840 main.go:141] libmachine: Using SSH client type: native
	I0703 23:58:30.177107   54840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0703 23:58:30.177125   54840 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0703 23:58:30.298181   54840 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051110.291391362
	
	I0703 23:58:30.298205   54840 fix.go:216] guest clock: 1720051110.291391362
	I0703 23:58:30.298215   54840 fix.go:229] Guest: 2024-07-03 23:58:30.291391362 +0000 UTC Remote: 2024-07-03 23:58:30.172556079 +0000 UTC m=+18.869374379 (delta=118.835283ms)
	I0703 23:58:30.298374   54840 fix.go:200] guest clock delta is within tolerance: 118.835283ms
	I0703 23:58:30.298382   54840 start.go:83] releasing machines lock for "pause-672261", held for 8.561322855s
	I0703 23:58:30.298408   54840 main.go:141] libmachine: (pause-672261) Calling .DriverName
	I0703 23:58:30.298671   54840 main.go:141] libmachine: (pause-672261) Calling .GetIP
	I0703 23:58:30.301356   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:30.301806   54840 main.go:141] libmachine: (pause-672261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:df:85", ip: ""} in network mk-pause-672261: {Iface:virbr3 ExpiryTime:2024-07-04 00:57:29 +0000 UTC Type:0 Mac:52:54:00:cc:df:85 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:pause-672261 Clientid:01:52:54:00:cc:df:85}
	I0703 23:58:30.301826   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined IP address 192.168.61.246 and MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:30.301981   54840 main.go:141] libmachine: (pause-672261) Calling .DriverName
	I0703 23:58:30.302576   54840 main.go:141] libmachine: (pause-672261) Calling .DriverName
	I0703 23:58:30.302730   54840 main.go:141] libmachine: (pause-672261) Calling .DriverName
	I0703 23:58:30.302852   54840 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 23:58:30.302888   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHHostname
	I0703 23:58:30.302937   54840 ssh_runner.go:195] Run: cat /version.json
	I0703 23:58:30.302961   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHHostname
	I0703 23:58:30.305629   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:30.305750   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:30.305994   54840 main.go:141] libmachine: (pause-672261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:df:85", ip: ""} in network mk-pause-672261: {Iface:virbr3 ExpiryTime:2024-07-04 00:57:29 +0000 UTC Type:0 Mac:52:54:00:cc:df:85 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:pause-672261 Clientid:01:52:54:00:cc:df:85}
	I0703 23:58:30.306019   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined IP address 192.168.61.246 and MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:30.306093   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHPort
	I0703 23:58:30.306098   54840 main.go:141] libmachine: (pause-672261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:df:85", ip: ""} in network mk-pause-672261: {Iface:virbr3 ExpiryTime:2024-07-04 00:57:29 +0000 UTC Type:0 Mac:52:54:00:cc:df:85 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:pause-672261 Clientid:01:52:54:00:cc:df:85}
	I0703 23:58:30.306117   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined IP address 192.168.61.246 and MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:30.306265   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHKeyPath
	I0703 23:58:30.306293   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHPort
	I0703 23:58:30.306465   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHUsername
	I0703 23:58:30.306510   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHKeyPath
	I0703 23:58:30.306632   54840 main.go:141] libmachine: (pause-672261) Calling .GetSSHUsername
	I0703 23:58:30.306640   54840 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/pause-672261/id_rsa Username:docker}
	I0703 23:58:30.306731   54840 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/pause-672261/id_rsa Username:docker}
	I0703 23:58:30.394206   54840 ssh_runner.go:195] Run: systemctl --version
	I0703 23:58:30.420051   54840 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0703 23:58:30.583850   54840 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0703 23:58:30.635465   54840 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 23:58:30.635547   54840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 23:58:30.656848   54840 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0703 23:58:30.656873   54840 start.go:494] detecting cgroup driver to use...
	I0703 23:58:30.656932   54840 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 23:58:30.683720   54840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 23:58:30.819116   54840 docker.go:217] disabling cri-docker service (if available) ...
	I0703 23:58:30.819172   54840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0703 23:58:31.025406   54840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0703 23:58:31.129553   54840 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0703 23:58:31.412707   54840 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0703 23:58:31.750728   54840 docker.go:233] disabling docker service ...
	I0703 23:58:31.750806   54840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0703 23:58:31.788240   54840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0703 23:58:31.805551   54840 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0703 23:58:32.065054   54840 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0703 23:58:32.346963   54840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0703 23:58:32.376948   54840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 23:58:32.405756   54840 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0703 23:58:32.405817   54840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:58:32.429052   54840 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0703 23:58:32.429105   54840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:58:32.450649   54840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:58:32.467262   54840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:58:32.482008   54840 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 23:58:32.496953   54840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:58:32.518092   54840 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:58:32.534466   54840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:58:32.550000   54840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 23:58:32.566613   54840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0703 23:58:32.582203   54840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:58:32.834925   54840 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0703 23:58:43.308668   54840 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.473709249s)
	I0703 23:58:43.308702   54840 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0703 23:58:43.308757   54840 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0703 23:58:43.315262   54840 start.go:562] Will wait 60s for crictl version
	I0703 23:58:43.315327   54840 ssh_runner.go:195] Run: which crictl
	I0703 23:58:43.320279   54840 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 23:58:43.370468   54840 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0703 23:58:43.370549   54840 ssh_runner.go:195] Run: crio --version
	I0703 23:58:43.402729   54840 ssh_runner.go:195] Run: crio --version
	I0703 23:58:43.440282   54840 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0703 23:58:43.441742   54840 main.go:141] libmachine: (pause-672261) Calling .GetIP
	I0703 23:58:43.444614   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:43.445026   54840 main.go:141] libmachine: (pause-672261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:df:85", ip: ""} in network mk-pause-672261: {Iface:virbr3 ExpiryTime:2024-07-04 00:57:29 +0000 UTC Type:0 Mac:52:54:00:cc:df:85 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:pause-672261 Clientid:01:52:54:00:cc:df:85}
	I0703 23:58:43.445056   54840 main.go:141] libmachine: (pause-672261) DBG | domain pause-672261 has defined IP address 192.168.61.246 and MAC address 52:54:00:cc:df:85 in network mk-pause-672261
	I0703 23:58:43.445266   54840 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0703 23:58:43.451142   54840 kubeadm.go:877] updating cluster {Name:pause-672261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:pause-672261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0703 23:58:43.451316   54840 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:58:43.451372   54840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:58:43.508443   54840 crio.go:514] all images are preloaded for cri-o runtime.
	I0703 23:58:43.508469   54840 crio.go:433] Images already preloaded, skipping extraction
	I0703 23:58:43.508521   54840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:58:43.556861   54840 crio.go:514] all images are preloaded for cri-o runtime.
	I0703 23:58:43.556878   54840 cache_images.go:84] Images are preloaded, skipping loading
	I0703 23:58:43.556885   54840 kubeadm.go:928] updating node { 192.168.61.246 8443 v1.30.2 crio true true} ...
	I0703 23:58:43.556987   54840 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-672261 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:pause-672261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0703 23:58:43.557052   54840 ssh_runner.go:195] Run: crio config
	I0703 23:58:43.626060   54840 cni.go:84] Creating CNI manager for ""
	I0703 23:58:43.626078   54840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 23:58:43.626086   54840 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0703 23:58:43.626104   54840 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.246 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-672261 NodeName:pause-672261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0703 23:58:43.626231   54840 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-672261"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0703 23:58:43.626283   54840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0703 23:58:43.638848   54840 binaries.go:44] Found k8s binaries, skipping transfer
	I0703 23:58:43.638932   54840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0703 23:58:43.651052   54840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0703 23:58:43.670379   54840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 23:58:43.692349   54840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0703 23:58:43.711908   54840 ssh_runner.go:195] Run: grep 192.168.61.246	control-plane.minikube.internal$ /etc/hosts
	I0703 23:58:43.716496   54840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:58:43.880566   54840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:58:43.952439   54840 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/pause-672261 for IP: 192.168.61.246
	I0703 23:58:43.952464   54840 certs.go:194] generating shared ca certs ...
	I0703 23:58:43.952483   54840 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:58:43.952647   54840 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0703 23:58:43.952703   54840 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0703 23:58:43.952715   54840 certs.go:256] generating profile certs ...
	I0703 23:58:43.952841   54840 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/pause-672261/client.key
	I0703 23:58:43.952923   54840 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/pause-672261/apiserver.key.4122a823
	I0703 23:58:43.952978   54840 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/pause-672261/proxy-client.key
	I0703 23:58:43.953148   54840 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0703 23:58:43.953186   54840 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0703 23:58:43.953199   54840 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0703 23:58:43.953234   54840 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0703 23:58:43.953280   54840 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0703 23:58:43.953324   54840 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0703 23:58:43.953384   54840 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:58:43.954303   54840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 23:58:44.085626   54840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 23:58:44.191929   54840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 23:58:44.437580   54840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0703 23:58:44.487722   54840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/pause-672261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0703 23:58:44.561226   54840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/pause-672261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0703 23:58:44.820857   54840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/pause-672261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 23:58:45.028328   54840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/pause-672261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0703 23:58:45.210220   54840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 23:58:45.302165   54840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0703 23:58:45.359010   54840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0703 23:58:45.391185   54840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0703 23:58:45.425615   54840 ssh_runner.go:195] Run: openssl version
	I0703 23:58:45.435479   54840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 23:58:45.457636   54840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:58:45.465072   54840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:58:45.465137   54840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:58:45.476160   54840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0703 23:58:45.489841   54840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0703 23:58:45.512345   54840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0703 23:58:45.523294   54840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0703 23:58:45.523359   54840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0703 23:58:45.531474   54840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0703 23:58:45.557268   54840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0703 23:58:45.573295   54840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0703 23:58:45.578750   54840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0703 23:58:45.578818   54840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0703 23:58:45.585398   54840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0703 23:58:45.600483   54840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 23:58:45.609027   54840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0703 23:58:45.618026   54840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0703 23:58:45.628007   54840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0703 23:58:45.639032   54840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0703 23:58:45.647464   54840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0703 23:58:45.656711   54840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0703 23:58:45.664457   54840 kubeadm.go:391] StartCluster: {Name:pause-672261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:pause-672261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:58:45.664621   54840 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0703 23:58:45.664697   54840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0703 23:58:45.715037   54840 cri.go:89] found id: "7580dd19096e8dfdeedb31871cc6752208e7b1688abfc59c76d9d64fbdb5b18f"
	I0703 23:58:45.715061   54840 cri.go:89] found id: "9c33b979b20f24726f0e533d046fcc39df3993b40283d1d086266d97d5bc34a4"
	I0703 23:58:45.715066   54840 cri.go:89] found id: "c938b9c08ac5e4edee6355982405f2f340e762c0dae0d9e19ceaf3d5f9163fa0"
	I0703 23:58:45.715071   54840 cri.go:89] found id: "c051fb73177f2b8f53a64318b2c9c40a7d3325c2de510fab240ab9e74127c500"
	I0703 23:58:45.715074   54840 cri.go:89] found id: "5083f2257a546bfd070658447229d9a46babe2aa2575ac4125da0983feb5d4ce"
	I0703 23:58:45.715079   54840 cri.go:89] found id: "e655f54b5cf30a17316bc7ab25e05d48d2099763dea5f1873754c04eee91b0e9"
	I0703 23:58:45.715083   54840 cri.go:89] found id: "fb0d52d57abf1e2ca5643f14de857dd42a2e8bae81c0903cabe97827556fc4ab"
	I0703 23:58:45.715087   54840 cri.go:89] found id: "33f5d01f128e771070627b3977fb04e1df424248d48bfe3336acc4fe0311f100"
	I0703 23:58:45.715091   54840 cri.go:89] found id: "3c28c7469f35fde19ccf21cb3fe36d58839c03905c69bf409b736ee05f88c8af"
	I0703 23:58:45.715100   54840 cri.go:89] found id: "2fab2daa3f0fc91e1924357505664423588e3dd99aeded4182030e642c7e10e9"
	I0703 23:58:45.715104   54840 cri.go:89] found id: "ccb3edb62ffb109e42cd37f4ae966ccbd8df86fff432d077be51cd225a623d9e"
	I0703 23:58:45.715109   54840 cri.go:89] found id: ""
	I0703 23:58:45.715157   54840 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-672261 -n pause-672261
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-672261 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-672261 logs -n 25: (1.575833187s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-676605 sudo cat                            | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo cat                            | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo docker                         | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo cat                            | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo cat                            | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo cat                            | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo cat                            | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo find                           | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo crio                           | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-676605                                     | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC | 03 Jul 24 23:58 UTC |
	| start   | -p cert-expiration-979438                            | cert-expiration-979438    | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-175902                          | force-systemd-env-175902  | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC | 03 Jul 24 23:58 UTC |
	| stop    | -p kubernetes-upgrade-652205                         | kubernetes-upgrade-652205 | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC | 03 Jul 24 23:58 UTC |
	| start   | -p force-systemd-flag-163167                         | force-systemd-flag-163167 | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-652205                         | kubernetes-upgrade-652205 | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 23:58:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 23:58:50.255305   57609 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:58:50.255548   57609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:58:50.255556   57609 out.go:304] Setting ErrFile to fd 2...
	I0703 23:58:50.255560   57609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:58:50.255757   57609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 23:58:50.256333   57609 out.go:298] Setting JSON to false
	I0703 23:58:50.257285   57609 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6070,"bootTime":1720045060,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:58:50.257348   57609 start.go:139] virtualization: kvm guest
	I0703 23:58:50.259291   57609 out.go:177] * [kubernetes-upgrade-652205] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 23:58:50.260569   57609 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 23:58:50.260615   57609 notify.go:220] Checking for updates...
	I0703 23:58:50.262769   57609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:58:50.264100   57609 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:58:50.265328   57609 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:58:50.266659   57609 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 23:58:50.268004   57609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 23:58:50.269501   57609 config.go:182] Loaded profile config "kubernetes-upgrade-652205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0703 23:58:50.269882   57609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:58:50.269959   57609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:58:50.286433   57609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40645
	I0703 23:58:50.286860   57609 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:58:50.287388   57609 main.go:141] libmachine: Using API Version  1
	I0703 23:58:50.287431   57609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:58:50.287734   57609 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:58:50.287925   57609 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .DriverName
	I0703 23:58:50.288150   57609 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:58:50.288545   57609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:58:50.288588   57609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:58:50.304288   57609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32887
	I0703 23:58:50.304738   57609 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:58:50.305226   57609 main.go:141] libmachine: Using API Version  1
	I0703 23:58:50.305253   57609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:58:50.305671   57609 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:58:50.305864   57609 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .DriverName
	I0703 23:58:50.345713   57609 out.go:177] * Using the kvm2 driver based on existing profile
	I0703 23:58:50.346998   57609 start.go:297] selected driver: kvm2
	I0703 23:58:50.347020   57609 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-652205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-652205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:58:50.347162   57609 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 23:58:50.348147   57609 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:58:50.348251   57609 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 23:58:50.366097   57609 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 23:58:50.366482   57609 cni.go:84] Creating CNI manager for ""
	I0703 23:58:50.366499   57609 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 23:58:50.366538   57609 start.go:340] cluster config:
	{Name:kubernetes-upgrade-652205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubernetes-upgrade-652205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:58:50.366629   57609 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:58:50.368443   57609 out.go:177] * Starting "kubernetes-upgrade-652205" primary control-plane node in "kubernetes-upgrade-652205" cluster
	I0703 23:58:50.212962   54840 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 7580dd19096e8dfdeedb31871cc6752208e7b1688abfc59c76d9d64fbdb5b18f 9c33b979b20f24726f0e533d046fcc39df3993b40283d1d086266d97d5bc34a4 c938b9c08ac5e4edee6355982405f2f340e762c0dae0d9e19ceaf3d5f9163fa0 c051fb73177f2b8f53a64318b2c9c40a7d3325c2de510fab240ab9e74127c500 5083f2257a546bfd070658447229d9a46babe2aa2575ac4125da0983feb5d4ce e655f54b5cf30a17316bc7ab25e05d48d2099763dea5f1873754c04eee91b0e9 fb0d52d57abf1e2ca5643f14de857dd42a2e8bae81c0903cabe97827556fc4ab 33f5d01f128e771070627b3977fb04e1df424248d48bfe3336acc4fe0311f100 3c28c7469f35fde19ccf21cb3fe36d58839c03905c69bf409b736ee05f88c8af 2fab2daa3f0fc91e1924357505664423588e3dd99aeded4182030e642c7e10e9 ccb3edb62ffb109e42cd37f4ae966ccbd8df86fff432d077be51cd225a623d9e: (4.31929901s)
	W0703 23:58:50.213047   54840 kubeadm.go:638] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 7580dd19096e8dfdeedb31871cc6752208e7b1688abfc59c76d9d64fbdb5b18f 9c33b979b20f24726f0e533d046fcc39df3993b40283d1d086266d97d5bc34a4 c938b9c08ac5e4edee6355982405f2f340e762c0dae0d9e19ceaf3d5f9163fa0 c051fb73177f2b8f53a64318b2c9c40a7d3325c2de510fab240ab9e74127c500 5083f2257a546bfd070658447229d9a46babe2aa2575ac4125da0983feb5d4ce e655f54b5cf30a17316bc7ab25e05d48d2099763dea5f1873754c04eee91b0e9 fb0d52d57abf1e2ca5643f14de857dd42a2e8bae81c0903cabe97827556fc4ab 33f5d01f128e771070627b3977fb04e1df424248d48bfe3336acc4fe0311f100 3c28c7469f35fde19ccf21cb3fe36d58839c03905c69bf409b736ee05f88c8af 2fab2daa3f0fc91e1924357505664423588e3dd99aeded4182030e642c7e10e9 ccb3edb62ffb109e42cd37f4ae966ccbd8df86fff432d077be51cd225a623d9e: Process exited with status 1
	stdout:
	7580dd19096e8dfdeedb31871cc6752208e7b1688abfc59c76d9d64fbdb5b18f
	9c33b979b20f24726f0e533d046fcc39df3993b40283d1d086266d97d5bc34a4
	c938b9c08ac5e4edee6355982405f2f340e762c0dae0d9e19ceaf3d5f9163fa0
	c051fb73177f2b8f53a64318b2c9c40a7d3325c2de510fab240ab9e74127c500
	5083f2257a546bfd070658447229d9a46babe2aa2575ac4125da0983feb5d4ce
	e655f54b5cf30a17316bc7ab25e05d48d2099763dea5f1873754c04eee91b0e9
	fb0d52d57abf1e2ca5643f14de857dd42a2e8bae81c0903cabe97827556fc4ab
	
	stderr:
	E0703 23:58:50.204329    3568 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33f5d01f128e771070627b3977fb04e1df424248d48bfe3336acc4fe0311f100\": container with ID starting with 33f5d01f128e771070627b3977fb04e1df424248d48bfe3336acc4fe0311f100 not found: ID does not exist" containerID="33f5d01f128e771070627b3977fb04e1df424248d48bfe3336acc4fe0311f100"
	time="2024-07-03T23:58:50Z" level=fatal msg="stopping the container \"33f5d01f128e771070627b3977fb04e1df424248d48bfe3336acc4fe0311f100\": rpc error: code = NotFound desc = could not find container \"33f5d01f128e771070627b3977fb04e1df424248d48bfe3336acc4fe0311f100\": container with ID starting with 33f5d01f128e771070627b3977fb04e1df424248d48bfe3336acc4fe0311f100 not found: ID does not exist"
	I0703 23:58:50.213126   54840 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0703 23:58:50.258329   54840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0703 23:58:50.270832   54840 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 Jul  3 23:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Jul  3 23:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jul  3 23:57 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Jul  3 23:57 /etc/kubernetes/scheduler.conf
	
	I0703 23:58:50.270891   54840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0703 23:58:50.281804   54840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0703 23:58:50.293328   54840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0703 23:58:50.304354   54840 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0703 23:58:50.304407   54840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0703 23:58:50.315691   54840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0703 23:58:50.326950   54840 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0703 23:58:50.327004   54840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0703 23:58:50.338615   54840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0703 23:58:50.350637   54840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0703 23:58:50.415255   54840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0703 23:58:51.186754   54840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0703 23:58:53.365008   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:53.365486   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | unable to find current IP address of domain cert-expiration-979438 in network mk-cert-expiration-979438
	I0703 23:58:53.365501   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | I0703 23:58:53.365445   57214 retry.go:31] will retry after 4.152673896s: waiting for machine to come up
	I0703 23:58:50.188237   57566 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:58:50.188304   57566 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0703 23:58:50.188325   57566 cache.go:56] Caching tarball of preloaded images
	I0703 23:58:50.188409   57566 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:58:50.188423   57566 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 23:58:50.188557   57566 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/force-systemd-flag-163167/config.json ...
	I0703 23:58:50.188582   57566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/force-systemd-flag-163167/config.json: {Name:mkdfedbba126f12ae6877bcd088b88b4996c2b27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:58:50.188744   57566 start.go:360] acquireMachinesLock for force-systemd-flag-163167: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 23:58:50.369661   57609 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:58:50.369706   57609 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0703 23:58:50.369713   57609 cache.go:56] Caching tarball of preloaded images
	I0703 23:58:50.369775   57609 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:58:50.369785   57609 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 23:58:50.369866   57609 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/config.json ...
	I0703 23:58:50.370043   57609 start.go:360] acquireMachinesLock for kubernetes-upgrade-652205: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 23:58:51.402741   54840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0703 23:58:51.475786   54840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0703 23:58:51.611347   54840 api_server.go:52] waiting for apiserver process to appear ...
	I0703 23:58:51.611426   54840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:58:52.111629   54840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:58:52.612162   54840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:58:52.628500   54840 api_server.go:72] duration metric: took 1.017162229s to wait for apiserver process to appear ...
	I0703 23:58:52.628538   54840 api_server.go:88] waiting for apiserver healthz status ...
	I0703 23:58:52.628563   54840 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0703 23:58:55.024599   54840 api_server.go:279] https://192.168.61.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0703 23:58:55.024642   54840 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0703 23:58:55.024663   54840 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0703 23:58:55.086472   54840 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0703 23:58:55.086508   54840 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0703 23:58:55.128628   54840 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0703 23:58:55.134918   54840 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0703 23:58:55.134956   54840 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0703 23:58:55.629027   54840 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0703 23:58:55.637469   54840 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0703 23:58:55.637504   54840 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0703 23:58:56.129073   54840 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0703 23:58:56.136781   54840 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0703 23:58:56.136813   54840 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0703 23:58:56.629565   54840 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0703 23:58:56.633950   54840 api_server.go:279] https://192.168.61.246:8443/healthz returned 200:
	ok
	I0703 23:58:56.640448   54840 api_server.go:141] control plane version: v1.30.2
	I0703 23:58:56.640479   54840 api_server.go:131] duration metric: took 4.011932639s to wait for apiserver health ...
	I0703 23:58:56.640498   54840 cni.go:84] Creating CNI manager for ""
	I0703 23:58:56.640507   54840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 23:58:56.642579   54840 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0703 23:58:58.961057   57566 start.go:364] duration metric: took 8.772289686s to acquireMachinesLock for "force-systemd-flag-163167"
	I0703 23:58:58.961130   57566 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-163167 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-163167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:58:58.961244   57566 start.go:125] createHost starting for "" (driver="kvm2")
	I0703 23:58:57.521446   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.522021   57192 main.go:141] libmachine: (cert-expiration-979438) Found IP for machine: 192.168.50.228
	I0703 23:58:57.522048   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has current primary IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.522071   57192 main.go:141] libmachine: (cert-expiration-979438) Reserving static IP address...
	I0703 23:58:57.522389   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | unable to find host DHCP lease matching {name: "cert-expiration-979438", mac: "52:54:00:0a:40:ca", ip: "192.168.50.228"} in network mk-cert-expiration-979438
	I0703 23:58:57.601318   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | Getting to WaitForSSH function...
	I0703 23:58:57.601338   57192 main.go:141] libmachine: (cert-expiration-979438) Reserved static IP address: 192.168.50.228
	I0703 23:58:57.601350   57192 main.go:141] libmachine: (cert-expiration-979438) Waiting for SSH to be available...
	I0703 23:58:57.603838   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.604364   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:57.604395   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.604541   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | Using SSH client type: external
	I0703 23:58:57.604564   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/cert-expiration-979438/id_rsa (-rw-------)
	I0703 23:58:57.604601   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/cert-expiration-979438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 23:58:57.604609   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | About to run SSH command:
	I0703 23:58:57.604620   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | exit 0
	I0703 23:58:57.727977   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | SSH cmd err, output: <nil>: 
	I0703 23:58:57.728299   57192 main.go:141] libmachine: (cert-expiration-979438) KVM machine creation complete!
	I0703 23:58:57.728624   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetConfigRaw
	I0703 23:58:57.729113   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .DriverName
	I0703 23:58:57.729294   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .DriverName
	I0703 23:58:57.729452   57192 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0703 23:58:57.729460   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetState
	I0703 23:58:57.730739   57192 main.go:141] libmachine: Detecting operating system of created instance...
	I0703 23:58:57.730745   57192 main.go:141] libmachine: Waiting for SSH to be available...
	I0703 23:58:57.730750   57192 main.go:141] libmachine: Getting to WaitForSSH function...
	I0703 23:58:57.730754   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:57.733147   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.733508   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:57.733526   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.733693   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:57.733874   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:57.734035   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:57.734145   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:57.734292   57192 main.go:141] libmachine: Using SSH client type: native
	I0703 23:58:57.734472   57192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0703 23:58:57.734477   57192 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0703 23:58:57.835479   57192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:58:57.835489   57192 main.go:141] libmachine: Detecting the provisioner...
	I0703 23:58:57.835494   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:57.838401   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.838725   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:57.838758   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.838912   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:57.839115   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:57.839307   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:57.839451   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:57.839579   57192 main.go:141] libmachine: Using SSH client type: native
	I0703 23:58:57.839733   57192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0703 23:58:57.839738   57192 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0703 23:58:57.940728   57192 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0703 23:58:57.940812   57192 main.go:141] libmachine: found compatible host: buildroot
	I0703 23:58:57.940820   57192 main.go:141] libmachine: Provisioning with buildroot...
	I0703 23:58:57.940828   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetMachineName
	I0703 23:58:57.941103   57192 buildroot.go:166] provisioning hostname "cert-expiration-979438"
	I0703 23:58:57.941124   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetMachineName
	I0703 23:58:57.941324   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:57.944183   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.944538   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:57.944560   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.944665   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:57.944842   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:57.944975   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:57.945077   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:57.945218   57192 main.go:141] libmachine: Using SSH client type: native
	I0703 23:58:57.945405   57192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0703 23:58:57.945412   57192 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-979438 && echo "cert-expiration-979438" | sudo tee /etc/hostname
	I0703 23:58:58.058668   57192 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-979438
	
	I0703 23:58:58.058689   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:58.061654   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.062005   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.062024   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.062261   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:58.062416   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.062561   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.062662   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:58.062803   57192 main.go:141] libmachine: Using SSH client type: native
	I0703 23:58:58.062969   57192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0703 23:58:58.062980   57192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-979438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-979438/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-979438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 23:58:58.173654   57192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:58:58.173673   57192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0703 23:58:58.173705   57192 buildroot.go:174] setting up certificates
	I0703 23:58:58.173712   57192 provision.go:84] configureAuth start
	I0703 23:58:58.173720   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetMachineName
	I0703 23:58:58.174031   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetIP
	I0703 23:58:58.176553   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.176870   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.176879   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.177014   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:58.179106   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.179457   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.179478   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.179619   57192 provision.go:143] copyHostCerts
	I0703 23:58:58.179671   57192 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0703 23:58:58.179677   57192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:58:58.179740   57192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0703 23:58:58.179812   57192 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0703 23:58:58.179815   57192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:58:58.179836   57192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0703 23:58:58.179908   57192 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0703 23:58:58.179913   57192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:58:58.179938   57192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0703 23:58:58.180007   57192 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-979438 san=[127.0.0.1 192.168.50.228 cert-expiration-979438 localhost minikube]
	I0703 23:58:58.285208   57192 provision.go:177] copyRemoteCerts
	I0703 23:58:58.285251   57192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:58:58.285272   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:58.287802   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.288094   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.288108   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.288310   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:58.288486   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.288631   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:58.288789   57192 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/cert-expiration-979438/id_rsa Username:docker}
	I0703 23:58:58.370497   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0703 23:58:58.397471   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0703 23:58:58.423490   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0703 23:58:58.449712   57192 provision.go:87] duration metric: took 275.987438ms to configureAuth
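The provisioning step above copies the host CA/cert/key PEMs into place and generates a server certificate whose SANs cover 127.0.0.1, 192.168.50.228, cert-expiration-979438, localhost and minikube, then pushes it to /etc/docker on the guest. minikube does this in Go (provision.go); a rough hand-rolled equivalent with openssl, assuming the ca.pem / ca-key.pem pair that already exists under .minikube/certs, would look like this (key size, lifetime and subject are illustrative):

# Illustrative only -- not minikube's actual provisioning code.
openssl genrsa -out server-key.pem 2048
openssl req -new -key server-key.pem -out server.csr -subj "/O=jenkins.cert-expiration-979438"
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out server.pem -days 365 \
  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.50.228,DNS:cert-expiration-979438,DNS:localhost,DNS:minikube")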
	I0703 23:58:58.449731   57192 buildroot.go:189] setting minikube options for container-runtime
	I0703 23:58:58.449892   57192 config.go:182] Loaded profile config "cert-expiration-979438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:58:58.449949   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:58.452798   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.453120   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.453142   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.453327   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:58.453529   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.453697   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.453873   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:58.454080   57192 main.go:141] libmachine: Using SSH client type: native
	I0703 23:58:58.454284   57192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0703 23:58:58.454298   57192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0703 23:58:58.723696   57192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
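The %!s(MISSING) token in the command above is a printf-verb quirk of the log formatter, not part of what runs; the command presumably executed on the guest is:

sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

The echoed CRIO_MINIKUBE_OPTIONS line in the command output confirms the file content.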
	I0703 23:58:58.723710   57192 main.go:141] libmachine: Checking connection to Docker...
	I0703 23:58:58.723734   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetURL
	I0703 23:58:58.724945   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | Using libvirt version 6000000
	I0703 23:58:58.726991   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.727371   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.727387   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.727585   57192 main.go:141] libmachine: Docker is up and running!
	I0703 23:58:58.727593   57192 main.go:141] libmachine: Reticulating splines...
	I0703 23:58:58.727598   57192 client.go:171] duration metric: took 24.012516368s to LocalClient.Create
	I0703 23:58:58.727616   57192 start.go:167] duration metric: took 24.012564811s to libmachine.API.Create "cert-expiration-979438"
	I0703 23:58:58.727622   57192 start.go:293] postStartSetup for "cert-expiration-979438" (driver="kvm2")
	I0703 23:58:58.727630   57192 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 23:58:58.727642   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .DriverName
	I0703 23:58:58.727887   57192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 23:58:58.727908   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:58.729885   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.730192   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.730203   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.730399   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:58.730593   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.730726   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:58.730835   57192 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/cert-expiration-979438/id_rsa Username:docker}
	I0703 23:58:58.811372   57192 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 23:58:58.815919   57192 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 23:58:58.815939   57192 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0703 23:58:58.816001   57192 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0703 23:58:58.816102   57192 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0703 23:58:58.816206   57192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0703 23:58:58.827061   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:58:58.853187   57192 start.go:296] duration metric: took 125.55416ms for postStartSetup
	I0703 23:58:58.853222   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetConfigRaw
	I0703 23:58:58.853831   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetIP
	I0703 23:58:58.856556   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.856958   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.856981   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.857318   57192 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/config.json ...
	I0703 23:58:58.857554   57192 start.go:128] duration metric: took 24.160347157s to createHost
	I0703 23:58:58.857579   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:58.859573   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.859839   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.859857   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.860004   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:58.860204   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.860365   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.860505   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:58.860641   57192 main.go:141] libmachine: Using SSH client type: native
	I0703 23:58:58.860797   57192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0703 23:58:58.860805   57192 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0703 23:58:58.960928   57192 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051138.935558754
	
	I0703 23:58:58.960952   57192 fix.go:216] guest clock: 1720051138.935558754
	I0703 23:58:58.960957   57192 fix.go:229] Guest: 2024-07-03 23:58:58.935558754 +0000 UTC Remote: 2024-07-03 23:58:58.857563509 +0000 UTC m=+24.262469996 (delta=77.995245ms)
	I0703 23:58:58.960974   57192 fix.go:200] guest clock delta is within tolerance: 77.995245ms
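fix.go reads the guest clock over SSH (the mangled "date +%!s(MISSING).%!N(MISSING)" above is "date +%s.%N"), compares it with the host clock, and only forces a resync when the delta exceeds a tolerance. A rough shell rendering of the check, with the tolerance value assumed for illustration:

# Sketch only; minikube performs this comparison in Go (fix.go), tolerance assumed at 2s here.
guest=$(ssh -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/cert-expiration-979438/id_rsa docker@192.168.50.228 'date +%s.%N')
host=$(date +%s.%N)
delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { d = h - g; if (d < 0) d = -d; print d }')
awk -v d="$delta" 'BEGIN { exit !(d < 2) }' && echo "within tolerance: ${delta}s" || echo "guest clock needs resync"

Here the measured delta of ~78ms is well inside tolerance, so provisioning continues without touching the guest clock.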
	I0703 23:58:58.960977   57192 start.go:83] releasing machines lock for "cert-expiration-979438", held for 24.263843573s
	I0703 23:58:58.961001   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .DriverName
	I0703 23:58:58.961328   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetIP
	I0703 23:58:58.964292   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.964675   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.964694   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.964879   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .DriverName
	I0703 23:58:58.965384   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .DriverName
	I0703 23:58:58.965569   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .DriverName
	I0703 23:58:58.965652   57192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 23:58:58.965693   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:58.965767   57192 ssh_runner.go:195] Run: cat /version.json
	I0703 23:58:58.965786   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:58.968515   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.968826   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.968948   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.968969   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.969183   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:58.969260   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.969281   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.969433   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:58.969468   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.969579   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:58.969586   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.969717   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:58.969727   57192 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/cert-expiration-979438/id_rsa Username:docker}
	I0703 23:58:58.969855   57192 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/cert-expiration-979438/id_rsa Username:docker}
	I0703 23:58:59.078807   57192 ssh_runner.go:195] Run: systemctl --version
	I0703 23:58:59.085212   57192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0703 23:58:59.251350   57192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0703 23:58:59.258049   57192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 23:58:59.258114   57192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 23:58:59.275129   57192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
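The find command at 23:58:59.258 has its %p printf verb mangled by the log formatter; reconstructed (and quoted for direct use in a shell rather than passed as one string through ssh_runner), it renames any bridge/podman CNI configs out of CRI-O's way:

sudo find /etc/cni/net.d -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;

That is what produces the "disabled [/etc/cni/net.d/87-podman-bridge.conflist]" message above.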
	I0703 23:58:59.275143   57192 start.go:494] detecting cgroup driver to use...
	I0703 23:58:59.275205   57192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 23:58:59.292357   57192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 23:58:59.306476   57192 docker.go:217] disabling cri-docker service (if available) ...
	I0703 23:58:59.306530   57192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0703 23:58:59.322626   57192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0703 23:58:59.337180   57192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0703 23:58:59.466808   57192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0703 23:58:59.625879   57192 docker.go:233] disabling docker service ...
	I0703 23:58:59.625942   57192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0703 23:58:58.963651   57566 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0703 23:58:58.963889   57566 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:58:58.963953   57566 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:58:58.982624   57566 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I0703 23:58:58.983070   57566 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:58:58.983668   57566 main.go:141] libmachine: Using API Version  1
	I0703 23:58:58.983693   57566 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:58:58.984083   57566 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:58:58.984302   57566 main.go:141] libmachine: (force-systemd-flag-163167) Calling .GetMachineName
	I0703 23:58:58.984477   57566 main.go:141] libmachine: (force-systemd-flag-163167) Calling .DriverName
	I0703 23:58:58.984654   57566 start.go:159] libmachine.API.Create for "force-systemd-flag-163167" (driver="kvm2")
	I0703 23:58:58.984688   57566 client.go:168] LocalClient.Create starting
	I0703 23:58:58.984720   57566 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem
	I0703 23:58:58.984762   57566 main.go:141] libmachine: Decoding PEM data...
	I0703 23:58:58.984784   57566 main.go:141] libmachine: Parsing certificate...
	I0703 23:58:58.984860   57566 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem
	I0703 23:58:58.984895   57566 main.go:141] libmachine: Decoding PEM data...
	I0703 23:58:58.984914   57566 main.go:141] libmachine: Parsing certificate...
	I0703 23:58:58.984934   57566 main.go:141] libmachine: Running pre-create checks...
	I0703 23:58:58.984943   57566 main.go:141] libmachine: (force-systemd-flag-163167) Calling .PreCreateCheck
	I0703 23:58:58.985375   57566 main.go:141] libmachine: (force-systemd-flag-163167) Calling .GetConfigRaw
	I0703 23:58:58.985822   57566 main.go:141] libmachine: Creating machine...
	I0703 23:58:58.985840   57566 main.go:141] libmachine: (force-systemd-flag-163167) Calling .Create
	I0703 23:58:58.986016   57566 main.go:141] libmachine: (force-systemd-flag-163167) Creating KVM machine...
	I0703 23:58:58.987434   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | found existing default KVM network
	I0703 23:58:58.988855   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:58:58.988572   57703 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fa:41:7f} reservation:<nil>}
	I0703 23:58:58.989641   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:58:58.989561   57703 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f5:85:96} reservation:<nil>}
	I0703 23:58:58.990597   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:58:58.990512   57703 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:7b:6f:e3} reservation:<nil>}
	I0703 23:58:58.991589   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:58:58.991507   57703 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00032d590}
	I0703 23:58:58.991610   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | created network xml: 
	I0703 23:58:58.991623   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | <network>
	I0703 23:58:58.991632   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG |   <name>mk-force-systemd-flag-163167</name>
	I0703 23:58:58.991643   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG |   <dns enable='no'/>
	I0703 23:58:58.991654   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG |   
	I0703 23:58:58.991662   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0703 23:58:58.991671   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG |     <dhcp>
	I0703 23:58:58.991691   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0703 23:58:58.991712   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG |     </dhcp>
	I0703 23:58:58.991725   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG |   </ip>
	I0703 23:58:58.991749   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG |   
	I0703 23:58:58.991761   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | </network>
	I0703 23:58:58.991771   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | 
	I0703 23:58:58.997355   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | trying to create private KVM network mk-force-systemd-flag-163167 192.168.72.0/24...
	I0703 23:58:59.073036   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | private KVM network mk-force-systemd-flag-163167 192.168.72.0/24 created
	I0703 23:58:59.073066   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:58:59.072982   57703 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:58:59.073080   57566 main.go:141] libmachine: (force-systemd-flag-163167) Setting up store path in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/force-systemd-flag-163167 ...
	I0703 23:58:59.073098   57566 main.go:141] libmachine: (force-systemd-flag-163167) Building disk image from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0703 23:58:59.073289   57566 main.go:141] libmachine: (force-systemd-flag-163167) Downloading /home/jenkins/minikube-integration/18998-9396/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso...
	I0703 23:58:59.314364   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:58:59.314217   57703 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/force-systemd-flag-163167/id_rsa...
	I0703 23:58:59.399285   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:58:59.399112   57703 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/force-systemd-flag-163167/force-systemd-flag-163167.rawdisk...
	I0703 23:58:59.399323   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Writing magic tar header
	I0703 23:58:59.399342   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Writing SSH key tar header
	I0703 23:58:59.399357   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:58:59.399231   57703 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/force-systemd-flag-163167 ...
	I0703 23:58:59.399372   57566 main.go:141] libmachine: (force-systemd-flag-163167) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/force-systemd-flag-163167 (perms=drwx------)
	I0703 23:58:59.399389   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/force-systemd-flag-163167
	I0703 23:58:59.399401   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines
	I0703 23:58:59.399414   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:58:59.399429   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396
	I0703 23:58:59.399439   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0703 23:58:59.399450   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Checking permissions on dir: /home/jenkins
	I0703 23:58:59.399488   57566 main.go:141] libmachine: (force-systemd-flag-163167) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines (perms=drwxr-xr-x)
	I0703 23:58:59.399497   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Checking permissions on dir: /home
	I0703 23:58:59.399513   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Skipping /home - not owner
	I0703 23:58:59.399528   57566 main.go:141] libmachine: (force-systemd-flag-163167) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube (perms=drwxr-xr-x)
	I0703 23:58:59.399541   57566 main.go:141] libmachine: (force-systemd-flag-163167) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396 (perms=drwxrwxr-x)
	I0703 23:58:59.399556   57566 main.go:141] libmachine: (force-systemd-flag-163167) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0703 23:58:59.399568   57566 main.go:141] libmachine: (force-systemd-flag-163167) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0703 23:58:59.399645   57566 main.go:141] libmachine: (force-systemd-flag-163167) Creating domain...
	I0703 23:58:59.400682   57566 main.go:141] libmachine: (force-systemd-flag-163167) define libvirt domain using xml: 
	I0703 23:58:59.400708   57566 main.go:141] libmachine: (force-systemd-flag-163167) <domain type='kvm'>
	I0703 23:58:59.400727   57566 main.go:141] libmachine: (force-systemd-flag-163167)   <name>force-systemd-flag-163167</name>
	I0703 23:58:59.400738   57566 main.go:141] libmachine: (force-systemd-flag-163167)   <memory unit='MiB'>2048</memory>
	I0703 23:58:59.400745   57566 main.go:141] libmachine: (force-systemd-flag-163167)   <vcpu>2</vcpu>
	I0703 23:58:59.400755   57566 main.go:141] libmachine: (force-systemd-flag-163167)   <features>
	I0703 23:58:59.400788   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <acpi/>
	I0703 23:58:59.400811   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <apic/>
	I0703 23:58:59.400821   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <pae/>
	I0703 23:58:59.400836   57566 main.go:141] libmachine: (force-systemd-flag-163167)     
	I0703 23:58:59.400846   57566 main.go:141] libmachine: (force-systemd-flag-163167)   </features>
	I0703 23:58:59.400858   57566 main.go:141] libmachine: (force-systemd-flag-163167)   <cpu mode='host-passthrough'>
	I0703 23:58:59.400880   57566 main.go:141] libmachine: (force-systemd-flag-163167)   
	I0703 23:58:59.400891   57566 main.go:141] libmachine: (force-systemd-flag-163167)   </cpu>
	I0703 23:58:59.400899   57566 main.go:141] libmachine: (force-systemd-flag-163167)   <os>
	I0703 23:58:59.400911   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <type>hvm</type>
	I0703 23:58:59.400920   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <boot dev='cdrom'/>
	I0703 23:58:59.400931   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <boot dev='hd'/>
	I0703 23:58:59.400945   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <bootmenu enable='no'/>
	I0703 23:58:59.400955   57566 main.go:141] libmachine: (force-systemd-flag-163167)   </os>
	I0703 23:58:59.400963   57566 main.go:141] libmachine: (force-systemd-flag-163167)   <devices>
	I0703 23:58:59.400975   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <disk type='file' device='cdrom'>
	I0703 23:58:59.401004   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/force-systemd-flag-163167/boot2docker.iso'/>
	I0703 23:58:59.401028   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <target dev='hdc' bus='scsi'/>
	I0703 23:58:59.401045   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <readonly/>
	I0703 23:58:59.401061   57566 main.go:141] libmachine: (force-systemd-flag-163167)     </disk>
	I0703 23:58:59.401077   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <disk type='file' device='disk'>
	I0703 23:58:59.401089   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0703 23:58:59.401102   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/force-systemd-flag-163167/force-systemd-flag-163167.rawdisk'/>
	I0703 23:58:59.401112   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <target dev='hda' bus='virtio'/>
	I0703 23:58:59.401122   57566 main.go:141] libmachine: (force-systemd-flag-163167)     </disk>
	I0703 23:58:59.401135   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <interface type='network'>
	I0703 23:58:59.401149   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <source network='mk-force-systemd-flag-163167'/>
	I0703 23:58:59.401167   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <model type='virtio'/>
	I0703 23:58:59.401178   57566 main.go:141] libmachine: (force-systemd-flag-163167)     </interface>
	I0703 23:58:59.401185   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <interface type='network'>
	I0703 23:58:59.401192   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <source network='default'/>
	I0703 23:58:59.401201   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <model type='virtio'/>
	I0703 23:58:59.401213   57566 main.go:141] libmachine: (force-systemd-flag-163167)     </interface>
	I0703 23:58:59.401223   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <serial type='pty'>
	I0703 23:58:59.401234   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <target port='0'/>
	I0703 23:58:59.401246   57566 main.go:141] libmachine: (force-systemd-flag-163167)     </serial>
	I0703 23:58:59.401260   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <console type='pty'>
	I0703 23:58:59.401272   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <target type='serial' port='0'/>
	I0703 23:58:59.401281   57566 main.go:141] libmachine: (force-systemd-flag-163167)     </console>
	I0703 23:58:59.401287   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <rng model='virtio'>
	I0703 23:58:59.401297   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <backend model='random'>/dev/random</backend>
	I0703 23:58:59.401309   57566 main.go:141] libmachine: (force-systemd-flag-163167)     </rng>
	I0703 23:58:59.401316   57566 main.go:141] libmachine: (force-systemd-flag-163167)     
	I0703 23:58:59.401328   57566 main.go:141] libmachine: (force-systemd-flag-163167)     
	I0703 23:58:59.401342   57566 main.go:141] libmachine: (force-systemd-flag-163167)   </devices>
	I0703 23:58:59.401353   57566 main.go:141] libmachine: (force-systemd-flag-163167) </domain>
	I0703 23:58:59.401362   57566 main.go:141] libmachine: (force-systemd-flag-163167) 
	I0703 23:58:59.405781   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:d0:8f:95 in network default
	I0703 23:58:59.406383   57566 main.go:141] libmachine: (force-systemd-flag-163167) Ensuring networks are active...
	I0703 23:58:59.406416   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:58:59.407082   57566 main.go:141] libmachine: (force-systemd-flag-163167) Ensuring network default is active
	I0703 23:58:59.407396   57566 main.go:141] libmachine: (force-systemd-flag-163167) Ensuring network mk-force-systemd-flag-163167 is active
	I0703 23:58:59.408133   57566 main.go:141] libmachine: (force-systemd-flag-163167) Getting domain xml...
	I0703 23:58:59.408940   57566 main.go:141] libmachine: (force-systemd-flag-163167) Creating domain...
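The kvm2 driver defines and starts the network and domain through the libvirt API (Go bindings), not the CLI, but the two XML documents logged above correspond to what one would feed to virsh by hand; the file names here are illustrative:

# Rough virsh equivalent of what the driver does with the logged XML.
virsh net-define mk-force-systemd-flag-163167.xml   # the <network> document above
virsh net-start  mk-force-systemd-flag-163167
virsh define     force-systemd-flag-163167.xml      # the <domain> document above
virsh start      force-systemd-flag-163167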
	I0703 23:58:59.641908   57192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0703 23:58:59.656324   57192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0703 23:58:59.776567   57192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0703 23:58:59.885999   57192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0703 23:58:59.900557   57192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 23:58:59.921211   57192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0703 23:58:59.921261   57192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:58:59.933116   57192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0703 23:58:59.933173   57192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:58:59.945348   57192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:58:59.957311   57192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:58:59.971208   57192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 23:58:59.985477   57192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:58:59.997747   57192 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:59:00.017657   57192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
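Just before these edits, crictl is pointed at CRI-O by writing "runtime-endpoint: unix:///var/run/crio/crio.sock" to /etc/crictl.yaml (the printf %s verb is mangled in that log line too). The sed commands from 23:58:59.921 through 23:59:00.017 then set the pause image, switch the cgroup manager to cgroupfs, pin conmon_cgroup to "pod", and open unprivileged ports via default_sysctls. A quick way to confirm the end state on the guest:

sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
# Expected, roughly, given the edits above:
#   pause_image = "registry.k8s.io/pause:3.9"
#   cgroup_manager = "cgroupfs"
#   conmon_cgroup = "pod"
#   "net.ipv4.ip_unprivileged_port_start=0",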
	I0703 23:59:00.030004   57192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 23:59:00.042606   57192 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0703 23:59:00.042657   57192 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0703 23:59:00.059516   57192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0703 23:59:00.071318   57192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:59:00.207164   57192 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0703 23:59:00.356819   57192 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0703 23:59:00.356904   57192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0703 23:59:00.363142   57192 start.go:562] Will wait 60s for crictl version
	I0703 23:59:00.363185   57192 ssh_runner.go:195] Run: which crictl
	I0703 23:59:00.367814   57192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 23:59:00.417887   57192 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0703 23:59:00.417949   57192 ssh_runner.go:195] Run: crio --version
	I0703 23:59:00.455339   57192 ssh_runner.go:195] Run: crio --version
	I0703 23:59:00.492828   57192 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0703 23:58:56.644203   54840 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0703 23:58:56.655181   54840 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0703 23:58:56.675668   54840 system_pods.go:43] waiting for kube-system pods to appear ...
	I0703 23:58:56.685835   54840 system_pods.go:59] 6 kube-system pods found
	I0703 23:58:56.685878   54840 system_pods.go:61] "coredns-7db6d8ff4d-sr5wm" [9b7401eb-5d71-440b-ac85-f1a3ab07de21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0703 23:58:56.685899   54840 system_pods.go:61] "etcd-pause-672261" [aca0565e-222d-4d64-8728-4153b71d62ff] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0703 23:58:56.685912   54840 system_pods.go:61] "kube-apiserver-pause-672261" [afe4e4f4-5332-4b1c-8022-04ba3af294d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0703 23:58:56.685922   54840 system_pods.go:61] "kube-controller-manager-pause-672261" [5939effa-c3e4-4c39-96b0-b61dd91d441b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0703 23:58:56.685934   54840 system_pods.go:61] "kube-proxy-mwcv2" [ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0703 23:58:56.685943   54840 system_pods.go:61] "kube-scheduler-pause-672261" [173110a4-77a8-4fa1-8af3-5fef2d7fb7c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0703 23:58:56.685954   54840 system_pods.go:74] duration metric: took 10.264366ms to wait for pod list to return data ...
	I0703 23:58:56.685966   54840 node_conditions.go:102] verifying NodePressure condition ...
	I0703 23:58:56.689513   54840 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 23:58:56.689551   54840 node_conditions.go:123] node cpu capacity is 2
	I0703 23:58:56.689562   54840 node_conditions.go:105] duration metric: took 3.587693ms to run NodePressure ...
	I0703 23:58:56.689583   54840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0703 23:58:56.956340   54840 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0703 23:58:56.961857   54840 kubeadm.go:733] kubelet initialised
	I0703 23:58:56.961882   54840 kubeadm.go:734] duration metric: took 5.516107ms waiting for restarted kubelet to initialise ...
	I0703 23:58:56.961889   54840 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 23:58:56.967400   54840 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-sr5wm" in "kube-system" namespace to be "Ready" ...
	I0703 23:58:58.976088   54840 pod_ready.go:102] pod "coredns-7db6d8ff4d-sr5wm" in "kube-system" namespace has status "Ready":"False"
	I0703 23:59:00.477618   54840 pod_ready.go:92] pod "coredns-7db6d8ff4d-sr5wm" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:00.477648   54840 pod_ready.go:81] duration metric: took 3.510211849s for pod "coredns-7db6d8ff4d-sr5wm" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:00.477661   54840 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:00.493984   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetIP
	I0703 23:59:00.496931   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:59:00.497382   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:59:00.497404   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:59:00.497650   57192 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0703 23:59:00.502422   57192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:59:00.517913   57192 kubeadm.go:877] updating cluster {Name:cert-expiration-979438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.2 ClusterName:cert-expiration-979438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.228 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0703 23:59:00.518027   57192 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:59:00.518076   57192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:59:00.553082   57192 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0703 23:59:00.553142   57192 ssh_runner.go:195] Run: which lz4
	I0703 23:59:00.557593   57192 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0703 23:59:00.562226   57192 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0703 23:59:00.562262   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0703 23:59:02.138094   57192 crio.go:462] duration metric: took 1.580554394s to copy over tarball
	I0703 23:59:02.138172   57192 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0703 23:59:04.462141   57192 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.323934507s)
	I0703 23:59:04.462158   57192 crio.go:469] duration metric: took 2.324044467s to extract the tarball
	I0703 23:59:04.462164   57192 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0703 23:59:04.502953   57192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:59:04.558820   57192 crio.go:514] all images are preloaded for cri-o runtime.
	I0703 23:59:04.558832   57192 cache_images.go:84] Images are preloaded, skipping loading
	I0703 23:59:04.558841   57192 kubeadm.go:928] updating node { 192.168.50.228 8443 v1.30.2 crio true true} ...
	I0703 23:59:04.558957   57192 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-979438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:cert-expiration-979438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
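The kubelet flags above are rendered into a systemd drop-in; later in the log a 322-byte file is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, which presumably carries exactly this content. Done by hand it would amount to roughly:

sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-979438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.228

[Install]
EOF
sudo systemctl daemon-reload && sudo systemctl start kubelet

which matches the daemon-reload and "start kubelet" runs a few lines further down.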
	I0703 23:59:04.559029   57192 ssh_runner.go:195] Run: crio config
	I0703 23:59:04.611916   57192 cni.go:84] Creating CNI manager for ""
	I0703 23:59:04.611927   57192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 23:59:04.611936   57192 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0703 23:59:04.611955   57192 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.228 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-979438 NodeName:cert-expiration-979438 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0703 23:59:04.612083   57192 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-979438"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
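In this rendered config the kubelet evictionHard values mangled as "0%!"(MISSING) are presumably plain "0%" (disk-pressure eviction is effectively disabled, matching the imageGCHighThresholdPercent: 100 comment), and the document is copied a few lines later to /var/tmp/minikube/kubeadm.yaml.new (2166 bytes). A config assembled like this can be sanity-checked by hand; current kubeadm releases ship a validator subcommand (availability depends on the kubeadm version):

# Assumes the kubeadm binary staged by minikube and the rendered config path from this log.
sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new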
	
	I0703 23:59:04.612133   57192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0703 23:59:04.624575   57192 binaries.go:44] Found k8s binaries, skipping transfer
	I0703 23:59:04.624643   57192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0703 23:59:00.707946   57566 main.go:141] libmachine: (force-systemd-flag-163167) Waiting to get IP...
	I0703 23:59:00.708961   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:00.709504   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:00.709530   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:00.709479   57703 retry.go:31] will retry after 267.946778ms: waiting for machine to come up
	I0703 23:59:00.978994   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:00.979670   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:00.979698   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:00.979571   57703 retry.go:31] will retry after 345.324288ms: waiting for machine to come up
	I0703 23:59:01.326285   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:01.326874   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:01.326906   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:01.326833   57703 retry.go:31] will retry after 419.499046ms: waiting for machine to come up
	I0703 23:59:01.748567   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:01.749174   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:01.749203   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:01.749124   57703 retry.go:31] will retry after 556.137903ms: waiting for machine to come up
	I0703 23:59:02.306918   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:02.307506   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:02.307537   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:02.307460   57703 retry.go:31] will retry after 649.917041ms: waiting for machine to come up
	I0703 23:59:02.959445   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:02.960038   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:02.960084   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:02.960003   57703 retry.go:31] will retry after 794.895153ms: waiting for machine to come up
	I0703 23:59:03.757110   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:03.757541   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:03.757582   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:03.757482   57703 retry.go:31] will retry after 776.803905ms: waiting for machine to come up
	I0703 23:59:04.535906   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:04.536431   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:04.536467   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:04.536378   57703 retry.go:31] will retry after 1.082302032s: waiting for machine to come up
	I0703 23:59:04.637249   57192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0703 23:59:04.656771   57192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 23:59:04.679226   57192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0703 23:59:04.700519   57192 ssh_runner.go:195] Run: grep 192.168.50.228	control-plane.minikube.internal$ /etc/hosts
	I0703 23:59:04.704927   57192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:59:04.718495   57192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:59:04.841871   57192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:59:04.859582   57192 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438 for IP: 192.168.50.228
	I0703 23:59:04.859595   57192 certs.go:194] generating shared ca certs ...
	I0703 23:59:04.859611   57192 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:59:04.859765   57192 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0703 23:59:04.859816   57192 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0703 23:59:04.859822   57192 certs.go:256] generating profile certs ...
	I0703 23:59:04.859901   57192 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/client.key
	I0703 23:59:04.859915   57192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/client.crt with IP's: []
	I0703 23:59:04.945336   57192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/client.crt ...
	I0703 23:59:04.945351   57192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/client.crt: {Name:mk399cb2dae8105ed1902e0f47263082fe5df105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:59:04.945517   57192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/client.key ...
	I0703 23:59:04.945524   57192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/client.key: {Name:mkee6fbeada8ea37945b7ad3f308e3c43c86f1ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:59:04.945613   57192 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.key.a46cd043
	I0703 23:59:04.945629   57192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.crt.a46cd043 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.228]
	I0703 23:59:05.026039   57192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.crt.a46cd043 ...
	I0703 23:59:05.026052   57192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.crt.a46cd043: {Name:mkb96166ec96153be78e8225d04bd1f39244918a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:59:05.026221   57192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.key.a46cd043 ...
	I0703 23:59:05.026231   57192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.key.a46cd043: {Name:mk58b6d72e36e55b25d24dee639b49b1c44909aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:59:05.026331   57192 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.crt.a46cd043 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.crt
	I0703 23:59:05.026440   57192 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.key.a46cd043 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.key
	I0703 23:59:05.026490   57192 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/proxy-client.key
	I0703 23:59:05.026509   57192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/proxy-client.crt with IP's: []
	I0703 23:59:05.094930   57192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/proxy-client.crt ...
	I0703 23:59:05.094945   57192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/proxy-client.crt: {Name:mk416436ec02cd70be288c16c8458158d55d19b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:59:05.095115   57192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/proxy-client.key ...
	I0703 23:59:05.095126   57192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/proxy-client.key: {Name:mk3dc972ebe25f725c53ad4ddd6fe0a0001e44ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:59:05.095319   57192 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0703 23:59:05.095352   57192 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0703 23:59:05.095358   57192 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0703 23:59:05.095378   57192 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0703 23:59:05.095399   57192 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0703 23:59:05.095417   57192 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0703 23:59:05.095449   57192 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:59:05.096016   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 23:59:05.126388   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 23:59:05.153824   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 23:59:05.180713   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0703 23:59:05.207459   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0703 23:59:05.233606   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0703 23:59:05.260548   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 23:59:05.287767   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0703 23:59:05.319759   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0703 23:59:05.350262   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 23:59:05.379006   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0703 23:59:05.405308   57192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0703 23:59:05.425155   57192 ssh_runner.go:195] Run: openssl version
	I0703 23:59:05.431716   57192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0703 23:59:05.443776   57192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0703 23:59:05.448814   57192 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0703 23:59:05.448870   57192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0703 23:59:05.455108   57192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0703 23:59:05.466618   57192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0703 23:59:05.479147   57192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0703 23:59:05.484646   57192 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0703 23:59:05.484706   57192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0703 23:59:05.493090   57192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0703 23:59:05.504780   57192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 23:59:05.516485   57192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:59:05.521346   57192 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:59:05.521422   57192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:59:05.527885   57192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0703 23:59:05.540541   57192 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 23:59:05.544976   57192 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0703 23:59:05.545025   57192 kubeadm.go:391] StartCluster: {Name:cert-expiration-979438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:cert-expiration-979438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.228 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:59:05.545103   57192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0703 23:59:05.545153   57192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0703 23:59:05.593863   57192 cri.go:89] found id: ""
	I0703 23:59:05.593920   57192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0703 23:59:05.608699   57192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0703 23:59:05.619514   57192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0703 23:59:05.630907   57192 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0703 23:59:05.630918   57192 kubeadm.go:156] found existing configuration files:
	
	I0703 23:59:05.630970   57192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0703 23:59:05.646026   57192 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0703 23:59:05.646082   57192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0703 23:59:05.657909   57192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0703 23:59:05.672230   57192 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0703 23:59:05.672285   57192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0703 23:59:05.689629   57192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0703 23:59:05.705297   57192 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0703 23:59:05.705354   57192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0703 23:59:05.721441   57192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0703 23:59:05.740447   57192 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0703 23:59:05.740494   57192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0703 23:59:05.753170   57192 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0703 23:59:05.887687   57192 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0703 23:59:05.887756   57192 kubeadm.go:309] [preflight] Running pre-flight checks
	I0703 23:59:06.016203   57192 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0703 23:59:06.016340   57192 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0703 23:59:06.016482   57192 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0703 23:59:06.253973   57192 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0703 23:59:02.485248   54840 pod_ready.go:102] pod "etcd-pause-672261" in "kube-system" namespace has status "Ready":"False"
	I0703 23:59:04.985945   54840 pod_ready.go:102] pod "etcd-pause-672261" in "kube-system" namespace has status "Ready":"False"
	I0703 23:59:06.127479   54840 pod_ready.go:92] pod "etcd-pause-672261" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:06.127513   54840 pod_ready.go:81] duration metric: took 5.64984387s for pod "etcd-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:06.127526   54840 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:06.381241   57192 out.go:204]   - Generating certificates and keys ...
	I0703 23:59:06.381381   57192 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0703 23:59:06.381481   57192 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0703 23:59:06.381584   57192 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0703 23:59:06.758975   57192 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0703 23:59:06.994052   57192 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0703 23:59:07.065551   57192 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0703 23:59:07.234326   57192 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0703 23:59:07.234606   57192 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-979438 localhost] and IPs [192.168.50.228 127.0.0.1 ::1]
	I0703 23:59:07.349792   57192 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0703 23:59:07.350114   57192 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-979438 localhost] and IPs [192.168.50.228 127.0.0.1 ::1]
	I0703 23:59:07.423173   57192 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0703 23:59:07.534649   57192 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0703 23:59:07.870471   57192 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0703 23:59:07.870727   57192 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0703 23:59:08.027173   57192 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0703 23:59:08.327411   57192 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0703 23:59:08.646463   57192 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0703 23:59:08.719933   57192 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0703 23:59:08.994429   57192 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0703 23:59:08.995194   57192 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0703 23:59:08.998531   57192 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0703 23:59:09.000372   57192 out.go:204]   - Booting up control plane ...
	I0703 23:59:09.000533   57192 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0703 23:59:09.000631   57192 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0703 23:59:09.000721   57192 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0703 23:59:09.018888   57192 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0703 23:59:09.021117   57192 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0703 23:59:09.021444   57192 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0703 23:59:09.192671   57192 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0703 23:59:09.192803   57192 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0703 23:59:05.620585   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:05.621090   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:05.621120   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:05.621037   57703 retry.go:31] will retry after 1.318972199s: waiting for machine to come up
	I0703 23:59:06.942425   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:06.942966   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:06.942996   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:06.942901   57703 retry.go:31] will retry after 1.562982371s: waiting for machine to come up
	I0703 23:59:08.507923   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:08.508467   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:08.508496   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:08.508394   57703 retry.go:31] will retry after 2.243399611s: waiting for machine to come up
	I0703 23:59:08.165090   54840 pod_ready.go:102] pod "kube-apiserver-pause-672261" in "kube-system" namespace has status "Ready":"False"
	I0703 23:59:09.136886   54840 pod_ready.go:92] pod "kube-apiserver-pause-672261" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:09.136915   54840 pod_ready.go:81] duration metric: took 3.009380676s for pod "kube-apiserver-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:09.136930   54840 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.145543   54840 pod_ready.go:92] pod "kube-controller-manager-pause-672261" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:11.145577   54840 pod_ready.go:81] duration metric: took 2.008637058s for pod "kube-controller-manager-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.145592   54840 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mwcv2" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.152306   54840 pod_ready.go:92] pod "kube-proxy-mwcv2" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:11.152334   54840 pod_ready.go:81] duration metric: took 6.732867ms for pod "kube-proxy-mwcv2" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.152345   54840 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.158254   54840 pod_ready.go:92] pod "kube-scheduler-pause-672261" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:11.158280   54840 pod_ready.go:81] duration metric: took 5.926224ms for pod "kube-scheduler-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.158290   54840 pod_ready.go:38] duration metric: took 14.196391498s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 23:59:11.158310   54840 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0703 23:59:11.174513   54840 ops.go:34] apiserver oom_adj: -16
	I0703 23:59:11.174538   54840 kubeadm.go:591] duration metric: took 25.376911693s to restartPrimaryControlPlane
	I0703 23:59:11.174551   54840 kubeadm.go:393] duration metric: took 25.510117163s to StartCluster
	I0703 23:59:11.174571   54840 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:59:11.174653   54840 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:59:11.175547   54840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:59:11.175822   54840 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:59:11.175903   54840 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0703 23:59:11.176087   54840 config.go:182] Loaded profile config "pause-672261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:59:11.177603   54840 out.go:177] * Enabled addons: 
	I0703 23:59:11.177621   54840 out.go:177] * Verifying Kubernetes components...
	I0703 23:59:11.178771   54840 addons.go:510] duration metric: took 2.89453ms for enable addons: enabled=[]
	I0703 23:59:11.178808   54840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:59:11.350953   54840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:59:11.376486   54840 node_ready.go:35] waiting up to 6m0s for node "pause-672261" to be "Ready" ...
	I0703 23:59:11.380948   54840 node_ready.go:49] node "pause-672261" has status "Ready":"True"
	I0703 23:59:11.380973   54840 node_ready.go:38] duration metric: took 4.447717ms for node "pause-672261" to be "Ready" ...
	I0703 23:59:11.380985   54840 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 23:59:11.387275   54840 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sr5wm" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.397099   54840 pod_ready.go:92] pod "coredns-7db6d8ff4d-sr5wm" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:11.397138   54840 pod_ready.go:81] duration metric: took 9.825349ms for pod "coredns-7db6d8ff4d-sr5wm" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.397152   54840 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.541284   54840 pod_ready.go:92] pod "etcd-pause-672261" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:11.541318   54840 pod_ready.go:81] duration metric: took 144.157412ms for pod "etcd-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.541331   54840 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.941803   54840 pod_ready.go:92] pod "kube-apiserver-pause-672261" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:11.941846   54840 pod_ready.go:81] duration metric: took 400.497476ms for pod "kube-apiserver-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.941861   54840 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:12.342069   54840 pod_ready.go:92] pod "kube-controller-manager-pause-672261" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:12.342104   54840 pod_ready.go:81] duration metric: took 400.233619ms for pod "kube-controller-manager-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:12.342118   54840 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mwcv2" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:12.742488   54840 pod_ready.go:92] pod "kube-proxy-mwcv2" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:12.742511   54840 pod_ready.go:81] duration metric: took 400.386168ms for pod "kube-proxy-mwcv2" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:12.742521   54840 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:13.142120   54840 pod_ready.go:92] pod "kube-scheduler-pause-672261" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:13.142148   54840 pod_ready.go:81] duration metric: took 399.617536ms for pod "kube-scheduler-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:13.142158   54840 pod_ready.go:38] duration metric: took 1.761161854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 23:59:13.142184   54840 api_server.go:52] waiting for apiserver process to appear ...
	I0703 23:59:13.142248   54840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:59:13.157984   54840 api_server.go:72] duration metric: took 1.982129135s to wait for apiserver process to appear ...
	I0703 23:59:13.158007   54840 api_server.go:88] waiting for apiserver healthz status ...
	I0703 23:59:13.158025   54840 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0703 23:59:13.163209   54840 api_server.go:279] https://192.168.61.246:8443/healthz returned 200:
	ok
	I0703 23:59:13.164528   54840 api_server.go:141] control plane version: v1.30.2
	I0703 23:59:13.164552   54840 api_server.go:131] duration metric: took 6.538504ms to wait for apiserver health ...
	I0703 23:59:13.164563   54840 system_pods.go:43] waiting for kube-system pods to appear ...
	I0703 23:59:13.345519   54840 system_pods.go:59] 6 kube-system pods found
	I0703 23:59:13.345545   54840 system_pods.go:61] "coredns-7db6d8ff4d-sr5wm" [9b7401eb-5d71-440b-ac85-f1a3ab07de21] Running
	I0703 23:59:13.345549   54840 system_pods.go:61] "etcd-pause-672261" [aca0565e-222d-4d64-8728-4153b71d62ff] Running
	I0703 23:59:13.345553   54840 system_pods.go:61] "kube-apiserver-pause-672261" [afe4e4f4-5332-4b1c-8022-04ba3af294d8] Running
	I0703 23:59:13.345556   54840 system_pods.go:61] "kube-controller-manager-pause-672261" [5939effa-c3e4-4c39-96b0-b61dd91d441b] Running
	I0703 23:59:13.345559   54840 system_pods.go:61] "kube-proxy-mwcv2" [ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2] Running
	I0703 23:59:13.345563   54840 system_pods.go:61] "kube-scheduler-pause-672261" [173110a4-77a8-4fa1-8af3-5fef2d7fb7c3] Running
	I0703 23:59:13.345569   54840 system_pods.go:74] duration metric: took 180.998586ms to wait for pod list to return data ...
	I0703 23:59:13.345576   54840 default_sa.go:34] waiting for default service account to be created ...
	I0703 23:59:13.542349   54840 default_sa.go:45] found service account: "default"
	I0703 23:59:13.542381   54840 default_sa.go:55] duration metric: took 196.797051ms for default service account to be created ...
	I0703 23:59:13.542393   54840 system_pods.go:116] waiting for k8s-apps to be running ...
	I0703 23:59:13.746323   54840 system_pods.go:86] 6 kube-system pods found
	I0703 23:59:13.746369   54840 system_pods.go:89] "coredns-7db6d8ff4d-sr5wm" [9b7401eb-5d71-440b-ac85-f1a3ab07de21] Running
	I0703 23:59:13.746377   54840 system_pods.go:89] "etcd-pause-672261" [aca0565e-222d-4d64-8728-4153b71d62ff] Running
	I0703 23:59:13.746382   54840 system_pods.go:89] "kube-apiserver-pause-672261" [afe4e4f4-5332-4b1c-8022-04ba3af294d8] Running
	I0703 23:59:13.746387   54840 system_pods.go:89] "kube-controller-manager-pause-672261" [5939effa-c3e4-4c39-96b0-b61dd91d441b] Running
	I0703 23:59:13.746393   54840 system_pods.go:89] "kube-proxy-mwcv2" [ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2] Running
	I0703 23:59:13.746398   54840 system_pods.go:89] "kube-scheduler-pause-672261" [173110a4-77a8-4fa1-8af3-5fef2d7fb7c3] Running
	I0703 23:59:13.746406   54840 system_pods.go:126] duration metric: took 204.007379ms to wait for k8s-apps to be running ...
	I0703 23:59:13.746414   54840 system_svc.go:44] waiting for kubelet service to be running ....
	I0703 23:59:13.746464   54840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:59:13.761982   54840 system_svc.go:56] duration metric: took 15.557058ms WaitForService to wait for kubelet
	I0703 23:59:13.762013   54840 kubeadm.go:576] duration metric: took 2.586161436s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 23:59:13.762052   54840 node_conditions.go:102] verifying NodePressure condition ...
	I0703 23:59:13.941957   54840 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 23:59:13.941990   54840 node_conditions.go:123] node cpu capacity is 2
	I0703 23:59:13.942003   54840 node_conditions.go:105] duration metric: took 179.944115ms to run NodePressure ...
	I0703 23:59:13.942016   54840 start.go:240] waiting for startup goroutines ...
	I0703 23:59:13.942041   54840 start.go:245] waiting for cluster config update ...
	I0703 23:59:13.942061   54840 start.go:254] writing updated cluster config ...
	I0703 23:59:13.942389   54840 ssh_runner.go:195] Run: rm -f paused
	I0703 23:59:13.992086   54840 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0703 23:59:13.993610   54840 out.go:177] * Done! kubectl is now configured to use "pause-672261" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.820779139Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d00d16a1-7df6-4c55-92d9-8b2ac3f4d7e1 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.822613004Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=061d014a-bb21-4675-bb6c-b5ca12b4f995 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.823116317Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720051154823002229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=061d014a-bb21-4675-bb6c-b5ca12b4f995 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.823907443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b62b0ee3-083e-4ffe-92a5-8ee68865b4d5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.824002200Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b62b0ee3-083e-4ffe-92a5-8ee68865b4d5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.824678490Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a2f3ab0edd24f1ccaae2b0559368a3d39cb960d55b0e9c2dce6fc9d58b6b10c,PodSandboxId:049c993bc44f68f0782b69d22cbdf421a15c58a5dc1f890f067a11841ef49709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051135870573606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2,},Annotations:map[string]string{io.kubernetes.container.hash: df2a65ab,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da5d6426287b94cb7ed898d436f418e18bd9e76a2738a5688c6b9834dfefda0,PodSandboxId:69cb978a908bb5ce808b99c93843a4c835833d30831b86be230173341ea98739,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051135857373905,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sr5wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7401eb-5d71-440b-ac85-f1a3ab07de21,},Annotations:map[string]string{io.kubernetes.container.hash: 99a783cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd38fef524a488bdebc66021b178e98588666ebb95859ce13e148b0a1079282f,PodSandboxId:421a1a187b300c1a8d4318ae13d61c8040d675d6c54205e589eac77ca898eaae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051132091263806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2a228042c0b7cb0706b8ad93d94fda8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec9605f6c153b5ca6c125367fafd9970bb84b2363910e7fa2122f505a50e576,PodSandboxId:0646f7df6749b7cfa30f0e1a8b25f70b8af80f2f920ace0ab6f9751e9d577cb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051132081223807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
67f9ef2fae22af1888be0a8dc1afcc1,},Annotations:map[string]string{io.kubernetes.container.hash: bff1e266,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b6112031159bb3d610ddd89f8128b4df4459509810094b0bc5fdb2f34aa20d,PodSandboxId:79145a37df3dee229cb1e73d81c60e1270b73b51dd5325ae2bcd553bac7b4c9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051132063252982,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c053fd1ff0fcfc507
a0469ecf0ca23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44fe9aae4f1190710a4463d895f2997fe66073037c12b7657990aacaa99ac387,PodSandboxId:919f68f8d50d379d1287e1b1042b10f2e88e9c14b94690ee8b8dd0a020e84c55,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051132055088524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0751c40357ae22a4b6fae5c7806f18d4,},Annotations:map[string]string{io.
kubernetes.container.hash: 11bcebf4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7580dd19096e8dfdeedb31871cc6752208e7b1688abfc59c76d9d64fbdb5b18f,PodSandboxId:049c993bc44f68f0782b69d22cbdf421a15c58a5dc1f890f067a11841ef49709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720051124793520400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2,},Annotations:map[string]string{io.kubernetes.container.hash: df2a65a
b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c33b979b20f24726f0e533d046fcc39df3993b40283d1d086266d97d5bc34a4,PodSandboxId:79145a37df3dee229cb1e73d81c60e1270b73b51dd5325ae2bcd553bac7b4c9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720051124639605405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c053fd1ff0fcfc507a0469ecf0ca23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.cont
ainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c938b9c08ac5e4edee6355982405f2f340e762c0dae0d9e19ceaf3d5f9163fa0,PodSandboxId:0646f7df6749b7cfa30f0e1a8b25f70b8af80f2f920ace0ab6f9751e9d577cb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051124578971070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167f9ef2fae22af1888be0a8dc1afcc1,},Annotations:map[string]string{io.kubernetes.container.hash: bff1e266,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5083f2257a546bfd070658447229d9a46babe2aa2575ac4125da0983feb5d4ce,PodSandboxId:919f68f8d50d379d1287e1b1042b10f2e88e9c14b94690ee8b8dd0a020e84c55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720051124361987731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0751c40357ae22a4b6fae5c7806f18d4,},Annotations:map[string]string{io.kubernetes.container.hash: 11bcebf4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c051fb73177f2b8f53a64318b2c9c40a7d3325c2de510fab240ab9e74127c500,PodSandboxId:421a1a187b300c1a8d4318ae13d61c8040d675d6c54205e589eac77ca898eaae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720051124462695459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a228042c0b7cb0706b8ad93d94fda8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e655f54b5cf30a17316bc7ab25e05d48d2099763dea5f1873754c04eee91b0e9,PodSandboxId:998921b1a491cbd279d4a4485de1817288fb83eb4f870ef2c8bed44c66139b90,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720051112169229320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sr5wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7401eb-5d71-440b-ac85-f1a3ab07de21,},Annotations:map[string]string{io.kubernetes.container.hash: 99a783cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b62b0ee3-083e-4ffe-92a5-8ee68865b4d5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.836084032Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=8a34946c-a908-46f3-a25f-5434114a20a3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.836898507Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:69cb978a908bb5ce808b99c93843a4c835833d30831b86be230173341ea98739,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-sr5wm,Uid:9b7401eb-5d71-440b-ac85-f1a3ab07de21,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1720051124384480241,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-sr5wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7401eb-5d71-440b-ac85-f1a3ab07de21,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-03T23:58:07.716414900Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:421a1a187b300c1a8d4318ae13d61c8040d675d6c54205e589eac77ca898eaae,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-672261,Uid:2a228042c0b7cb0706b8ad93d94fda8c,Namespace:kub
e-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1720051124045128050,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a228042c0b7cb0706b8ad93d94fda8c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2a228042c0b7cb0706b8ad93d94fda8c,kubernetes.io/config.seen: 2024-07-03T23:57:53.285601750Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:79145a37df3dee229cb1e73d81c60e1270b73b51dd5325ae2bcd553bac7b4c9c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-672261,Uid:29c053fd1ff0fcfc507a0469ecf0ca23,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1720051124043842491,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c053fd1ff0f
cfc507a0469ecf0ca23,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 29c053fd1ff0fcfc507a0469ecf0ca23,kubernetes.io/config.seen: 2024-07-03T23:57:53.285602942Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0646f7df6749b7cfa30f0e1a8b25f70b8af80f2f920ace0ab6f9751e9d577cb8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-672261,Uid:167f9ef2fae22af1888be0a8dc1afcc1,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1720051124030923518,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167f9ef2fae22af1888be0a8dc1afcc1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.246:8443,kubernetes.io/config.hash: 167f9ef2fae22af1888be0a8dc1afcc1,kubernetes.io/config.seen: 2024-07-03T23:57:53.285599677Z,kubernetes.io/config.source:
file,},RuntimeHandler:,},&PodSandbox{Id:049c993bc44f68f0782b69d22cbdf421a15c58a5dc1f890f067a11841ef49709,Metadata:&PodSandboxMetadata{Name:kube-proxy-mwcv2,Uid:ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1720051123983443376,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mwcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-03T23:58:07.609179935Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:919f68f8d50d379d1287e1b1042b10f2e88e9c14b94690ee8b8dd0a020e84c55,Metadata:&PodSandboxMetadata{Name:etcd-pause-672261,Uid:0751c40357ae22a4b6fae5c7806f18d4,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1720051123910374277,Labels:map[string]string{component: etcd,io.kubernetes.contai
ner.name: POD,io.kubernetes.pod.name: etcd-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0751c40357ae22a4b6fae5c7806f18d4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.246:2379,kubernetes.io/config.hash: 0751c40357ae22a4b6fae5c7806f18d4,kubernetes.io/config.seen: 2024-07-03T23:57:53.285592708Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:998921b1a491cbd279d4a4485de1817288fb83eb4f870ef2c8bed44c66139b90,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-sr5wm,Uid:9b7401eb-5d71-440b-ac85-f1a3ab07de21,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1720051110805574755,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-sr5wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7401eb-5d71-440b-ac85-f1a3ab07de21,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-07-03T23:58:07.716414900Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=8a34946c-a908-46f3-a25f-5434114a20a3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.837771120Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=149bb88d-d385-4421-a9c8-efc36c932159 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.837872833Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=149bb88d-d385-4421-a9c8-efc36c932159 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.838314768Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a2f3ab0edd24f1ccaae2b0559368a3d39cb960d55b0e9c2dce6fc9d58b6b10c,PodSandboxId:049c993bc44f68f0782b69d22cbdf421a15c58a5dc1f890f067a11841ef49709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051135870573606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2,},Annotations:map[string]string{io.kubernetes.container.hash: df2a65ab,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da5d6426287b94cb7ed898d436f418e18bd9e76a2738a5688c6b9834dfefda0,PodSandboxId:69cb978a908bb5ce808b99c93843a4c835833d30831b86be230173341ea98739,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051135857373905,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sr5wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7401eb-5d71-440b-ac85-f1a3ab07de21,},Annotations:map[string]string{io.kubernetes.container.hash: 99a783cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd38fef524a488bdebc66021b178e98588666ebb95859ce13e148b0a1079282f,PodSandboxId:421a1a187b300c1a8d4318ae13d61c8040d675d6c54205e589eac77ca898eaae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051132091263806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2a228042c0b7cb0706b8ad93d94fda8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec9605f6c153b5ca6c125367fafd9970bb84b2363910e7fa2122f505a50e576,PodSandboxId:0646f7df6749b7cfa30f0e1a8b25f70b8af80f2f920ace0ab6f9751e9d577cb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051132081223807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
67f9ef2fae22af1888be0a8dc1afcc1,},Annotations:map[string]string{io.kubernetes.container.hash: bff1e266,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b6112031159bb3d610ddd89f8128b4df4459509810094b0bc5fdb2f34aa20d,PodSandboxId:79145a37df3dee229cb1e73d81c60e1270b73b51dd5325ae2bcd553bac7b4c9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051132063252982,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c053fd1ff0fcfc507
a0469ecf0ca23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44fe9aae4f1190710a4463d895f2997fe66073037c12b7657990aacaa99ac387,PodSandboxId:919f68f8d50d379d1287e1b1042b10f2e88e9c14b94690ee8b8dd0a020e84c55,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051132055088524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0751c40357ae22a4b6fae5c7806f18d4,},Annotations:map[string]string{io.
kubernetes.container.hash: 11bcebf4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7580dd19096e8dfdeedb31871cc6752208e7b1688abfc59c76d9d64fbdb5b18f,PodSandboxId:049c993bc44f68f0782b69d22cbdf421a15c58a5dc1f890f067a11841ef49709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720051124793520400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2,},Annotations:map[string]string{io.kubernetes.container.hash: df2a65a
b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c33b979b20f24726f0e533d046fcc39df3993b40283d1d086266d97d5bc34a4,PodSandboxId:79145a37df3dee229cb1e73d81c60e1270b73b51dd5325ae2bcd553bac7b4c9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720051124639605405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c053fd1ff0fcfc507a0469ecf0ca23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.cont
ainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c938b9c08ac5e4edee6355982405f2f340e762c0dae0d9e19ceaf3d5f9163fa0,PodSandboxId:0646f7df6749b7cfa30f0e1a8b25f70b8af80f2f920ace0ab6f9751e9d577cb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051124578971070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167f9ef2fae22af1888be0a8dc1afcc1,},Annotations:map[string]string{io.kubernetes.container.hash: bff1e266,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5083f2257a546bfd070658447229d9a46babe2aa2575ac4125da0983feb5d4ce,PodSandboxId:919f68f8d50d379d1287e1b1042b10f2e88e9c14b94690ee8b8dd0a020e84c55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720051124361987731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0751c40357ae22a4b6fae5c7806f18d4,},Annotations:map[string]string{io.kubernetes.container.hash: 11bcebf4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c051fb73177f2b8f53a64318b2c9c40a7d3325c2de510fab240ab9e74127c500,PodSandboxId:421a1a187b300c1a8d4318ae13d61c8040d675d6c54205e589eac77ca898eaae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720051124462695459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a228042c0b7cb0706b8ad93d94fda8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e655f54b5cf30a17316bc7ab25e05d48d2099763dea5f1873754c04eee91b0e9,PodSandboxId:998921b1a491cbd279d4a4485de1817288fb83eb4f870ef2c8bed44c66139b90,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720051112169229320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sr5wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7401eb-5d71-440b-ac85-f1a3ab07de21,},Annotations:map[string]string{io.kubernetes.container.hash: 99a783cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=149bb88d-d385-4421-a9c8-efc36c932159 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.896258596Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69cfe2f7-786d-4ce0-968e-11c8ab3f6972 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.896512231Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69cfe2f7-786d-4ce0-968e-11c8ab3f6972 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.898810066Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ecfe77a-8ad5-4b91-88a0-35c45559287d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.899522672Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720051154899494007,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ecfe77a-8ad5-4b91-88a0-35c45559287d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.900453831Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efbea092-2103-49a3-84b6-e4ac168cf600 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.900596586Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efbea092-2103-49a3-84b6-e4ac168cf600 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.900930951Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a2f3ab0edd24f1ccaae2b0559368a3d39cb960d55b0e9c2dce6fc9d58b6b10c,PodSandboxId:049c993bc44f68f0782b69d22cbdf421a15c58a5dc1f890f067a11841ef49709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051135870573606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2,},Annotations:map[string]string{io.kubernetes.container.hash: df2a65ab,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da5d6426287b94cb7ed898d436f418e18bd9e76a2738a5688c6b9834dfefda0,PodSandboxId:69cb978a908bb5ce808b99c93843a4c835833d30831b86be230173341ea98739,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051135857373905,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sr5wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7401eb-5d71-440b-ac85-f1a3ab07de21,},Annotations:map[string]string{io.kubernetes.container.hash: 99a783cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd38fef524a488bdebc66021b178e98588666ebb95859ce13e148b0a1079282f,PodSandboxId:421a1a187b300c1a8d4318ae13d61c8040d675d6c54205e589eac77ca898eaae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051132091263806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2a228042c0b7cb0706b8ad93d94fda8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec9605f6c153b5ca6c125367fafd9970bb84b2363910e7fa2122f505a50e576,PodSandboxId:0646f7df6749b7cfa30f0e1a8b25f70b8af80f2f920ace0ab6f9751e9d577cb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051132081223807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
67f9ef2fae22af1888be0a8dc1afcc1,},Annotations:map[string]string{io.kubernetes.container.hash: bff1e266,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b6112031159bb3d610ddd89f8128b4df4459509810094b0bc5fdb2f34aa20d,PodSandboxId:79145a37df3dee229cb1e73d81c60e1270b73b51dd5325ae2bcd553bac7b4c9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051132063252982,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c053fd1ff0fcfc507
a0469ecf0ca23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44fe9aae4f1190710a4463d895f2997fe66073037c12b7657990aacaa99ac387,PodSandboxId:919f68f8d50d379d1287e1b1042b10f2e88e9c14b94690ee8b8dd0a020e84c55,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051132055088524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0751c40357ae22a4b6fae5c7806f18d4,},Annotations:map[string]string{io.
kubernetes.container.hash: 11bcebf4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7580dd19096e8dfdeedb31871cc6752208e7b1688abfc59c76d9d64fbdb5b18f,PodSandboxId:049c993bc44f68f0782b69d22cbdf421a15c58a5dc1f890f067a11841ef49709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720051124793520400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2,},Annotations:map[string]string{io.kubernetes.container.hash: df2a65a
b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c33b979b20f24726f0e533d046fcc39df3993b40283d1d086266d97d5bc34a4,PodSandboxId:79145a37df3dee229cb1e73d81c60e1270b73b51dd5325ae2bcd553bac7b4c9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720051124639605405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c053fd1ff0fcfc507a0469ecf0ca23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.cont
ainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c938b9c08ac5e4edee6355982405f2f340e762c0dae0d9e19ceaf3d5f9163fa0,PodSandboxId:0646f7df6749b7cfa30f0e1a8b25f70b8af80f2f920ace0ab6f9751e9d577cb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051124578971070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167f9ef2fae22af1888be0a8dc1afcc1,},Annotations:map[string]string{io.kubernetes.container.hash: bff1e266,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5083f2257a546bfd070658447229d9a46babe2aa2575ac4125da0983feb5d4ce,PodSandboxId:919f68f8d50d379d1287e1b1042b10f2e88e9c14b94690ee8b8dd0a020e84c55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720051124361987731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0751c40357ae22a4b6fae5c7806f18d4,},Annotations:map[string]string{io.kubernetes.container.hash: 11bcebf4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c051fb73177f2b8f53a64318b2c9c40a7d3325c2de510fab240ab9e74127c500,PodSandboxId:421a1a187b300c1a8d4318ae13d61c8040d675d6c54205e589eac77ca898eaae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720051124462695459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a228042c0b7cb0706b8ad93d94fda8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e655f54b5cf30a17316bc7ab25e05d48d2099763dea5f1873754c04eee91b0e9,PodSandboxId:998921b1a491cbd279d4a4485de1817288fb83eb4f870ef2c8bed44c66139b90,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720051112169229320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sr5wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7401eb-5d71-440b-ac85-f1a3ab07de21,},Annotations:map[string]string{io.kubernetes.container.hash: 99a783cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efbea092-2103-49a3-84b6-e4ac168cf600 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.954390333Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c06a7ac8-2059-4a68-9ef0-48d0271682f6 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.954507723Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c06a7ac8-2059-4a68-9ef0-48d0271682f6 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.956297672Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78f7b246-8ef4-437b-b58f-b11d70c98c01 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.957140233Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720051154956988411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78f7b246-8ef4-437b-b58f-b11d70c98c01 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.958139300Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32cef616-fbcf-465f-85de-481c0384a583 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.958331993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32cef616-fbcf-465f-85de-481c0384a583 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:14 pause-672261 crio[2732]: time="2024-07-03 23:59:14.958711202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a2f3ab0edd24f1ccaae2b0559368a3d39cb960d55b0e9c2dce6fc9d58b6b10c,PodSandboxId:049c993bc44f68f0782b69d22cbdf421a15c58a5dc1f890f067a11841ef49709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051135870573606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2,},Annotations:map[string]string{io.kubernetes.container.hash: df2a65ab,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da5d6426287b94cb7ed898d436f418e18bd9e76a2738a5688c6b9834dfefda0,PodSandboxId:69cb978a908bb5ce808b99c93843a4c835833d30831b86be230173341ea98739,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051135857373905,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sr5wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7401eb-5d71-440b-ac85-f1a3ab07de21,},Annotations:map[string]string{io.kubernetes.container.hash: 99a783cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd38fef524a488bdebc66021b178e98588666ebb95859ce13e148b0a1079282f,PodSandboxId:421a1a187b300c1a8d4318ae13d61c8040d675d6c54205e589eac77ca898eaae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051132091263806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2a228042c0b7cb0706b8ad93d94fda8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec9605f6c153b5ca6c125367fafd9970bb84b2363910e7fa2122f505a50e576,PodSandboxId:0646f7df6749b7cfa30f0e1a8b25f70b8af80f2f920ace0ab6f9751e9d577cb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051132081223807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
67f9ef2fae22af1888be0a8dc1afcc1,},Annotations:map[string]string{io.kubernetes.container.hash: bff1e266,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b6112031159bb3d610ddd89f8128b4df4459509810094b0bc5fdb2f34aa20d,PodSandboxId:79145a37df3dee229cb1e73d81c60e1270b73b51dd5325ae2bcd553bac7b4c9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051132063252982,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c053fd1ff0fcfc507
a0469ecf0ca23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44fe9aae4f1190710a4463d895f2997fe66073037c12b7657990aacaa99ac387,PodSandboxId:919f68f8d50d379d1287e1b1042b10f2e88e9c14b94690ee8b8dd0a020e84c55,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051132055088524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0751c40357ae22a4b6fae5c7806f18d4,},Annotations:map[string]string{io.
kubernetes.container.hash: 11bcebf4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7580dd19096e8dfdeedb31871cc6752208e7b1688abfc59c76d9d64fbdb5b18f,PodSandboxId:049c993bc44f68f0782b69d22cbdf421a15c58a5dc1f890f067a11841ef49709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720051124793520400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2,},Annotations:map[string]string{io.kubernetes.container.hash: df2a65a
b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c33b979b20f24726f0e533d046fcc39df3993b40283d1d086266d97d5bc34a4,PodSandboxId:79145a37df3dee229cb1e73d81c60e1270b73b51dd5325ae2bcd553bac7b4c9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720051124639605405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c053fd1ff0fcfc507a0469ecf0ca23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.cont
ainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c938b9c08ac5e4edee6355982405f2f340e762c0dae0d9e19ceaf3d5f9163fa0,PodSandboxId:0646f7df6749b7cfa30f0e1a8b25f70b8af80f2f920ace0ab6f9751e9d577cb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051124578971070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167f9ef2fae22af1888be0a8dc1afcc1,},Annotations:map[string]string{io.kubernetes.container.hash: bff1e266,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5083f2257a546bfd070658447229d9a46babe2aa2575ac4125da0983feb5d4ce,PodSandboxId:919f68f8d50d379d1287e1b1042b10f2e88e9c14b94690ee8b8dd0a020e84c55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720051124361987731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0751c40357ae22a4b6fae5c7806f18d4,},Annotations:map[string]string{io.kubernetes.container.hash: 11bcebf4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c051fb73177f2b8f53a64318b2c9c40a7d3325c2de510fab240ab9e74127c500,PodSandboxId:421a1a187b300c1a8d4318ae13d61c8040d675d6c54205e589eac77ca898eaae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720051124462695459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a228042c0b7cb0706b8ad93d94fda8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e655f54b5cf30a17316bc7ab25e05d48d2099763dea5f1873754c04eee91b0e9,PodSandboxId:998921b1a491cbd279d4a4485de1817288fb83eb4f870ef2c8bed44c66139b90,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720051112169229320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sr5wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7401eb-5d71-440b-ac85-f1a3ab07de21,},Annotations:map[string]string{io.kubernetes.container.hash: 99a783cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32cef616-fbcf-465f-85de-481c0384a583 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9a2f3ab0edd24       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   19 seconds ago      Running             kube-proxy                3                   049c993bc44f6       kube-proxy-mwcv2
	0da5d6426287b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago      Running             coredns                   2                   69cb978a908bb       coredns-7db6d8ff4d-sr5wm
	bd38fef524a48       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   22 seconds ago      Running             kube-controller-manager   3                   421a1a187b300       kube-controller-manager-pause-672261
	6ec9605f6c153       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   22 seconds ago      Running             kube-apiserver            3                   0646f7df6749b       kube-apiserver-pause-672261
	a6b6112031159       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   22 seconds ago      Running             kube-scheduler            3                   79145a37df3de       kube-scheduler-pause-672261
	44fe9aae4f119       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   22 seconds ago      Running             etcd                      3                   919f68f8d50d3       etcd-pause-672261
	7580dd19096e8       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   30 seconds ago      Exited              kube-proxy                2                   049c993bc44f6       kube-proxy-mwcv2
	9c33b979b20f2       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   30 seconds ago      Exited              kube-scheduler            2                   79145a37df3de       kube-scheduler-pause-672261
	c938b9c08ac5e       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   30 seconds ago      Exited              kube-apiserver            2                   0646f7df6749b       kube-apiserver-pause-672261
	c051fb73177f2       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   30 seconds ago      Exited              kube-controller-manager   2                   421a1a187b300       kube-controller-manager-pause-672261
	5083f2257a546       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   30 seconds ago      Exited              etcd                      2                   919f68f8d50d3       etcd-pause-672261
	e655f54b5cf30       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   42 seconds ago      Exited              coredns                   1                   998921b1a491c       coredns-7db6d8ff4d-sr5wm
	
	
	==> coredns [0da5d6426287b94cb7ed898d436f418e18bd9e76a2738a5688c6b9834dfefda0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38700 - 46158 "HINFO IN 8105696747826790080.5218847225836727451. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014222341s
	
	
	==> coredns [e655f54b5cf30a17316bc7ab25e05d48d2099763dea5f1873754c04eee91b0e9] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:59315 - 61730 "HINFO IN 1562211779905980592.447755102183035168. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013974878s
	
	
	==> describe nodes <==
	Name:               pause-672261
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-672261
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=pause-672261
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_03T23_57_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:57:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-672261
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:59:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:58:55 +0000   Wed, 03 Jul 2024 23:57:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:58:55 +0000   Wed, 03 Jul 2024 23:57:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:58:55 +0000   Wed, 03 Jul 2024 23:57:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:58:55 +0000   Wed, 03 Jul 2024 23:57:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.246
	  Hostname:    pause-672261
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a6b8b1627dd40d68bc0fedd947da1a8
	  System UUID:                1a6b8b16-27dd-40d6-8bc0-fedd947da1a8
	  Boot ID:                    17106a09-9eae-4375-80ae-fcb34e510ff1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-sr5wm                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     68s
	  kube-system                 etcd-pause-672261                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         82s
	  kube-system                 kube-apiserver-pause-672261             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-pause-672261    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-mwcv2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-scheduler-pause-672261             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 66s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  82s                kubelet          Node pause-672261 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s                kubelet          Node pause-672261 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s                kubelet          Node pause-672261 status is now: NodeHasSufficientPID
	  Normal  Starting                 82s                kubelet          Starting kubelet.
	  Normal  NodeReady                81s                kubelet          Node pause-672261 status is now: NodeReady
	  Normal  RegisteredNode           69s                node-controller  Node pause-672261 event: Registered Node pause-672261 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-672261 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-672261 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-672261 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                 node-controller  Node pause-672261 event: Registered Node pause-672261 in Controller
	
	
	==> dmesg <==
	[  +0.076638] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.219354] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.155156] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.370924] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.713468] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.071849] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.146415] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +1.245203] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.644001] systemd-fstab-generator[1267]: Ignoring "noauto" option for root device
	[  +0.080501] kauditd_printk_skb: 30 callbacks suppressed
	[  +6.237493] kauditd_printk_skb: 18 callbacks suppressed
	[Jul 3 23:58] systemd-fstab-generator[1491]: Ignoring "noauto" option for root device
	[ +23.116315] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.523332] systemd-fstab-generator[2382]: Ignoring "noauto" option for root device
	[  +0.305404] systemd-fstab-generator[2518]: Ignoring "noauto" option for root device
	[  +0.354249] systemd-fstab-generator[2579]: Ignoring "noauto" option for root device
	[  +0.271746] systemd-fstab-generator[2619]: Ignoring "noauto" option for root device
	[  +0.504848] systemd-fstab-generator[2715]: Ignoring "noauto" option for root device
	[ +11.099685] systemd-fstab-generator[2982]: Ignoring "noauto" option for root device
	[  +0.094353] kauditd_printk_skb: 173 callbacks suppressed
	[  +5.278302] kauditd_printk_skb: 87 callbacks suppressed
	[  +2.161808] systemd-fstab-generator[3721]: Ignoring "noauto" option for root device
	[  +4.636638] kauditd_printk_skb: 47 callbacks suppressed
	[Jul 3 23:59] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.990236] systemd-fstab-generator[4194]: Ignoring "noauto" option for root device
	
	
	==> etcd [44fe9aae4f1190710a4463d895f2997fe66073037c12b7657990aacaa99ac387] <==
	{"level":"info","ts":"2024-07-03T23:59:06.110467Z","caller":"traceutil/trace.go:171","msg":"trace[519604106] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-672261; range_end:; response_count:1; response_revision:444; }","duration":"138.443599ms","start":"2024-07-03T23:59:05.972013Z","end":"2024-07-03T23:59:06.110456Z","steps":["trace[519604106] 'agreement among raft nodes before linearized reading'  (duration: 138.337584ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T23:59:06.110711Z","caller":"traceutil/trace.go:171","msg":"trace[1034398405] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"439.712927ms","start":"2024-07-03T23:59:05.670987Z","end":"2024-07-03T23:59:06.1107Z","steps":["trace[1034398405] 'process raft request'  (duration: 439.102893ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:59:06.110815Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-03T23:59:05.670963Z","time spent":"439.793311ms","remote":"127.0.0.1:51820","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5477,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-pause-672261\" mod_revision:387 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-pause-672261\" value_size:5425 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-pause-672261\" > >"}
	{"level":"warn","ts":"2024-07-03T23:59:06.598885Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.092176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.61.246\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-07-03T23:59:06.599002Z","caller":"traceutil/trace.go:171","msg":"trace[1645983398] range","detail":"{range_begin:/registry/masterleases/192.168.61.246; range_end:; response_count:1; response_revision:444; }","duration":"213.19869ms","start":"2024-07-03T23:59:06.385733Z","end":"2024-07-03T23:59:06.598931Z","steps":["trace[1645983398] 'range keys from in-memory index tree'  (duration: 212.956706ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T23:59:06.740677Z","caller":"traceutil/trace.go:171","msg":"trace[1574541755] linearizableReadLoop","detail":"{readStateIndex:481; appliedIndex:480; }","duration":"117.316923ms","start":"2024-07-03T23:59:06.623345Z","end":"2024-07-03T23:59:06.740662Z","steps":["trace[1574541755] 'read index received'  (duration: 117.148864ms)","trace[1574541755] 'applied index is now lower than readState.Index'  (duration: 167.119µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-03T23:59:06.740787Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.428557ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-672261\" ","response":"range_response_count:1 size:7003"}
	{"level":"info","ts":"2024-07-03T23:59:06.740806Z","caller":"traceutil/trace.go:171","msg":"trace[1628249616] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-672261; range_end:; response_count:1; response_revision:444; }","duration":"117.491385ms","start":"2024-07-03T23:59:06.623309Z","end":"2024-07-03T23:59:06.740801Z","steps":["trace[1628249616] 'agreement among raft nodes before linearized reading'  (duration: 117.424391ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:59:06.990229Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.791643ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5082471037600801522 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.246\" mod_revision:393 > success:<request_put:<key:\"/registry/masterleases/192.168.61.246\" value_size:67 lease:5082471037600801519 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.246\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-03T23:59:06.990436Z","caller":"traceutil/trace.go:171","msg":"trace[825486895] linearizableReadLoop","detail":"{readStateIndex:482; appliedIndex:481; }","duration":"245.85587ms","start":"2024-07-03T23:59:06.744565Z","end":"2024-07-03T23:59:06.990421Z","steps":["trace[825486895] 'read index received'  (duration: 119.709156ms)","trace[825486895] 'applied index is now lower than readState.Index'  (duration: 126.142992ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-03T23:59:06.990528Z","caller":"traceutil/trace.go:171","msg":"trace[838349317] transaction","detail":"{read_only:false; response_revision:445; number_of_response:1; }","duration":"248.765132ms","start":"2024-07-03T23:59:06.741743Z","end":"2024-07-03T23:59:06.990508Z","steps":["trace[838349317] 'process raft request'  (duration: 122.446551ms)","trace[838349317] 'compare'  (duration: 125.621395ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-03T23:59:06.990648Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.091299ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-672261\" ","response":"range_response_count:1 size:5427"}
	{"level":"info","ts":"2024-07-03T23:59:06.990711Z","caller":"traceutil/trace.go:171","msg":"trace[1337703407] range","detail":"{range_begin:/registry/minions/pause-672261; range_end:; response_count:1; response_revision:445; }","duration":"246.184764ms","start":"2024-07-03T23:59:06.744517Z","end":"2024-07-03T23:59:06.990702Z","steps":["trace[1337703407] 'agreement among raft nodes before linearized reading'  (duration: 245.982294ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:59:07.401729Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.568844ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-pause-672261\" ","response":"range_response_count:1 size:4565"}
	{"level":"info","ts":"2024-07-03T23:59:07.401905Z","caller":"traceutil/trace.go:171","msg":"trace[1563463524] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-pause-672261; range_end:; response_count:1; response_revision:445; }","duration":"223.781155ms","start":"2024-07-03T23:59:07.178113Z","end":"2024-07-03T23:59:07.401894Z","steps":["trace[1563463524] 'range keys from in-memory index tree'  (duration: 223.460695ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:59:07.401869Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.404275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-672261\" ","response":"range_response_count:1 size:7003"}
	{"level":"info","ts":"2024-07-03T23:59:07.402385Z","caller":"traceutil/trace.go:171","msg":"trace[2057084988] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-672261; range_end:; response_count:1; response_revision:445; }","duration":"279.945079ms","start":"2024-07-03T23:59:07.122431Z","end":"2024-07-03T23:59:07.402376Z","steps":["trace[2057084988] 'range keys from in-memory index tree'  (duration: 279.31635ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T23:59:07.700787Z","caller":"traceutil/trace.go:171","msg":"trace[1439072368] linearizableReadLoop","detail":"{readStateIndex:483; appliedIndex:482; }","duration":"199.256758ms","start":"2024-07-03T23:59:07.501516Z","end":"2024-07-03T23:59:07.700773Z","steps":["trace[1439072368] 'read index received'  (duration: 199.123562ms)","trace[1439072368] 'applied index is now lower than readState.Index'  (duration: 132.776µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-03T23:59:07.701207Z","caller":"traceutil/trace.go:171","msg":"trace[1451915066] transaction","detail":"{read_only:false; response_revision:446; number_of_response:1; }","duration":"290.510416ms","start":"2024-07-03T23:59:07.41068Z","end":"2024-07-03T23:59:07.70119Z","steps":["trace[1451915066] 'process raft request'  (duration: 290.003906ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:59:07.701291Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.762363ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2024-07-03T23:59:07.702697Z","caller":"traceutil/trace.go:171","msg":"trace[712141964] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:446; }","duration":"201.361876ms","start":"2024-07-03T23:59:07.501323Z","end":"2024-07-03T23:59:07.702685Z","steps":["trace[712141964] 'agreement among raft nodes before linearized reading'  (duration: 199.905075ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:59:07.947423Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"239.524229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-672261\" ","response":"range_response_count:1 size:5427"}
	{"level":"info","ts":"2024-07-03T23:59:07.947498Z","caller":"traceutil/trace.go:171","msg":"trace[1539846134] range","detail":"{range_begin:/registry/minions/pause-672261; range_end:; response_count:1; response_revision:446; }","duration":"239.633834ms","start":"2024-07-03T23:59:07.707852Z","end":"2024-07-03T23:59:07.947486Z","steps":["trace[1539846134] 'range keys from in-memory index tree'  (duration: 239.433922ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:59:07.947667Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.809326ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2024-07-03T23:59:07.94816Z","caller":"traceutil/trace.go:171","msg":"trace[2138599052] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:446; }","duration":"241.200944ms","start":"2024-07-03T23:59:07.706817Z","end":"2024-07-03T23:59:07.948018Z","steps":["trace[2138599052] 'range keys from in-memory index tree'  (duration: 240.742565ms)"],"step_count":1}
	
	
	==> etcd [5083f2257a546bfd070658447229d9a46babe2aa2575ac4125da0983feb5d4ce] <==
	{"level":"info","ts":"2024-07-03T23:58:47.012091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-03T23:58:47.01223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-03T23:58:47.01229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 received MsgPreVoteResp from c9a5eb5753c44688 at term 2"}
	{"level":"info","ts":"2024-07-03T23:58:47.012332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 became candidate at term 3"}
	{"level":"info","ts":"2024-07-03T23:58:47.012366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 received MsgVoteResp from c9a5eb5753c44688 at term 3"}
	{"level":"info","ts":"2024-07-03T23:58:47.012432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 became leader at term 3"}
	{"level":"info","ts":"2024-07-03T23:58:47.012469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c9a5eb5753c44688 elected leader c9a5eb5753c44688 at term 3"}
	{"level":"info","ts":"2024-07-03T23:58:47.014252Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c9a5eb5753c44688","local-member-attributes":"{Name:pause-672261 ClientURLs:[https://192.168.61.246:2379]}","request-path":"/0/members/c9a5eb5753c44688/attributes","cluster-id":"f649e0b6c01be2c4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-03T23:58:47.014592Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-03T23:58:47.017144Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-03T23:58:47.017192Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-03T23:58:47.017207Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-03T23:58:47.017833Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.246:2379"}
	{"level":"info","ts":"2024-07-03T23:58:47.021131Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	2024/07/03 23:58:48 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-03T23:58:50.033253Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-03T23:58:50.033317Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-672261","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.246:2380"],"advertise-client-urls":["https://192.168.61.246:2379"]}
	{"level":"warn","ts":"2024-07-03T23:58:50.033404Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-03T23:58:50.033431Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-03T23:58:50.035298Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.246:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-03T23:58:50.035388Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.246:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-03T23:58:50.035478Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c9a5eb5753c44688","current-leader-member-id":"c9a5eb5753c44688"}
	{"level":"info","ts":"2024-07-03T23:58:50.038526Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.61.246:2380"}
	{"level":"info","ts":"2024-07-03T23:58:50.038656Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.61.246:2380"}
	{"level":"info","ts":"2024-07-03T23:58:50.038667Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-672261","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.246:2380"],"advertise-client-urls":["https://192.168.61.246:2379"]}
	
	
	==> kernel <==
	 23:59:15 up 1 min,  0 users,  load average: 0.82, 0.36, 0.14
	Linux pause-672261 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6ec9605f6c153b5ca6c125367fafd9970bb84b2363910e7fa2122f505a50e576] <==
	I0703 23:58:55.077913       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0703 23:58:55.078664       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0703 23:58:55.078985       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0703 23:58:55.079332       1 shared_informer.go:320] Caches are synced for configmaps
	I0703 23:58:55.079728       1 aggregator.go:165] initial CRD sync complete...
	I0703 23:58:55.079773       1 autoregister_controller.go:141] Starting autoregister controller
	I0703 23:58:55.079797       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0703 23:58:55.079820       1 cache.go:39] Caches are synced for autoregister controller
	I0703 23:58:55.136456       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0703 23:58:55.138866       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0703 23:58:55.138924       1 policy_source.go:224] refreshing policies
	I0703 23:58:55.169898       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0703 23:58:55.989495       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0703 23:58:56.818521       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0703 23:58:56.836836       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0703 23:58:56.879567       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0703 23:58:56.921758       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0703 23:58:56.933668       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0703 23:59:06.991282       1 trace.go:236] Trace[206545184]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.61.246,type:*v1.Endpoints,resource:apiServerIPInfo (03-Jul-2024 23:59:06.385) (total time: 606ms):
	Trace[206545184]: ---"initial value restored" 214ms (23:59:06.599)
	Trace[206545184]: ---"Transaction prepared" 141ms (23:59:06.741)
	Trace[206545184]: ---"Txn call completed" 249ms (23:59:06.991)
	Trace[206545184]: [606.04016ms] [606.04016ms] END
	I0703 23:59:08.114392       1 controller.go:615] quota admission added evaluator for: endpoints
	I0703 23:59:08.228384       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [c938b9c08ac5e4edee6355982405f2f340e762c0dae0d9e19ceaf3d5f9163fa0] <==
	E0703 23:58:48.749396       1 controller.go:123] "Will retry updating lease" err="failed 5 attempts to update lease" interval="10s"
	I0703 23:58:48.750909       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0703 23:58:48.751168       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0703 23:58:48.751258       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0703 23:58:48.751292       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0703 23:58:48.751329       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0703 23:58:48.751361       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0703 23:58:48.751379       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0703 23:58:48.752235       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0703 23:58:48.752333       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0703 23:58:48.752845       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0703 23:58:48.752901       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0703 23:58:48.753114       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 155.5µs, panicked: false, err: context canceled, panic-reason: <nil>
	I0703 23:58:48.753294       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0703 23:58:48.757274       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0703 23:58:48.757657       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0703 23:58:48.757861       1 timeout.go:142] post-timeout activity - time-elapsed: 4.665486ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-2tttcgg3p3rkfk47loizpgnv64" result: <nil>
	I0703 23:58:48.760539       1 controller.go:157] Shutting down quota evaluator
	I0703 23:58:48.760695       1 controller.go:176] quota evaluator worker shutdown
	I0703 23:58:48.761244       1 controller.go:176] quota evaluator worker shutdown
	I0703 23:58:48.761256       1 controller.go:176] quota evaluator worker shutdown
	I0703 23:58:48.761720       1 controller.go:176] quota evaluator worker shutdown
	I0703 23:58:48.761973       1 controller.go:176] quota evaluator worker shutdown
	W0703 23:58:49.570936       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0703 23:58:49.571643       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	
	
	==> kube-controller-manager [bd38fef524a488bdebc66021b178e98588666ebb95859ce13e148b0a1079282f] <==
	I0703 23:59:08.215348       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0703 23:59:08.221728       1 shared_informer.go:320] Caches are synced for TTL
	I0703 23:59:08.249368       1 shared_informer.go:320] Caches are synced for GC
	I0703 23:59:08.249608       1 shared_informer.go:320] Caches are synced for taint
	I0703 23:59:08.251693       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0703 23:59:08.252311       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-672261"
	I0703 23:59:08.253015       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0703 23:59:08.255167       1 shared_informer.go:320] Caches are synced for node
	I0703 23:59:08.255358       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0703 23:59:08.255473       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0703 23:59:08.255500       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0703 23:59:08.255573       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0703 23:59:08.292183       1 shared_informer.go:320] Caches are synced for namespace
	I0703 23:59:08.314134       1 shared_informer.go:320] Caches are synced for ephemeral
	I0703 23:59:08.317500       1 shared_informer.go:320] Caches are synced for service account
	I0703 23:59:08.318754       1 shared_informer.go:320] Caches are synced for stateful set
	I0703 23:59:08.352922       1 shared_informer.go:320] Caches are synced for resource quota
	I0703 23:59:08.354363       1 shared_informer.go:320] Caches are synced for resource quota
	I0703 23:59:08.358641       1 shared_informer.go:320] Caches are synced for expand
	I0703 23:59:08.358651       1 shared_informer.go:320] Caches are synced for attach detach
	I0703 23:59:08.367938       1 shared_informer.go:320] Caches are synced for persistent volume
	I0703 23:59:08.370581       1 shared_informer.go:320] Caches are synced for PVC protection
	I0703 23:59:08.777538       1 shared_informer.go:320] Caches are synced for garbage collector
	I0703 23:59:08.812249       1 shared_informer.go:320] Caches are synced for garbage collector
	I0703 23:59:08.812299       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [c051fb73177f2b8f53a64318b2c9c40a7d3325c2de510fab240ab9e74127c500] <==
	I0703 23:58:46.243621       1 serving.go:380] Generated self-signed cert in-memory
	I0703 23:58:46.652651       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0703 23:58:46.652690       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:58:46.654435       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0703 23:58:46.660188       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0703 23:58:46.660505       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0703 23:58:46.660651       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-proxy [7580dd19096e8dfdeedb31871cc6752208e7b1688abfc59c76d9d64fbdb5b18f] <==
	
	
	==> kube-proxy [9a2f3ab0edd24f1ccaae2b0559368a3d39cb960d55b0e9c2dce6fc9d58b6b10c] <==
	I0703 23:58:56.051687       1 server_linux.go:69] "Using iptables proxy"
	I0703 23:58:56.076720       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.246"]
	I0703 23:58:56.137510       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0703 23:58:56.137669       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0703 23:58:56.137775       1 server_linux.go:165] "Using iptables Proxier"
	I0703 23:58:56.141402       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0703 23:58:56.141592       1 server.go:872] "Version info" version="v1.30.2"
	I0703 23:58:56.141629       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:58:56.143353       1 config.go:192] "Starting service config controller"
	I0703 23:58:56.143380       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0703 23:58:56.143419       1 config.go:101] "Starting endpoint slice config controller"
	I0703 23:58:56.143424       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0703 23:58:56.143752       1 config.go:319] "Starting node config controller"
	I0703 23:58:56.143781       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0703 23:58:56.243508       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0703 23:58:56.243611       1 shared_informer.go:320] Caches are synced for service config
	I0703 23:58:56.243866       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9c33b979b20f24726f0e533d046fcc39df3993b40283d1d086266d97d5bc34a4] <==
	I0703 23:58:46.857469       1 serving.go:380] Generated self-signed cert in-memory
	W0703 23:58:48.587752       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0703 23:58:48.588020       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0703 23:58:48.588141       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0703 23:58:48.588166       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0703 23:58:48.630759       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0703 23:58:48.631259       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:58:48.638208       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0703 23:58:48.638288       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0703 23:58:48.638326       1 shared_informer.go:316] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0703 23:58:48.638357       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0703 23:58:48.638576       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0703 23:58:48.638654       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0703 23:58:48.638857       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0703 23:58:48.638960       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0703 23:58:48.641433       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0703 23:58:48.641592       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a6b6112031159bb3d610ddd89f8128b4df4459509810094b0bc5fdb2f34aa20d] <==
	I0703 23:58:53.325261       1 serving.go:380] Generated self-signed cert in-memory
	I0703 23:58:55.099769       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0703 23:58:55.099894       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:58:55.104404       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0703 23:58:55.104819       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0703 23:58:55.104891       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0703 23:58:55.105119       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0703 23:58:55.104822       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0703 23:58:55.105293       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0703 23:58:55.104744       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0703 23:58:55.105473       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0703 23:58:55.206223       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0703 23:58:55.206351       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0703 23:58:55.206224       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Jul 03 23:58:51 pause-672261 kubelet[3728]: E0703 23:58:51.760214    3728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-672261?timeout=10s\": dial tcp 192.168.61.246:8443: connect: connection refused" interval="400ms"
	Jul 03 23:58:51 pause-672261 kubelet[3728]: I0703 23:58:51.845286    3728 kubelet_node_status.go:73] "Attempting to register node" node="pause-672261"
	Jul 03 23:58:51 pause-672261 kubelet[3728]: E0703 23:58:51.846167    3728 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.246:8443: connect: connection refused" node="pause-672261"
	Jul 03 23:58:52 pause-672261 kubelet[3728]: I0703 23:58:52.031990    3728 scope.go:117] "RemoveContainer" containerID="9c33b979b20f24726f0e533d046fcc39df3993b40283d1d086266d97d5bc34a4"
	Jul 03 23:58:52 pause-672261 kubelet[3728]: I0703 23:58:52.033106    3728 scope.go:117] "RemoveContainer" containerID="5083f2257a546bfd070658447229d9a46babe2aa2575ac4125da0983feb5d4ce"
	Jul 03 23:58:52 pause-672261 kubelet[3728]: I0703 23:58:52.034308    3728 scope.go:117] "RemoveContainer" containerID="c938b9c08ac5e4edee6355982405f2f340e762c0dae0d9e19ceaf3d5f9163fa0"
	Jul 03 23:58:52 pause-672261 kubelet[3728]: I0703 23:58:52.035413    3728 scope.go:117] "RemoveContainer" containerID="c051fb73177f2b8f53a64318b2c9c40a7d3325c2de510fab240ab9e74127c500"
	Jul 03 23:58:52 pause-672261 kubelet[3728]: E0703 23:58:52.161543    3728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-672261?timeout=10s\": dial tcp 192.168.61.246:8443: connect: connection refused" interval="800ms"
	Jul 03 23:58:52 pause-672261 kubelet[3728]: I0703 23:58:52.247681    3728 kubelet_node_status.go:73] "Attempting to register node" node="pause-672261"
	Jul 03 23:58:52 pause-672261 kubelet[3728]: E0703 23:58:52.249126    3728 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.246:8443: connect: connection refused" node="pause-672261"
	Jul 03 23:58:53 pause-672261 kubelet[3728]: I0703 23:58:53.052265    3728 kubelet_node_status.go:73] "Attempting to register node" node="pause-672261"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.223296    3728 kubelet_node_status.go:112] "Node was previously registered" node="pause-672261"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.223687    3728 kubelet_node_status.go:76] "Successfully registered node" node="pause-672261"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.225691    3728 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.226800    3728 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: E0703 23:58:55.347857    3728 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"etcd-pause-672261\" already exists" pod="kube-system/etcd-pause-672261"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.536454    3728 apiserver.go:52] "Watching apiserver"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.539984    3728 topology_manager.go:215] "Topology Admit Handler" podUID="ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2" podNamespace="kube-system" podName="kube-proxy-mwcv2"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.540286    3728 topology_manager.go:215] "Topology Admit Handler" podUID="9b7401eb-5d71-440b-ac85-f1a3ab07de21" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sr5wm"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.541590    3728 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.565601    3728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2-xtables-lock\") pod \"kube-proxy-mwcv2\" (UID: \"ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2\") " pod="kube-system/kube-proxy-mwcv2"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.565868    3728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2-lib-modules\") pod \"kube-proxy-mwcv2\" (UID: \"ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2\") " pod="kube-system/kube-proxy-mwcv2"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.841438    3728 scope.go:117] "RemoveContainer" containerID="e655f54b5cf30a17316bc7ab25e05d48d2099763dea5f1873754c04eee91b0e9"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.842951    3728 scope.go:117] "RemoveContainer" containerID="7580dd19096e8dfdeedb31871cc6752208e7b1688abfc59c76d9d64fbdb5b18f"
	Jul 03 23:59:00 pause-672261 kubelet[3728]: I0703 23:59:00.399898    3728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-672261 -n pause-672261
helpers_test.go:261: (dbg) Run:  kubectl --context pause-672261 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-672261 -n pause-672261
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-672261 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-672261 logs -n 25: (1.480826043s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-676605 sudo cat                            | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo cat                            | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo docker                         | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo cat                            | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo cat                            | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo cat                            | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo cat                            | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo                                | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo find                           | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-676605 sudo crio                           | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-676605                                     | cilium-676605             | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC | 03 Jul 24 23:58 UTC |
	| start   | -p cert-expiration-979438                            | cert-expiration-979438    | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-175902                          | force-systemd-env-175902  | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC | 03 Jul 24 23:58 UTC |
	| stop    | -p kubernetes-upgrade-652205                         | kubernetes-upgrade-652205 | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC | 03 Jul 24 23:58 UTC |
	| start   | -p force-systemd-flag-163167                         | force-systemd-flag-163167 | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-652205                         | kubernetes-upgrade-652205 | jenkins | v1.33.1 | 03 Jul 24 23:58 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 23:58:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 23:58:50.255305   57609 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:58:50.255548   57609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:58:50.255556   57609 out.go:304] Setting ErrFile to fd 2...
	I0703 23:58:50.255560   57609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:58:50.255757   57609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 23:58:50.256333   57609 out.go:298] Setting JSON to false
	I0703 23:58:50.257285   57609 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6070,"bootTime":1720045060,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:58:50.257348   57609 start.go:139] virtualization: kvm guest
	I0703 23:58:50.259291   57609 out.go:177] * [kubernetes-upgrade-652205] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 23:58:50.260569   57609 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 23:58:50.260615   57609 notify.go:220] Checking for updates...
	I0703 23:58:50.262769   57609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:58:50.264100   57609 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:58:50.265328   57609 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:58:50.266659   57609 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 23:58:50.268004   57609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 23:58:50.269501   57609 config.go:182] Loaded profile config "kubernetes-upgrade-652205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0703 23:58:50.269882   57609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:58:50.269959   57609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:58:50.286433   57609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40645
	I0703 23:58:50.286860   57609 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:58:50.287388   57609 main.go:141] libmachine: Using API Version  1
	I0703 23:58:50.287431   57609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:58:50.287734   57609 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:58:50.287925   57609 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .DriverName
	I0703 23:58:50.288150   57609 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:58:50.288545   57609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:58:50.288588   57609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:58:50.304288   57609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32887
	I0703 23:58:50.304738   57609 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:58:50.305226   57609 main.go:141] libmachine: Using API Version  1
	I0703 23:58:50.305253   57609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:58:50.305671   57609 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:58:50.305864   57609 main.go:141] libmachine: (kubernetes-upgrade-652205) Calling .DriverName
	I0703 23:58:50.345713   57609 out.go:177] * Using the kvm2 driver based on existing profile
	I0703 23:58:50.346998   57609 start.go:297] selected driver: kvm2
	I0703 23:58:50.347020   57609 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-652205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-652205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:58:50.347162   57609 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 23:58:50.348147   57609 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:58:50.348251   57609 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 23:58:50.366097   57609 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 23:58:50.366482   57609 cni.go:84] Creating CNI manager for ""
	I0703 23:58:50.366499   57609 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 23:58:50.366538   57609 start.go:340] cluster config:
	{Name:kubernetes-upgrade-652205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubernetes-upgrade-652205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:58:50.366629   57609 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:58:50.368443   57609 out.go:177] * Starting "kubernetes-upgrade-652205" primary control-plane node in "kubernetes-upgrade-652205" cluster
	I0703 23:58:50.212962   54840 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 7580dd19096e8dfdeedb31871cc6752208e7b1688abfc59c76d9d64fbdb5b18f 9c33b979b20f24726f0e533d046fcc39df3993b40283d1d086266d97d5bc34a4 c938b9c08ac5e4edee6355982405f2f340e762c0dae0d9e19ceaf3d5f9163fa0 c051fb73177f2b8f53a64318b2c9c40a7d3325c2de510fab240ab9e74127c500 5083f2257a546bfd070658447229d9a46babe2aa2575ac4125da0983feb5d4ce e655f54b5cf30a17316bc7ab25e05d48d2099763dea5f1873754c04eee91b0e9 fb0d52d57abf1e2ca5643f14de857dd42a2e8bae81c0903cabe97827556fc4ab 33f5d01f128e771070627b3977fb04e1df424248d48bfe3336acc4fe0311f100 3c28c7469f35fde19ccf21cb3fe36d58839c03905c69bf409b736ee05f88c8af 2fab2daa3f0fc91e1924357505664423588e3dd99aeded4182030e642c7e10e9 ccb3edb62ffb109e42cd37f4ae966ccbd8df86fff432d077be51cd225a623d9e: (4.31929901s)
	W0703 23:58:50.213047   54840 kubeadm.go:638] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 7580dd19096e8dfdeedb31871cc6752208e7b1688abfc59c76d9d64fbdb5b18f 9c33b979b20f24726f0e533d046fcc39df3993b40283d1d086266d97d5bc34a4 c938b9c08ac5e4edee6355982405f2f340e762c0dae0d9e19ceaf3d5f9163fa0 c051fb73177f2b8f53a64318b2c9c40a7d3325c2de510fab240ab9e74127c500 5083f2257a546bfd070658447229d9a46babe2aa2575ac4125da0983feb5d4ce e655f54b5cf30a17316bc7ab25e05d48d2099763dea5f1873754c04eee91b0e9 fb0d52d57abf1e2ca5643f14de857dd42a2e8bae81c0903cabe97827556fc4ab 33f5d01f128e771070627b3977fb04e1df424248d48bfe3336acc4fe0311f100 3c28c7469f35fde19ccf21cb3fe36d58839c03905c69bf409b736ee05f88c8af 2fab2daa3f0fc91e1924357505664423588e3dd99aeded4182030e642c7e10e9 ccb3edb62ffb109e42cd37f4ae966ccbd8df86fff432d077be51cd225a623d9e: Process exited with status 1
	stdout:
	7580dd19096e8dfdeedb31871cc6752208e7b1688abfc59c76d9d64fbdb5b18f
	9c33b979b20f24726f0e533d046fcc39df3993b40283d1d086266d97d5bc34a4
	c938b9c08ac5e4edee6355982405f2f340e762c0dae0d9e19ceaf3d5f9163fa0
	c051fb73177f2b8f53a64318b2c9c40a7d3325c2de510fab240ab9e74127c500
	5083f2257a546bfd070658447229d9a46babe2aa2575ac4125da0983feb5d4ce
	e655f54b5cf30a17316bc7ab25e05d48d2099763dea5f1873754c04eee91b0e9
	fb0d52d57abf1e2ca5643f14de857dd42a2e8bae81c0903cabe97827556fc4ab
	
	stderr:
	E0703 23:58:50.204329    3568 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33f5d01f128e771070627b3977fb04e1df424248d48bfe3336acc4fe0311f100\": container with ID starting with 33f5d01f128e771070627b3977fb04e1df424248d48bfe3336acc4fe0311f100 not found: ID does not exist" containerID="33f5d01f128e771070627b3977fb04e1df424248d48bfe3336acc4fe0311f100"
	time="2024-07-03T23:58:50Z" level=fatal msg="stopping the container \"33f5d01f128e771070627b3977fb04e1df424248d48bfe3336acc4fe0311f100\": rpc error: code = NotFound desc = could not find container \"33f5d01f128e771070627b3977fb04e1df424248d48bfe3336acc4fe0311f100\": container with ID starting with 33f5d01f128e771070627b3977fb04e1df424248d48bfe3336acc4fe0311f100 not found: ID does not exist"
	I0703 23:58:50.213126   54840 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0703 23:58:50.258329   54840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0703 23:58:50.270832   54840 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 Jul  3 23:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Jul  3 23:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jul  3 23:57 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Jul  3 23:57 /etc/kubernetes/scheduler.conf
	
	I0703 23:58:50.270891   54840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0703 23:58:50.281804   54840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0703 23:58:50.293328   54840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0703 23:58:50.304354   54840 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0703 23:58:50.304407   54840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0703 23:58:50.315691   54840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0703 23:58:50.326950   54840 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0703 23:58:50.327004   54840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0703 23:58:50.338615   54840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0703 23:58:50.350637   54840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0703 23:58:50.415255   54840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0703 23:58:51.186754   54840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
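The three ssh_runner entries above, together with the control-plane and etcd phases a little further down in this log, show this restart path rebuilding the cluster by running individual "kubeadm init phase" steps rather than a full kubeadm init. Below is a minimal local sketch of that same phase sequence driven with os/exec; the kubeadm binary path and the kubeadm.yaml path are copied from the log lines, while everything else (running locally instead of over SSH, the error handling) is illustrative only.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Paths as they appear in the log; adjust for a real environment.
		kubeadm := "/var/lib/minikube/binaries/v1.30.2/kubeadm"
		cfg := "/var/tmp/minikube/kubeadm.yaml"

		// Same phase order the log shows: certs, kubeconfig, kubelet-start,
		// control-plane, etcd.
		phases := [][]string{
			{"init", "phase", "certs", "all", "--config", cfg},
			{"init", "phase", "kubeconfig", "all", "--config", cfg},
			{"init", "phase", "kubelet-start", "--config", cfg},
			{"init", "phase", "control-plane", "all", "--config", cfg},
			{"init", "phase", "etcd", "local", "--config", cfg},
		}

		for _, args := range phases {
			out, err := exec.Command(kubeadm, args...).CombinedOutput()
			if err != nil {
				log.Fatalf("kubeadm %v failed: %v\n%s", args, err, out)
			}
			fmt.Printf("kubeadm %v completed\n", args)
		}
	}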
	I0703 23:58:53.365008   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:53.365486   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | unable to find current IP address of domain cert-expiration-979438 in network mk-cert-expiration-979438
	I0703 23:58:53.365501   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | I0703 23:58:53.365445   57214 retry.go:31] will retry after 4.152673896s: waiting for machine to come up
	I0703 23:58:50.188237   57566 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:58:50.188304   57566 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0703 23:58:50.188325   57566 cache.go:56] Caching tarball of preloaded images
	I0703 23:58:50.188409   57566 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:58:50.188423   57566 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 23:58:50.188557   57566 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/force-systemd-flag-163167/config.json ...
	I0703 23:58:50.188582   57566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/force-systemd-flag-163167/config.json: {Name:mkdfedbba126f12ae6877bcd088b88b4996c2b27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:58:50.188744   57566 start.go:360] acquireMachinesLock for force-systemd-flag-163167: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 23:58:50.369661   57609 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:58:50.369706   57609 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0703 23:58:50.369713   57609 cache.go:56] Caching tarball of preloaded images
	I0703 23:58:50.369775   57609 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:58:50.369785   57609 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0703 23:58:50.369866   57609 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kubernetes-upgrade-652205/config.json ...
	I0703 23:58:50.370043   57609 start.go:360] acquireMachinesLock for kubernetes-upgrade-652205: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
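The preload.go entries above (for both the force-systemd-flag-163167 and kubernetes-upgrade-652205 profiles) look for a cached preload tarball and skip the download when it is already present. A minimal sketch of that existence check, assuming only the cache path shown in the log; the download branch is just a placeholder.

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Cache location copied from the log lines above.
		tarball := "/home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4"

		if _, err := os.Stat(tarball); err == nil {
			fmt.Println("found local preload in cache, skipping download")
		} else if os.IsNotExist(err) {
			fmt.Println("preload not cached; a real run would download it here")
		} else {
			fmt.Println("could not stat preload tarball:", err)
		}
	}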
	I0703 23:58:51.402741   54840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0703 23:58:51.475786   54840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0703 23:58:51.611347   54840 api_server.go:52] waiting for apiserver process to appear ...
	I0703 23:58:51.611426   54840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:58:52.111629   54840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:58:52.612162   54840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:58:52.628500   54840 api_server.go:72] duration metric: took 1.017162229s to wait for apiserver process to appear ...
	I0703 23:58:52.628538   54840 api_server.go:88] waiting for apiserver healthz status ...
	I0703 23:58:52.628563   54840 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0703 23:58:55.024599   54840 api_server.go:279] https://192.168.61.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0703 23:58:55.024642   54840 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0703 23:58:55.024663   54840 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0703 23:58:55.086472   54840 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0703 23:58:55.086508   54840 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0703 23:58:55.128628   54840 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0703 23:58:55.134918   54840 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0703 23:58:55.134956   54840 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0703 23:58:55.629027   54840 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0703 23:58:55.637469   54840 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0703 23:58:55.637504   54840 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0703 23:58:56.129073   54840 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0703 23:58:56.136781   54840 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0703 23:58:56.136813   54840 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0703 23:58:56.629565   54840 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0703 23:58:56.633950   54840 api_server.go:279] https://192.168.61.246:8443/healthz returned 200:
	ok
	I0703 23:58:56.640448   54840 api_server.go:141] control plane version: v1.30.2
	I0703 23:58:56.640479   54840 api_server.go:131] duration metric: took 4.011932639s to wait for apiserver health ...
	I0703 23:58:56.640498   54840 cni.go:84] Creating CNI manager for ""
	I0703 23:58:56.640507   54840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 23:58:56.642579   54840 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
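The api_server.go entries above poll https://192.168.61.246:8443/healthz roughly every 500ms and tolerate the intermediate 403 and 500 responses until a 200 comes back. A minimal sketch of such a polling loop, using only the URL and overall timeout taken from the log; waitForHealthz is an illustrative helper, not minikube's API, and TLS verification is skipped here because the probe presents no client certificate.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				// 403 (RBAC not bootstrapped yet) and 500 (post-start hooks still
				// failing) are expected while the apiserver comes up.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.246:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}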
	I0703 23:58:58.961057   57566 start.go:364] duration metric: took 8.772289686s to acquireMachinesLock for "force-systemd-flag-163167"
	I0703 23:58:58.961130   57566 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-163167 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-163167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:58:58.961244   57566 start.go:125] createHost starting for "" (driver="kvm2")
	I0703 23:58:57.521446   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.522021   57192 main.go:141] libmachine: (cert-expiration-979438) Found IP for machine: 192.168.50.228
	I0703 23:58:57.522048   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has current primary IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.522071   57192 main.go:141] libmachine: (cert-expiration-979438) Reserving static IP address...
	I0703 23:58:57.522389   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | unable to find host DHCP lease matching {name: "cert-expiration-979438", mac: "52:54:00:0a:40:ca", ip: "192.168.50.228"} in network mk-cert-expiration-979438
	I0703 23:58:57.601318   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | Getting to WaitForSSH function...
	I0703 23:58:57.601338   57192 main.go:141] libmachine: (cert-expiration-979438) Reserved static IP address: 192.168.50.228
	I0703 23:58:57.601350   57192 main.go:141] libmachine: (cert-expiration-979438) Waiting for SSH to be available...
	I0703 23:58:57.603838   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.604364   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:57.604395   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.604541   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | Using SSH client type: external
	I0703 23:58:57.604564   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/cert-expiration-979438/id_rsa (-rw-------)
	I0703 23:58:57.604601   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/cert-expiration-979438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0703 23:58:57.604609   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | About to run SSH command:
	I0703 23:58:57.604620   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | exit 0
	I0703 23:58:57.727977   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | SSH cmd err, output: <nil>: 
	I0703 23:58:57.728299   57192 main.go:141] libmachine: (cert-expiration-979438) KVM machine creation complete!
	I0703 23:58:57.728624   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetConfigRaw
	I0703 23:58:57.729113   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .DriverName
	I0703 23:58:57.729294   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .DriverName
	I0703 23:58:57.729452   57192 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0703 23:58:57.729460   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetState
	I0703 23:58:57.730739   57192 main.go:141] libmachine: Detecting operating system of created instance...
	I0703 23:58:57.730745   57192 main.go:141] libmachine: Waiting for SSH to be available...
	I0703 23:58:57.730750   57192 main.go:141] libmachine: Getting to WaitForSSH function...
	I0703 23:58:57.730754   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:57.733147   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.733508   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:57.733526   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.733693   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:57.733874   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:57.734035   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:57.734145   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:57.734292   57192 main.go:141] libmachine: Using SSH client type: native
	I0703 23:58:57.734472   57192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0703 23:58:57.734477   57192 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0703 23:58:57.835479   57192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:58:57.835489   57192 main.go:141] libmachine: Detecting the provisioner...
	I0703 23:58:57.835494   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:57.838401   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.838725   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:57.838758   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.838912   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:57.839115   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:57.839307   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:57.839451   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:57.839579   57192 main.go:141] libmachine: Using SSH client type: native
	I0703 23:58:57.839733   57192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0703 23:58:57.839738   57192 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0703 23:58:57.940728   57192 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0703 23:58:57.940812   57192 main.go:141] libmachine: found compatible host: buildroot
	I0703 23:58:57.940820   57192 main.go:141] libmachine: Provisioning with buildroot...
	I0703 23:58:57.940828   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetMachineName
	I0703 23:58:57.941103   57192 buildroot.go:166] provisioning hostname "cert-expiration-979438"
	I0703 23:58:57.941124   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetMachineName
	I0703 23:58:57.941324   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:57.944183   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.944538   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:57.944560   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:57.944665   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:57.944842   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:57.944975   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:57.945077   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:57.945218   57192 main.go:141] libmachine: Using SSH client type: native
	I0703 23:58:57.945405   57192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0703 23:58:57.945412   57192 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-979438 && echo "cert-expiration-979438" | sudo tee /etc/hostname
	I0703 23:58:58.058668   57192 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-979438
	
	I0703 23:58:58.058689   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:58.061654   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.062005   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.062024   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.062261   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:58.062416   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.062561   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.062662   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:58.062803   57192 main.go:141] libmachine: Using SSH client type: native
	I0703 23:58:58.062969   57192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0703 23:58:58.062980   57192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-979438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-979438/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-979438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 23:58:58.173654   57192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:58:58.173673   57192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0703 23:58:58.173705   57192 buildroot.go:174] setting up certificates
	I0703 23:58:58.173712   57192 provision.go:84] configureAuth start
	I0703 23:58:58.173720   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetMachineName
	I0703 23:58:58.174031   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetIP
	I0703 23:58:58.176553   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.176870   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.176879   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.177014   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:58.179106   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.179457   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.179478   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.179619   57192 provision.go:143] copyHostCerts
	I0703 23:58:58.179671   57192 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0703 23:58:58.179677   57192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0703 23:58:58.179740   57192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0703 23:58:58.179812   57192 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0703 23:58:58.179815   57192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0703 23:58:58.179836   57192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0703 23:58:58.179908   57192 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0703 23:58:58.179913   57192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0703 23:58:58.179938   57192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0703 23:58:58.180007   57192 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-979438 san=[127.0.0.1 192.168.50.228 cert-expiration-979438 localhost minikube]
	I0703 23:58:58.285208   57192 provision.go:177] copyRemoteCerts
	I0703 23:58:58.285251   57192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:58:58.285272   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:58.287802   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.288094   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.288108   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.288310   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:58.288486   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.288631   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:58.288789   57192 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/cert-expiration-979438/id_rsa Username:docker}
	I0703 23:58:58.370497   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0703 23:58:58.397471   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0703 23:58:58.423490   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0703 23:58:58.449712   57192 provision.go:87] duration metric: took 275.987438ms to configureAuth
	I0703 23:58:58.449731   57192 buildroot.go:189] setting minikube options for container-runtime
	I0703 23:58:58.449892   57192 config.go:182] Loaded profile config "cert-expiration-979438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:58:58.449949   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:58.452798   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.453120   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.453142   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.453327   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:58.453529   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.453697   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.453873   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:58.454080   57192 main.go:141] libmachine: Using SSH client type: native
	I0703 23:58:58.454284   57192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0703 23:58:58.454298   57192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0703 23:58:58.723696   57192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0703 23:58:58.723710   57192 main.go:141] libmachine: Checking connection to Docker...
	I0703 23:58:58.723734   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetURL
	I0703 23:58:58.724945   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | Using libvirt version 6000000
	I0703 23:58:58.726991   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.727371   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.727387   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.727585   57192 main.go:141] libmachine: Docker is up and running!
	I0703 23:58:58.727593   57192 main.go:141] libmachine: Reticulating splines...
	I0703 23:58:58.727598   57192 client.go:171] duration metric: took 24.012516368s to LocalClient.Create
	I0703 23:58:58.727616   57192 start.go:167] duration metric: took 24.012564811s to libmachine.API.Create "cert-expiration-979438"
	I0703 23:58:58.727622   57192 start.go:293] postStartSetup for "cert-expiration-979438" (driver="kvm2")
	I0703 23:58:58.727630   57192 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 23:58:58.727642   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .DriverName
	I0703 23:58:58.727887   57192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 23:58:58.727908   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:58.729885   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.730192   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.730203   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.730399   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:58.730593   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.730726   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:58.730835   57192 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/cert-expiration-979438/id_rsa Username:docker}
	I0703 23:58:58.811372   57192 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 23:58:58.815919   57192 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 23:58:58.815939   57192 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0703 23:58:58.816001   57192 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0703 23:58:58.816102   57192 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0703 23:58:58.816206   57192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0703 23:58:58.827061   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:58:58.853187   57192 start.go:296] duration metric: took 125.55416ms for postStartSetup
	I0703 23:58:58.853222   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetConfigRaw
	I0703 23:58:58.853831   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetIP
	I0703 23:58:58.856556   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.856958   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.856981   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.857318   57192 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/config.json ...
	I0703 23:58:58.857554   57192 start.go:128] duration metric: took 24.160347157s to createHost
	I0703 23:58:58.857579   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:58.859573   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.859839   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.859857   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.860004   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:58.860204   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.860365   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.860505   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:58.860641   57192 main.go:141] libmachine: Using SSH client type: native
	I0703 23:58:58.860797   57192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0703 23:58:58.860805   57192 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0703 23:58:58.960928   57192 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051138.935558754
	
	I0703 23:58:58.960952   57192 fix.go:216] guest clock: 1720051138.935558754
	I0703 23:58:58.960957   57192 fix.go:229] Guest: 2024-07-03 23:58:58.935558754 +0000 UTC Remote: 2024-07-03 23:58:58.857563509 +0000 UTC m=+24.262469996 (delta=77.995245ms)
	I0703 23:58:58.960974   57192 fix.go:200] guest clock delta is within tolerance: 77.995245ms
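	(Arithmetic check on the reported delta: 1720051138.935558754 − 1720051138.857563509 = 0.077995245 s, i.e. the 77.995245ms shown above.)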
	I0703 23:58:58.960977   57192 start.go:83] releasing machines lock for "cert-expiration-979438", held for 24.263843573s
	I0703 23:58:58.961001   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .DriverName
	I0703 23:58:58.961328   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetIP
	I0703 23:58:58.964292   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.964675   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.964694   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.964879   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .DriverName
	I0703 23:58:58.965384   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .DriverName
	I0703 23:58:58.965569   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .DriverName
	I0703 23:58:58.965652   57192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 23:58:58.965693   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:58.965767   57192 ssh_runner.go:195] Run: cat /version.json
	I0703 23:58:58.965786   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHHostname
	I0703 23:58:58.968515   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.968826   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.968948   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.968969   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.969183   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:58.969260   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:58:58.969281   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:58:58.969433   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHPort
	I0703 23:58:58.969468   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.969579   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:58.969586   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHKeyPath
	I0703 23:58:58.969717   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetSSHUsername
	I0703 23:58:58.969727   57192 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/cert-expiration-979438/id_rsa Username:docker}
	I0703 23:58:58.969855   57192 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/cert-expiration-979438/id_rsa Username:docker}
	I0703 23:58:59.078807   57192 ssh_runner.go:195] Run: systemctl --version
	I0703 23:58:59.085212   57192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0703 23:58:59.251350   57192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0703 23:58:59.258049   57192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 23:58:59.258114   57192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 23:58:59.275129   57192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0703 23:58:59.275143   57192 start.go:494] detecting cgroup driver to use...
	I0703 23:58:59.275205   57192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 23:58:59.292357   57192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 23:58:59.306476   57192 docker.go:217] disabling cri-docker service (if available) ...
	I0703 23:58:59.306530   57192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0703 23:58:59.322626   57192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0703 23:58:59.337180   57192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0703 23:58:59.466808   57192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0703 23:58:59.625879   57192 docker.go:233] disabling docker service ...
	I0703 23:58:59.625942   57192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0703 23:58:58.963651   57566 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0703 23:58:58.963889   57566 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:58:58.963953   57566 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:58:58.982624   57566 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I0703 23:58:58.983070   57566 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:58:58.983668   57566 main.go:141] libmachine: Using API Version  1
	I0703 23:58:58.983693   57566 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:58:58.984083   57566 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:58:58.984302   57566 main.go:141] libmachine: (force-systemd-flag-163167) Calling .GetMachineName
	I0703 23:58:58.984477   57566 main.go:141] libmachine: (force-systemd-flag-163167) Calling .DriverName
	I0703 23:58:58.984654   57566 start.go:159] libmachine.API.Create for "force-systemd-flag-163167" (driver="kvm2")
	I0703 23:58:58.984688   57566 client.go:168] LocalClient.Create starting
	I0703 23:58:58.984720   57566 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem
	I0703 23:58:58.984762   57566 main.go:141] libmachine: Decoding PEM data...
	I0703 23:58:58.984784   57566 main.go:141] libmachine: Parsing certificate...
	I0703 23:58:58.984860   57566 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem
	I0703 23:58:58.984895   57566 main.go:141] libmachine: Decoding PEM data...
	I0703 23:58:58.984914   57566 main.go:141] libmachine: Parsing certificate...
	I0703 23:58:58.984934   57566 main.go:141] libmachine: Running pre-create checks...
	I0703 23:58:58.984943   57566 main.go:141] libmachine: (force-systemd-flag-163167) Calling .PreCreateCheck
	I0703 23:58:58.985375   57566 main.go:141] libmachine: (force-systemd-flag-163167) Calling .GetConfigRaw
	I0703 23:58:58.985822   57566 main.go:141] libmachine: Creating machine...
	I0703 23:58:58.985840   57566 main.go:141] libmachine: (force-systemd-flag-163167) Calling .Create
	I0703 23:58:58.986016   57566 main.go:141] libmachine: (force-systemd-flag-163167) Creating KVM machine...
	I0703 23:58:58.987434   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | found existing default KVM network
	I0703 23:58:58.988855   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:58:58.988572   57703 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fa:41:7f} reservation:<nil>}
	I0703 23:58:58.989641   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:58:58.989561   57703 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f5:85:96} reservation:<nil>}
	I0703 23:58:58.990597   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:58:58.990512   57703 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:7b:6f:e3} reservation:<nil>}
	I0703 23:58:58.991589   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:58:58.991507   57703 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00032d590}
	I0703 23:58:58.991610   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | created network xml: 
	I0703 23:58:58.991623   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | <network>
	I0703 23:58:58.991632   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG |   <name>mk-force-systemd-flag-163167</name>
	I0703 23:58:58.991643   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG |   <dns enable='no'/>
	I0703 23:58:58.991654   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG |   
	I0703 23:58:58.991662   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0703 23:58:58.991671   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG |     <dhcp>
	I0703 23:58:58.991691   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0703 23:58:58.991712   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG |     </dhcp>
	I0703 23:58:58.991725   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG |   </ip>
	I0703 23:58:58.991749   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG |   
	I0703 23:58:58.991761   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | </network>
	I0703 23:58:58.991771   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | 
	I0703 23:58:58.997355   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | trying to create private KVM network mk-force-systemd-flag-163167 192.168.72.0/24...
	I0703 23:58:59.073036   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | private KVM network mk-force-systemd-flag-163167 192.168.72.0/24 created
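	For reference, the kvm2 driver creates this network through the libvirt API rather than the CLI; a rough hand-run equivalent of the step logged above, assuming the XML dumped earlier were saved to /tmp/mk-force-systemd-flag-163167.xml (an illustrative path, not one the test uses), would be:
	  virsh net-define /tmp/mk-force-systemd-flag-163167.xml   # register the network from the XML shown above (path is illustrative)
	  virsh net-start mk-force-systemd-flag-163167             # activate it so DHCP serves 192.168.72.0/24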
	I0703 23:58:59.073066   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:58:59.072982   57703 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:58:59.073080   57566 main.go:141] libmachine: (force-systemd-flag-163167) Setting up store path in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/force-systemd-flag-163167 ...
	I0703 23:58:59.073098   57566 main.go:141] libmachine: (force-systemd-flag-163167) Building disk image from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0703 23:58:59.073289   57566 main.go:141] libmachine: (force-systemd-flag-163167) Downloading /home/jenkins/minikube-integration/18998-9396/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso...
	I0703 23:58:59.314364   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:58:59.314217   57703 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/force-systemd-flag-163167/id_rsa...
	I0703 23:58:59.399285   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:58:59.399112   57703 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/force-systemd-flag-163167/force-systemd-flag-163167.rawdisk...
	I0703 23:58:59.399323   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Writing magic tar header
	I0703 23:58:59.399342   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Writing SSH key tar header
	I0703 23:58:59.399357   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:58:59.399231   57703 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/force-systemd-flag-163167 ...
	I0703 23:58:59.399372   57566 main.go:141] libmachine: (force-systemd-flag-163167) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/force-systemd-flag-163167 (perms=drwx------)
	I0703 23:58:59.399389   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/force-systemd-flag-163167
	I0703 23:58:59.399401   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines
	I0703 23:58:59.399414   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:58:59.399429   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396
	I0703 23:58:59.399439   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0703 23:58:59.399450   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Checking permissions on dir: /home/jenkins
	I0703 23:58:59.399488   57566 main.go:141] libmachine: (force-systemd-flag-163167) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines (perms=drwxr-xr-x)
	I0703 23:58:59.399497   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Checking permissions on dir: /home
	I0703 23:58:59.399513   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | Skipping /home - not owner
	I0703 23:58:59.399528   57566 main.go:141] libmachine: (force-systemd-flag-163167) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube (perms=drwxr-xr-x)
	I0703 23:58:59.399541   57566 main.go:141] libmachine: (force-systemd-flag-163167) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396 (perms=drwxrwxr-x)
	I0703 23:58:59.399556   57566 main.go:141] libmachine: (force-systemd-flag-163167) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0703 23:58:59.399568   57566 main.go:141] libmachine: (force-systemd-flag-163167) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0703 23:58:59.399645   57566 main.go:141] libmachine: (force-systemd-flag-163167) Creating domain...
	I0703 23:58:59.400682   57566 main.go:141] libmachine: (force-systemd-flag-163167) define libvirt domain using xml: 
	I0703 23:58:59.400708   57566 main.go:141] libmachine: (force-systemd-flag-163167) <domain type='kvm'>
	I0703 23:58:59.400727   57566 main.go:141] libmachine: (force-systemd-flag-163167)   <name>force-systemd-flag-163167</name>
	I0703 23:58:59.400738   57566 main.go:141] libmachine: (force-systemd-flag-163167)   <memory unit='MiB'>2048</memory>
	I0703 23:58:59.400745   57566 main.go:141] libmachine: (force-systemd-flag-163167)   <vcpu>2</vcpu>
	I0703 23:58:59.400755   57566 main.go:141] libmachine: (force-systemd-flag-163167)   <features>
	I0703 23:58:59.400788   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <acpi/>
	I0703 23:58:59.400811   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <apic/>
	I0703 23:58:59.400821   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <pae/>
	I0703 23:58:59.400836   57566 main.go:141] libmachine: (force-systemd-flag-163167)     
	I0703 23:58:59.400846   57566 main.go:141] libmachine: (force-systemd-flag-163167)   </features>
	I0703 23:58:59.400858   57566 main.go:141] libmachine: (force-systemd-flag-163167)   <cpu mode='host-passthrough'>
	I0703 23:58:59.400880   57566 main.go:141] libmachine: (force-systemd-flag-163167)   
	I0703 23:58:59.400891   57566 main.go:141] libmachine: (force-systemd-flag-163167)   </cpu>
	I0703 23:58:59.400899   57566 main.go:141] libmachine: (force-systemd-flag-163167)   <os>
	I0703 23:58:59.400911   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <type>hvm</type>
	I0703 23:58:59.400920   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <boot dev='cdrom'/>
	I0703 23:58:59.400931   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <boot dev='hd'/>
	I0703 23:58:59.400945   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <bootmenu enable='no'/>
	I0703 23:58:59.400955   57566 main.go:141] libmachine: (force-systemd-flag-163167)   </os>
	I0703 23:58:59.400963   57566 main.go:141] libmachine: (force-systemd-flag-163167)   <devices>
	I0703 23:58:59.400975   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <disk type='file' device='cdrom'>
	I0703 23:58:59.401004   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/force-systemd-flag-163167/boot2docker.iso'/>
	I0703 23:58:59.401028   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <target dev='hdc' bus='scsi'/>
	I0703 23:58:59.401045   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <readonly/>
	I0703 23:58:59.401061   57566 main.go:141] libmachine: (force-systemd-flag-163167)     </disk>
	I0703 23:58:59.401077   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <disk type='file' device='disk'>
	I0703 23:58:59.401089   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0703 23:58:59.401102   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/force-systemd-flag-163167/force-systemd-flag-163167.rawdisk'/>
	I0703 23:58:59.401112   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <target dev='hda' bus='virtio'/>
	I0703 23:58:59.401122   57566 main.go:141] libmachine: (force-systemd-flag-163167)     </disk>
	I0703 23:58:59.401135   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <interface type='network'>
	I0703 23:58:59.401149   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <source network='mk-force-systemd-flag-163167'/>
	I0703 23:58:59.401167   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <model type='virtio'/>
	I0703 23:58:59.401178   57566 main.go:141] libmachine: (force-systemd-flag-163167)     </interface>
	I0703 23:58:59.401185   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <interface type='network'>
	I0703 23:58:59.401192   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <source network='default'/>
	I0703 23:58:59.401201   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <model type='virtio'/>
	I0703 23:58:59.401213   57566 main.go:141] libmachine: (force-systemd-flag-163167)     </interface>
	I0703 23:58:59.401223   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <serial type='pty'>
	I0703 23:58:59.401234   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <target port='0'/>
	I0703 23:58:59.401246   57566 main.go:141] libmachine: (force-systemd-flag-163167)     </serial>
	I0703 23:58:59.401260   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <console type='pty'>
	I0703 23:58:59.401272   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <target type='serial' port='0'/>
	I0703 23:58:59.401281   57566 main.go:141] libmachine: (force-systemd-flag-163167)     </console>
	I0703 23:58:59.401287   57566 main.go:141] libmachine: (force-systemd-flag-163167)     <rng model='virtio'>
	I0703 23:58:59.401297   57566 main.go:141] libmachine: (force-systemd-flag-163167)       <backend model='random'>/dev/random</backend>
	I0703 23:58:59.401309   57566 main.go:141] libmachine: (force-systemd-flag-163167)     </rng>
	I0703 23:58:59.401316   57566 main.go:141] libmachine: (force-systemd-flag-163167)     
	I0703 23:58:59.401328   57566 main.go:141] libmachine: (force-systemd-flag-163167)     
	I0703 23:58:59.401342   57566 main.go:141] libmachine: (force-systemd-flag-163167)   </devices>
	I0703 23:58:59.401353   57566 main.go:141] libmachine: (force-systemd-flag-163167) </domain>
	I0703 23:58:59.401362   57566 main.go:141] libmachine: (force-systemd-flag-163167) 
	I0703 23:58:59.405781   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:d0:8f:95 in network default
	I0703 23:58:59.406383   57566 main.go:141] libmachine: (force-systemd-flag-163167) Ensuring networks are active...
	I0703 23:58:59.406416   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:58:59.407082   57566 main.go:141] libmachine: (force-systemd-flag-163167) Ensuring network default is active
	I0703 23:58:59.407396   57566 main.go:141] libmachine: (force-systemd-flag-163167) Ensuring network mk-force-systemd-flag-163167 is active
	I0703 23:58:59.408133   57566 main.go:141] libmachine: (force-systemd-flag-163167) Getting domain xml...
	I0703 23:58:59.408940   57566 main.go:141] libmachine: (force-systemd-flag-163167) Creating domain...
	I0703 23:58:59.641908   57192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0703 23:58:59.656324   57192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0703 23:58:59.776567   57192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0703 23:58:59.885999   57192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0703 23:58:59.900557   57192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 23:58:59.921211   57192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0703 23:58:59.921261   57192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:58:59.933116   57192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0703 23:58:59.933173   57192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:58:59.945348   57192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:58:59.957311   57192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:58:59.971208   57192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 23:58:59.985477   57192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:58:59.997747   57192 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0703 23:59:00.017657   57192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
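	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly this fragment (a reconstruction from the logged commands, not output captured from the guest):
	  pause_image = "registry.k8s.io/pause:3.9"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]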
	I0703 23:59:00.030004   57192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 23:59:00.042606   57192 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0703 23:59:00.042657   57192 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0703 23:59:00.059516   57192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0703 23:59:00.071318   57192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:59:00.207164   57192 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0703 23:59:00.356819   57192 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0703 23:59:00.356904   57192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0703 23:59:00.363142   57192 start.go:562] Will wait 60s for crictl version
	I0703 23:59:00.363185   57192 ssh_runner.go:195] Run: which crictl
	I0703 23:59:00.367814   57192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 23:59:00.417887   57192 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0703 23:59:00.417949   57192 ssh_runner.go:195] Run: crio --version
	I0703 23:59:00.455339   57192 ssh_runner.go:195] Run: crio --version
	I0703 23:59:00.492828   57192 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0703 23:58:56.644203   54840 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0703 23:58:56.655181   54840 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0703 23:58:56.675668   54840 system_pods.go:43] waiting for kube-system pods to appear ...
	I0703 23:58:56.685835   54840 system_pods.go:59] 6 kube-system pods found
	I0703 23:58:56.685878   54840 system_pods.go:61] "coredns-7db6d8ff4d-sr5wm" [9b7401eb-5d71-440b-ac85-f1a3ab07de21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0703 23:58:56.685899   54840 system_pods.go:61] "etcd-pause-672261" [aca0565e-222d-4d64-8728-4153b71d62ff] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0703 23:58:56.685912   54840 system_pods.go:61] "kube-apiserver-pause-672261" [afe4e4f4-5332-4b1c-8022-04ba3af294d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0703 23:58:56.685922   54840 system_pods.go:61] "kube-controller-manager-pause-672261" [5939effa-c3e4-4c39-96b0-b61dd91d441b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0703 23:58:56.685934   54840 system_pods.go:61] "kube-proxy-mwcv2" [ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0703 23:58:56.685943   54840 system_pods.go:61] "kube-scheduler-pause-672261" [173110a4-77a8-4fa1-8af3-5fef2d7fb7c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0703 23:58:56.685954   54840 system_pods.go:74] duration metric: took 10.264366ms to wait for pod list to return data ...
	I0703 23:58:56.685966   54840 node_conditions.go:102] verifying NodePressure condition ...
	I0703 23:58:56.689513   54840 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 23:58:56.689551   54840 node_conditions.go:123] node cpu capacity is 2
	I0703 23:58:56.689562   54840 node_conditions.go:105] duration metric: took 3.587693ms to run NodePressure ...
	I0703 23:58:56.689583   54840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0703 23:58:56.956340   54840 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0703 23:58:56.961857   54840 kubeadm.go:733] kubelet initialised
	I0703 23:58:56.961882   54840 kubeadm.go:734] duration metric: took 5.516107ms waiting for restarted kubelet to initialise ...
	I0703 23:58:56.961889   54840 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 23:58:56.967400   54840 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-sr5wm" in "kube-system" namespace to be "Ready" ...
	I0703 23:58:58.976088   54840 pod_ready.go:102] pod "coredns-7db6d8ff4d-sr5wm" in "kube-system" namespace has status "Ready":"False"
	I0703 23:59:00.477618   54840 pod_ready.go:92] pod "coredns-7db6d8ff4d-sr5wm" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:00.477648   54840 pod_ready.go:81] duration metric: took 3.510211849s for pod "coredns-7db6d8ff4d-sr5wm" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:00.477661   54840 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:00.493984   57192 main.go:141] libmachine: (cert-expiration-979438) Calling .GetIP
	I0703 23:59:00.496931   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:59:00.497382   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:40:ca", ip: ""} in network mk-cert-expiration-979438: {Iface:virbr2 ExpiryTime:2024-07-04 00:58:49 +0000 UTC Type:0 Mac:52:54:00:0a:40:ca Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:cert-expiration-979438 Clientid:01:52:54:00:0a:40:ca}
	I0703 23:59:00.497404   57192 main.go:141] libmachine: (cert-expiration-979438) DBG | domain cert-expiration-979438 has defined IP address 192.168.50.228 and MAC address 52:54:00:0a:40:ca in network mk-cert-expiration-979438
	I0703 23:59:00.497650   57192 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0703 23:59:00.502422   57192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:59:00.517913   57192 kubeadm.go:877] updating cluster {Name:cert-expiration-979438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:cert-expiration-979438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.228 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0703 23:59:00.518027   57192 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 23:59:00.518076   57192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:59:00.553082   57192 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0703 23:59:00.553142   57192 ssh_runner.go:195] Run: which lz4
	I0703 23:59:00.557593   57192 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0703 23:59:00.562226   57192 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0703 23:59:00.562262   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0703 23:59:02.138094   57192 crio.go:462] duration metric: took 1.580554394s to copy over tarball
	I0703 23:59:02.138172   57192 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0703 23:59:04.462141   57192 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.323934507s)
	I0703 23:59:04.462158   57192 crio.go:469] duration metric: took 2.324044467s to extract the tarball
	I0703 23:59:04.462164   57192 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0703 23:59:04.502953   57192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0703 23:59:04.558820   57192 crio.go:514] all images are preloaded for cri-o runtime.
	I0703 23:59:04.558832   57192 cache_images.go:84] Images are preloaded, skipping loading
	I0703 23:59:04.558841   57192 kubeadm.go:928] updating node { 192.168.50.228 8443 v1.30.2 crio true true} ...
	I0703 23:59:04.558957   57192 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-979438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:cert-expiration-979438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0703 23:59:04.559029   57192 ssh_runner.go:195] Run: crio config
	I0703 23:59:04.611916   57192 cni.go:84] Creating CNI manager for ""
	I0703 23:59:04.611927   57192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 23:59:04.611936   57192 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0703 23:59:04.611955   57192 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.228 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-979438 NodeName:cert-expiration-979438 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0703 23:59:04.612083   57192 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-979438"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0703 23:59:04.612133   57192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0703 23:59:04.624575   57192 binaries.go:44] Found k8s binaries, skipping transfer
	I0703 23:59:04.624643   57192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0703 23:59:00.707946   57566 main.go:141] libmachine: (force-systemd-flag-163167) Waiting to get IP...
	I0703 23:59:00.708961   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:00.709504   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:00.709530   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:00.709479   57703 retry.go:31] will retry after 267.946778ms: waiting for machine to come up
	I0703 23:59:00.978994   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:00.979670   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:00.979698   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:00.979571   57703 retry.go:31] will retry after 345.324288ms: waiting for machine to come up
	I0703 23:59:01.326285   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:01.326874   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:01.326906   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:01.326833   57703 retry.go:31] will retry after 419.499046ms: waiting for machine to come up
	I0703 23:59:01.748567   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:01.749174   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:01.749203   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:01.749124   57703 retry.go:31] will retry after 556.137903ms: waiting for machine to come up
	I0703 23:59:02.306918   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:02.307506   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:02.307537   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:02.307460   57703 retry.go:31] will retry after 649.917041ms: waiting for machine to come up
	I0703 23:59:02.959445   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:02.960038   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:02.960084   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:02.960003   57703 retry.go:31] will retry after 794.895153ms: waiting for machine to come up
	I0703 23:59:03.757110   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:03.757541   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:03.757582   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:03.757482   57703 retry.go:31] will retry after 776.803905ms: waiting for machine to come up
	I0703 23:59:04.535906   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:04.536431   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:04.536467   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:04.536378   57703 retry.go:31] will retry after 1.082302032s: waiting for machine to come up
	I0703 23:59:04.637249   57192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0703 23:59:04.656771   57192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 23:59:04.679226   57192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0703 23:59:04.700519   57192 ssh_runner.go:195] Run: grep 192.168.50.228	control-plane.minikube.internal$ /etc/hosts
	I0703 23:59:04.704927   57192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0703 23:59:04.718495   57192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:59:04.841871   57192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:59:04.859582   57192 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438 for IP: 192.168.50.228
	I0703 23:59:04.859595   57192 certs.go:194] generating shared ca certs ...
	I0703 23:59:04.859611   57192 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:59:04.859765   57192 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0703 23:59:04.859816   57192 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0703 23:59:04.859822   57192 certs.go:256] generating profile certs ...
	I0703 23:59:04.859901   57192 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/client.key
	I0703 23:59:04.859915   57192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/client.crt with IP's: []
	I0703 23:59:04.945336   57192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/client.crt ...
	I0703 23:59:04.945351   57192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/client.crt: {Name:mk399cb2dae8105ed1902e0f47263082fe5df105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:59:04.945517   57192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/client.key ...
	I0703 23:59:04.945524   57192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/client.key: {Name:mkee6fbeada8ea37945b7ad3f308e3c43c86f1ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:59:04.945613   57192 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.key.a46cd043
	I0703 23:59:04.945629   57192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.crt.a46cd043 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.228]
	I0703 23:59:05.026039   57192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.crt.a46cd043 ...
	I0703 23:59:05.026052   57192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.crt.a46cd043: {Name:mkb96166ec96153be78e8225d04bd1f39244918a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:59:05.026221   57192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.key.a46cd043 ...
	I0703 23:59:05.026231   57192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.key.a46cd043: {Name:mk58b6d72e36e55b25d24dee639b49b1c44909aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:59:05.026331   57192 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.crt.a46cd043 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.crt
	I0703 23:59:05.026440   57192 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.key.a46cd043 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.key
	I0703 23:59:05.026490   57192 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/proxy-client.key
	I0703 23:59:05.026509   57192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/proxy-client.crt with IP's: []
	I0703 23:59:05.094930   57192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/proxy-client.crt ...
	I0703 23:59:05.094945   57192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/proxy-client.crt: {Name:mk416436ec02cd70be288c16c8458158d55d19b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:59:05.095115   57192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/proxy-client.key ...
	I0703 23:59:05.095126   57192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/proxy-client.key: {Name:mk3dc972ebe25f725c53ad4ddd6fe0a0001e44ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:59:05.095319   57192 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0703 23:59:05.095352   57192 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0703 23:59:05.095358   57192 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0703 23:59:05.095378   57192 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0703 23:59:05.095399   57192 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0703 23:59:05.095417   57192 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0703 23:59:05.095449   57192 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0703 23:59:05.096016   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 23:59:05.126388   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 23:59:05.153824   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 23:59:05.180713   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0703 23:59:05.207459   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0703 23:59:05.233606   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0703 23:59:05.260548   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 23:59:05.287767   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/cert-expiration-979438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0703 23:59:05.319759   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0703 23:59:05.350262   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 23:59:05.379006   57192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0703 23:59:05.405308   57192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0703 23:59:05.425155   57192 ssh_runner.go:195] Run: openssl version
	I0703 23:59:05.431716   57192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0703 23:59:05.443776   57192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0703 23:59:05.448814   57192 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0703 23:59:05.448870   57192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0703 23:59:05.455108   57192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0703 23:59:05.466618   57192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0703 23:59:05.479147   57192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0703 23:59:05.484646   57192 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0703 23:59:05.484706   57192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0703 23:59:05.493090   57192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0703 23:59:05.504780   57192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 23:59:05.516485   57192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:59:05.521346   57192 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:59:05.521422   57192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 23:59:05.527885   57192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0703 23:59:05.540541   57192 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 23:59:05.544976   57192 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0703 23:59:05.545025   57192 kubeadm.go:391] StartCluster: {Name:cert-expiration-979438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:cert-expiration-979438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.228 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:59:05.545103   57192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0703 23:59:05.545153   57192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0703 23:59:05.593863   57192 cri.go:89] found id: ""
	I0703 23:59:05.593920   57192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0703 23:59:05.608699   57192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0703 23:59:05.619514   57192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0703 23:59:05.630907   57192 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0703 23:59:05.630918   57192 kubeadm.go:156] found existing configuration files:
	
	I0703 23:59:05.630970   57192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0703 23:59:05.646026   57192 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0703 23:59:05.646082   57192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0703 23:59:05.657909   57192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0703 23:59:05.672230   57192 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0703 23:59:05.672285   57192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0703 23:59:05.689629   57192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0703 23:59:05.705297   57192 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0703 23:59:05.705354   57192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0703 23:59:05.721441   57192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0703 23:59:05.740447   57192 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0703 23:59:05.740494   57192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0703 23:59:05.753170   57192 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0703 23:59:05.887687   57192 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0703 23:59:05.887756   57192 kubeadm.go:309] [preflight] Running pre-flight checks
	I0703 23:59:06.016203   57192 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0703 23:59:06.016340   57192 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0703 23:59:06.016482   57192 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0703 23:59:06.253973   57192 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0703 23:59:02.485248   54840 pod_ready.go:102] pod "etcd-pause-672261" in "kube-system" namespace has status "Ready":"False"
	I0703 23:59:04.985945   54840 pod_ready.go:102] pod "etcd-pause-672261" in "kube-system" namespace has status "Ready":"False"
	I0703 23:59:06.127479   54840 pod_ready.go:92] pod "etcd-pause-672261" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:06.127513   54840 pod_ready.go:81] duration metric: took 5.64984387s for pod "etcd-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:06.127526   54840 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:06.381241   57192 out.go:204]   - Generating certificates and keys ...
	I0703 23:59:06.381381   57192 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0703 23:59:06.381481   57192 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0703 23:59:06.381584   57192 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0703 23:59:06.758975   57192 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0703 23:59:06.994052   57192 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0703 23:59:07.065551   57192 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0703 23:59:07.234326   57192 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0703 23:59:07.234606   57192 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-979438 localhost] and IPs [192.168.50.228 127.0.0.1 ::1]
	I0703 23:59:07.349792   57192 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0703 23:59:07.350114   57192 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-979438 localhost] and IPs [192.168.50.228 127.0.0.1 ::1]
	I0703 23:59:07.423173   57192 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0703 23:59:07.534649   57192 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0703 23:59:07.870471   57192 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0703 23:59:07.870727   57192 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0703 23:59:08.027173   57192 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0703 23:59:08.327411   57192 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0703 23:59:08.646463   57192 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0703 23:59:08.719933   57192 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0703 23:59:08.994429   57192 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0703 23:59:08.995194   57192 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0703 23:59:08.998531   57192 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0703 23:59:09.000372   57192 out.go:204]   - Booting up control plane ...
	I0703 23:59:09.000533   57192 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0703 23:59:09.000631   57192 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0703 23:59:09.000721   57192 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0703 23:59:09.018888   57192 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0703 23:59:09.021117   57192 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0703 23:59:09.021444   57192 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0703 23:59:09.192671   57192 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0703 23:59:09.192803   57192 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0703 23:59:05.620585   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:05.621090   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:05.621120   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:05.621037   57703 retry.go:31] will retry after 1.318972199s: waiting for machine to come up
	I0703 23:59:06.942425   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:06.942966   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:06.942996   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:06.942901   57703 retry.go:31] will retry after 1.562982371s: waiting for machine to come up
	I0703 23:59:08.507923   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:08.508467   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:08.508496   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:08.508394   57703 retry.go:31] will retry after 2.243399611s: waiting for machine to come up
	I0703 23:59:08.165090   54840 pod_ready.go:102] pod "kube-apiserver-pause-672261" in "kube-system" namespace has status "Ready":"False"
	I0703 23:59:09.136886   54840 pod_ready.go:92] pod "kube-apiserver-pause-672261" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:09.136915   54840 pod_ready.go:81] duration metric: took 3.009380676s for pod "kube-apiserver-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:09.136930   54840 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.145543   54840 pod_ready.go:92] pod "kube-controller-manager-pause-672261" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:11.145577   54840 pod_ready.go:81] duration metric: took 2.008637058s for pod "kube-controller-manager-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.145592   54840 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mwcv2" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.152306   54840 pod_ready.go:92] pod "kube-proxy-mwcv2" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:11.152334   54840 pod_ready.go:81] duration metric: took 6.732867ms for pod "kube-proxy-mwcv2" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.152345   54840 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.158254   54840 pod_ready.go:92] pod "kube-scheduler-pause-672261" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:11.158280   54840 pod_ready.go:81] duration metric: took 5.926224ms for pod "kube-scheduler-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.158290   54840 pod_ready.go:38] duration metric: took 14.196391498s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 23:59:11.158310   54840 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0703 23:59:11.174513   54840 ops.go:34] apiserver oom_adj: -16
	I0703 23:59:11.174538   54840 kubeadm.go:591] duration metric: took 25.376911693s to restartPrimaryControlPlane
	I0703 23:59:11.174551   54840 kubeadm.go:393] duration metric: took 25.510117163s to StartCluster
	I0703 23:59:11.174571   54840 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:59:11.174653   54840 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:59:11.175547   54840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:59:11.175822   54840 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0703 23:59:11.175903   54840 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0703 23:59:11.176087   54840 config.go:182] Loaded profile config "pause-672261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:59:11.177603   54840 out.go:177] * Enabled addons: 
	I0703 23:59:11.177621   54840 out.go:177] * Verifying Kubernetes components...
	I0703 23:59:11.178771   54840 addons.go:510] duration metric: took 2.89453ms for enable addons: enabled=[]
	I0703 23:59:11.178808   54840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 23:59:11.350953   54840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 23:59:11.376486   54840 node_ready.go:35] waiting up to 6m0s for node "pause-672261" to be "Ready" ...
	I0703 23:59:11.380948   54840 node_ready.go:49] node "pause-672261" has status "Ready":"True"
	I0703 23:59:11.380973   54840 node_ready.go:38] duration metric: took 4.447717ms for node "pause-672261" to be "Ready" ...
	I0703 23:59:11.380985   54840 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 23:59:11.387275   54840 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sr5wm" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.397099   54840 pod_ready.go:92] pod "coredns-7db6d8ff4d-sr5wm" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:11.397138   54840 pod_ready.go:81] duration metric: took 9.825349ms for pod "coredns-7db6d8ff4d-sr5wm" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.397152   54840 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.541284   54840 pod_ready.go:92] pod "etcd-pause-672261" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:11.541318   54840 pod_ready.go:81] duration metric: took 144.157412ms for pod "etcd-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.541331   54840 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.941803   54840 pod_ready.go:92] pod "kube-apiserver-pause-672261" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:11.941846   54840 pod_ready.go:81] duration metric: took 400.497476ms for pod "kube-apiserver-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:11.941861   54840 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:12.342069   54840 pod_ready.go:92] pod "kube-controller-manager-pause-672261" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:12.342104   54840 pod_ready.go:81] duration metric: took 400.233619ms for pod "kube-controller-manager-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:12.342118   54840 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mwcv2" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:12.742488   54840 pod_ready.go:92] pod "kube-proxy-mwcv2" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:12.742511   54840 pod_ready.go:81] duration metric: took 400.386168ms for pod "kube-proxy-mwcv2" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:12.742521   54840 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:13.142120   54840 pod_ready.go:92] pod "kube-scheduler-pause-672261" in "kube-system" namespace has status "Ready":"True"
	I0703 23:59:13.142148   54840 pod_ready.go:81] duration metric: took 399.617536ms for pod "kube-scheduler-pause-672261" in "kube-system" namespace to be "Ready" ...
	I0703 23:59:13.142158   54840 pod_ready.go:38] duration metric: took 1.761161854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 23:59:13.142184   54840 api_server.go:52] waiting for apiserver process to appear ...
	I0703 23:59:13.142248   54840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:59:13.157984   54840 api_server.go:72] duration metric: took 1.982129135s to wait for apiserver process to appear ...
	I0703 23:59:13.158007   54840 api_server.go:88] waiting for apiserver healthz status ...
	I0703 23:59:13.158025   54840 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0703 23:59:13.163209   54840 api_server.go:279] https://192.168.61.246:8443/healthz returned 200:
	ok
	I0703 23:59:13.164528   54840 api_server.go:141] control plane version: v1.30.2
	I0703 23:59:13.164552   54840 api_server.go:131] duration metric: took 6.538504ms to wait for apiserver health ...
	I0703 23:59:13.164563   54840 system_pods.go:43] waiting for kube-system pods to appear ...
	I0703 23:59:13.345519   54840 system_pods.go:59] 6 kube-system pods found
	I0703 23:59:13.345545   54840 system_pods.go:61] "coredns-7db6d8ff4d-sr5wm" [9b7401eb-5d71-440b-ac85-f1a3ab07de21] Running
	I0703 23:59:13.345549   54840 system_pods.go:61] "etcd-pause-672261" [aca0565e-222d-4d64-8728-4153b71d62ff] Running
	I0703 23:59:13.345553   54840 system_pods.go:61] "kube-apiserver-pause-672261" [afe4e4f4-5332-4b1c-8022-04ba3af294d8] Running
	I0703 23:59:13.345556   54840 system_pods.go:61] "kube-controller-manager-pause-672261" [5939effa-c3e4-4c39-96b0-b61dd91d441b] Running
	I0703 23:59:13.345559   54840 system_pods.go:61] "kube-proxy-mwcv2" [ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2] Running
	I0703 23:59:13.345563   54840 system_pods.go:61] "kube-scheduler-pause-672261" [173110a4-77a8-4fa1-8af3-5fef2d7fb7c3] Running
	I0703 23:59:13.345569   54840 system_pods.go:74] duration metric: took 180.998586ms to wait for pod list to return data ...
	I0703 23:59:13.345576   54840 default_sa.go:34] waiting for default service account to be created ...
	I0703 23:59:13.542349   54840 default_sa.go:45] found service account: "default"
	I0703 23:59:13.542381   54840 default_sa.go:55] duration metric: took 196.797051ms for default service account to be created ...
	I0703 23:59:13.542393   54840 system_pods.go:116] waiting for k8s-apps to be running ...
	I0703 23:59:13.746323   54840 system_pods.go:86] 6 kube-system pods found
	I0703 23:59:13.746369   54840 system_pods.go:89] "coredns-7db6d8ff4d-sr5wm" [9b7401eb-5d71-440b-ac85-f1a3ab07de21] Running
	I0703 23:59:13.746377   54840 system_pods.go:89] "etcd-pause-672261" [aca0565e-222d-4d64-8728-4153b71d62ff] Running
	I0703 23:59:13.746382   54840 system_pods.go:89] "kube-apiserver-pause-672261" [afe4e4f4-5332-4b1c-8022-04ba3af294d8] Running
	I0703 23:59:13.746387   54840 system_pods.go:89] "kube-controller-manager-pause-672261" [5939effa-c3e4-4c39-96b0-b61dd91d441b] Running
	I0703 23:59:13.746393   54840 system_pods.go:89] "kube-proxy-mwcv2" [ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2] Running
	I0703 23:59:13.746398   54840 system_pods.go:89] "kube-scheduler-pause-672261" [173110a4-77a8-4fa1-8af3-5fef2d7fb7c3] Running
	I0703 23:59:13.746406   54840 system_pods.go:126] duration metric: took 204.007379ms to wait for k8s-apps to be running ...
	I0703 23:59:13.746414   54840 system_svc.go:44] waiting for kubelet service to be running ....
	I0703 23:59:13.746464   54840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:59:13.761982   54840 system_svc.go:56] duration metric: took 15.557058ms WaitForService to wait for kubelet
	I0703 23:59:13.762013   54840 kubeadm.go:576] duration metric: took 2.586161436s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 23:59:13.762052   54840 node_conditions.go:102] verifying NodePressure condition ...
	I0703 23:59:13.941957   54840 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 23:59:13.941990   54840 node_conditions.go:123] node cpu capacity is 2
	I0703 23:59:13.942003   54840 node_conditions.go:105] duration metric: took 179.944115ms to run NodePressure ...
	I0703 23:59:13.942016   54840 start.go:240] waiting for startup goroutines ...
	I0703 23:59:13.942041   54840 start.go:245] waiting for cluster config update ...
	I0703 23:59:13.942061   54840 start.go:254] writing updated cluster config ...
	I0703 23:59:13.942389   54840 ssh_runner.go:195] Run: rm -f paused
	I0703 23:59:13.992086   54840 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0703 23:59:13.993610   54840 out.go:177] * Done! kubectl is now configured to use "pause-672261" cluster and "default" namespace by default
	I0703 23:59:09.705114   57192 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 512.460121ms
	I0703 23:59:09.705207   57192 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0703 23:59:10.753196   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:10.753729   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:10.753755   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:10.753681   57703 retry.go:31] will retry after 3.058120731s: waiting for machine to come up
	I0703 23:59:13.815869   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | domain force-systemd-flag-163167 has defined MAC address 52:54:00:45:7f:dc in network mk-force-systemd-flag-163167
	I0703 23:59:13.816367   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | unable to find current IP address of domain force-systemd-flag-163167 in network mk-force-systemd-flag-163167
	I0703 23:59:13.816390   57566 main.go:141] libmachine: (force-systemd-flag-163167) DBG | I0703 23:59:13.816323   57703 retry.go:31] will retry after 4.487596991s: waiting for machine to come up
	I0703 23:59:15.208880   57192 kubeadm.go:309] [api-check] The API server is healthy after 5.506369224s
	I0703 23:59:15.225644   57192 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0703 23:59:15.242068   57192 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0703 23:59:15.274219   57192 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0703 23:59:15.274483   57192 kubeadm.go:309] [mark-control-plane] Marking the node cert-expiration-979438 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0703 23:59:15.288774   57192 kubeadm.go:309] [bootstrap-token] Using token: zak19g.y4eidxbtncb6htmh
	I0703 23:59:15.289944   57192 out.go:204]   - Configuring RBAC rules ...
	I0703 23:59:15.290086   57192 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0703 23:59:15.302246   57192 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0703 23:59:15.310547   57192 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0703 23:59:15.314735   57192 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0703 23:59:15.318926   57192 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0703 23:59:15.323110   57192 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0703 23:59:15.616317   57192 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0703 23:59:16.127644   57192 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0703 23:59:16.618508   57192 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0703 23:59:16.620085   57192 kubeadm.go:309] 
	I0703 23:59:16.620165   57192 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0703 23:59:16.620175   57192 kubeadm.go:309] 
	I0703 23:59:16.620240   57192 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0703 23:59:16.620243   57192 kubeadm.go:309] 
	I0703 23:59:16.620263   57192 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0703 23:59:16.620312   57192 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0703 23:59:16.620354   57192 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0703 23:59:16.620356   57192 kubeadm.go:309] 
	I0703 23:59:16.620395   57192 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0703 23:59:16.620398   57192 kubeadm.go:309] 
	I0703 23:59:16.620445   57192 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0703 23:59:16.620448   57192 kubeadm.go:309] 
	I0703 23:59:16.620511   57192 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0703 23:59:16.620593   57192 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0703 23:59:16.620671   57192 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0703 23:59:16.620676   57192 kubeadm.go:309] 
	I0703 23:59:16.620792   57192 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0703 23:59:16.620872   57192 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0703 23:59:16.620875   57192 kubeadm.go:309] 
	I0703 23:59:16.620999   57192 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token zak19g.y4eidxbtncb6htmh \
	I0703 23:59:16.621108   57192 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 \
	I0703 23:59:16.621129   57192 kubeadm.go:309] 	--control-plane 
	I0703 23:59:16.621134   57192 kubeadm.go:309] 
	I0703 23:59:16.621238   57192 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0703 23:59:16.621243   57192 kubeadm.go:309] 
	I0703 23:59:16.621335   57192 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token zak19g.y4eidxbtncb6htmh \
	I0703 23:59:16.621490   57192 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 
	I0703 23:59:16.622645   57192 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0703 23:59:16.622665   57192 cni.go:84] Creating CNI manager for ""
	I0703 23:59:16.622672   57192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 23:59:16.624100   57192 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.044600520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720051157044577806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=072e6760-2675-40c3-b68d-3b34f731f56a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.045167017Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5acb0224-35c7-4771-a361-3e23f088c67e name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.045244195Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5acb0224-35c7-4771-a361-3e23f088c67e name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.045483367Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a2f3ab0edd24f1ccaae2b0559368a3d39cb960d55b0e9c2dce6fc9d58b6b10c,PodSandboxId:049c993bc44f68f0782b69d22cbdf421a15c58a5dc1f890f067a11841ef49709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051135870573606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2,},Annotations:map[string]string{io.kubernetes.container.hash: df2a65ab,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da5d6426287b94cb7ed898d436f418e18bd9e76a2738a5688c6b9834dfefda0,PodSandboxId:69cb978a908bb5ce808b99c93843a4c835833d30831b86be230173341ea98739,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051135857373905,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sr5wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7401eb-5d71-440b-ac85-f1a3ab07de21,},Annotations:map[string]string{io.kubernetes.container.hash: 99a783cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd38fef524a488bdebc66021b178e98588666ebb95859ce13e148b0a1079282f,PodSandboxId:421a1a187b300c1a8d4318ae13d61c8040d675d6c54205e589eac77ca898eaae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051132091263806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2a228042c0b7cb0706b8ad93d94fda8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec9605f6c153b5ca6c125367fafd9970bb84b2363910e7fa2122f505a50e576,PodSandboxId:0646f7df6749b7cfa30f0e1a8b25f70b8af80f2f920ace0ab6f9751e9d577cb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051132081223807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
67f9ef2fae22af1888be0a8dc1afcc1,},Annotations:map[string]string{io.kubernetes.container.hash: bff1e266,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b6112031159bb3d610ddd89f8128b4df4459509810094b0bc5fdb2f34aa20d,PodSandboxId:79145a37df3dee229cb1e73d81c60e1270b73b51dd5325ae2bcd553bac7b4c9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051132063252982,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c053fd1ff0fcfc507
a0469ecf0ca23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44fe9aae4f1190710a4463d895f2997fe66073037c12b7657990aacaa99ac387,PodSandboxId:919f68f8d50d379d1287e1b1042b10f2e88e9c14b94690ee8b8dd0a020e84c55,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051132055088524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0751c40357ae22a4b6fae5c7806f18d4,},Annotations:map[string]string{io.
kubernetes.container.hash: 11bcebf4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7580dd19096e8dfdeedb31871cc6752208e7b1688abfc59c76d9d64fbdb5b18f,PodSandboxId:049c993bc44f68f0782b69d22cbdf421a15c58a5dc1f890f067a11841ef49709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720051124793520400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2,},Annotations:map[string]string{io.kubernetes.container.hash: df2a65a
b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c33b979b20f24726f0e533d046fcc39df3993b40283d1d086266d97d5bc34a4,PodSandboxId:79145a37df3dee229cb1e73d81c60e1270b73b51dd5325ae2bcd553bac7b4c9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720051124639605405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c053fd1ff0fcfc507a0469ecf0ca23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.cont
ainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c938b9c08ac5e4edee6355982405f2f340e762c0dae0d9e19ceaf3d5f9163fa0,PodSandboxId:0646f7df6749b7cfa30f0e1a8b25f70b8af80f2f920ace0ab6f9751e9d577cb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051124578971070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167f9ef2fae22af1888be0a8dc1afcc1,},Annotations:map[string]string{io.kubernetes.container.hash: bff1e266,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5083f2257a546bfd070658447229d9a46babe2aa2575ac4125da0983feb5d4ce,PodSandboxId:919f68f8d50d379d1287e1b1042b10f2e88e9c14b94690ee8b8dd0a020e84c55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720051124361987731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0751c40357ae22a4b6fae5c7806f18d4,},Annotations:map[string]string{io.kubernetes.container.hash: 11bcebf4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c051fb73177f2b8f53a64318b2c9c40a7d3325c2de510fab240ab9e74127c500,PodSandboxId:421a1a187b300c1a8d4318ae13d61c8040d675d6c54205e589eac77ca898eaae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720051124462695459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a228042c0b7cb0706b8ad93d94fda8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e655f54b5cf30a17316bc7ab25e05d48d2099763dea5f1873754c04eee91b0e9,PodSandboxId:998921b1a491cbd279d4a4485de1817288fb83eb4f870ef2c8bed44c66139b90,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720051112169229320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sr5wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7401eb-5d71-440b-ac85-f1a3ab07de21,},Annotations:map[string]string{io.kubernetes.container.hash: 99a783cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5acb0224-35c7-4771-a361-3e23f088c67e name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.095188494Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84c0e0f5-2360-41f3-bafe-6e503d60ea2e name=/runtime.v1.RuntimeService/Version
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.095285163Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84c0e0f5-2360-41f3-bafe-6e503d60ea2e name=/runtime.v1.RuntimeService/Version
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.096388730Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1d00f5d-d168-4f06-952e-65046401e9a3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.096768935Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720051157096743903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1d00f5d-d168-4f06-952e-65046401e9a3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.097462329Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0db6e626-1fa9-4351-b0d6-5e7db53de3ba name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.097517449Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0db6e626-1fa9-4351-b0d6-5e7db53de3ba name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.097762929Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a2f3ab0edd24f1ccaae2b0559368a3d39cb960d55b0e9c2dce6fc9d58b6b10c,PodSandboxId:049c993bc44f68f0782b69d22cbdf421a15c58a5dc1f890f067a11841ef49709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051135870573606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2,},Annotations:map[string]string{io.kubernetes.container.hash: df2a65ab,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da5d6426287b94cb7ed898d436f418e18bd9e76a2738a5688c6b9834dfefda0,PodSandboxId:69cb978a908bb5ce808b99c93843a4c835833d30831b86be230173341ea98739,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051135857373905,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sr5wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7401eb-5d71-440b-ac85-f1a3ab07de21,},Annotations:map[string]string{io.kubernetes.container.hash: 99a783cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd38fef524a488bdebc66021b178e98588666ebb95859ce13e148b0a1079282f,PodSandboxId:421a1a187b300c1a8d4318ae13d61c8040d675d6c54205e589eac77ca898eaae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051132091263806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2a228042c0b7cb0706b8ad93d94fda8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec9605f6c153b5ca6c125367fafd9970bb84b2363910e7fa2122f505a50e576,PodSandboxId:0646f7df6749b7cfa30f0e1a8b25f70b8af80f2f920ace0ab6f9751e9d577cb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051132081223807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
67f9ef2fae22af1888be0a8dc1afcc1,},Annotations:map[string]string{io.kubernetes.container.hash: bff1e266,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b6112031159bb3d610ddd89f8128b4df4459509810094b0bc5fdb2f34aa20d,PodSandboxId:79145a37df3dee229cb1e73d81c60e1270b73b51dd5325ae2bcd553bac7b4c9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051132063252982,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c053fd1ff0fcfc507
a0469ecf0ca23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44fe9aae4f1190710a4463d895f2997fe66073037c12b7657990aacaa99ac387,PodSandboxId:919f68f8d50d379d1287e1b1042b10f2e88e9c14b94690ee8b8dd0a020e84c55,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051132055088524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0751c40357ae22a4b6fae5c7806f18d4,},Annotations:map[string]string{io.
kubernetes.container.hash: 11bcebf4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7580dd19096e8dfdeedb31871cc6752208e7b1688abfc59c76d9d64fbdb5b18f,PodSandboxId:049c993bc44f68f0782b69d22cbdf421a15c58a5dc1f890f067a11841ef49709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720051124793520400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2,},Annotations:map[string]string{io.kubernetes.container.hash: df2a65a
b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c33b979b20f24726f0e533d046fcc39df3993b40283d1d086266d97d5bc34a4,PodSandboxId:79145a37df3dee229cb1e73d81c60e1270b73b51dd5325ae2bcd553bac7b4c9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720051124639605405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c053fd1ff0fcfc507a0469ecf0ca23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.cont
ainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c938b9c08ac5e4edee6355982405f2f340e762c0dae0d9e19ceaf3d5f9163fa0,PodSandboxId:0646f7df6749b7cfa30f0e1a8b25f70b8af80f2f920ace0ab6f9751e9d577cb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051124578971070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167f9ef2fae22af1888be0a8dc1afcc1,},Annotations:map[string]string{io.kubernetes.container.hash: bff1e266,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5083f2257a546bfd070658447229d9a46babe2aa2575ac4125da0983feb5d4ce,PodSandboxId:919f68f8d50d379d1287e1b1042b10f2e88e9c14b94690ee8b8dd0a020e84c55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720051124361987731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0751c40357ae22a4b6fae5c7806f18d4,},Annotations:map[string]string{io.kubernetes.container.hash: 11bcebf4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c051fb73177f2b8f53a64318b2c9c40a7d3325c2de510fab240ab9e74127c500,PodSandboxId:421a1a187b300c1a8d4318ae13d61c8040d675d6c54205e589eac77ca898eaae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720051124462695459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a228042c0b7cb0706b8ad93d94fda8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e655f54b5cf30a17316bc7ab25e05d48d2099763dea5f1873754c04eee91b0e9,PodSandboxId:998921b1a491cbd279d4a4485de1817288fb83eb4f870ef2c8bed44c66139b90,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720051112169229320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sr5wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7401eb-5d71-440b-ac85-f1a3ab07de21,},Annotations:map[string]string{io.kubernetes.container.hash: 99a783cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0db6e626-1fa9-4351-b0d6-5e7db53de3ba name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.147360850Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0728e35c-ab27-4a87-bf6a-654a8736e22d name=/runtime.v1.RuntimeService/Version
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.147466367Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0728e35c-ab27-4a87-bf6a-654a8736e22d name=/runtime.v1.RuntimeService/Version
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.148839844Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3bc096b8-ba0b-450b-a860-a562416d3d2c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.149544365Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720051157149514876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3bc096b8-ba0b-450b-a860-a562416d3d2c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.150510540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2414b5c3-ac67-4129-b35a-1e57767778f6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.150585697Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2414b5c3-ac67-4129-b35a-1e57767778f6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.150898404Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a2f3ab0edd24f1ccaae2b0559368a3d39cb960d55b0e9c2dce6fc9d58b6b10c,PodSandboxId:049c993bc44f68f0782b69d22cbdf421a15c58a5dc1f890f067a11841ef49709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051135870573606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2,},Annotations:map[string]string{io.kubernetes.container.hash: df2a65ab,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da5d6426287b94cb7ed898d436f418e18bd9e76a2738a5688c6b9834dfefda0,PodSandboxId:69cb978a908bb5ce808b99c93843a4c835833d30831b86be230173341ea98739,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051135857373905,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sr5wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7401eb-5d71-440b-ac85-f1a3ab07de21,},Annotations:map[string]string{io.kubernetes.container.hash: 99a783cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd38fef524a488bdebc66021b178e98588666ebb95859ce13e148b0a1079282f,PodSandboxId:421a1a187b300c1a8d4318ae13d61c8040d675d6c54205e589eac77ca898eaae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051132091263806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2a228042c0b7cb0706b8ad93d94fda8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec9605f6c153b5ca6c125367fafd9970bb84b2363910e7fa2122f505a50e576,PodSandboxId:0646f7df6749b7cfa30f0e1a8b25f70b8af80f2f920ace0ab6f9751e9d577cb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051132081223807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
67f9ef2fae22af1888be0a8dc1afcc1,},Annotations:map[string]string{io.kubernetes.container.hash: bff1e266,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b6112031159bb3d610ddd89f8128b4df4459509810094b0bc5fdb2f34aa20d,PodSandboxId:79145a37df3dee229cb1e73d81c60e1270b73b51dd5325ae2bcd553bac7b4c9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051132063252982,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c053fd1ff0fcfc507
a0469ecf0ca23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44fe9aae4f1190710a4463d895f2997fe66073037c12b7657990aacaa99ac387,PodSandboxId:919f68f8d50d379d1287e1b1042b10f2e88e9c14b94690ee8b8dd0a020e84c55,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051132055088524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0751c40357ae22a4b6fae5c7806f18d4,},Annotations:map[string]string{io.
kubernetes.container.hash: 11bcebf4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7580dd19096e8dfdeedb31871cc6752208e7b1688abfc59c76d9d64fbdb5b18f,PodSandboxId:049c993bc44f68f0782b69d22cbdf421a15c58a5dc1f890f067a11841ef49709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720051124793520400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2,},Annotations:map[string]string{io.kubernetes.container.hash: df2a65a
b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c33b979b20f24726f0e533d046fcc39df3993b40283d1d086266d97d5bc34a4,PodSandboxId:79145a37df3dee229cb1e73d81c60e1270b73b51dd5325ae2bcd553bac7b4c9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720051124639605405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c053fd1ff0fcfc507a0469ecf0ca23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.cont
ainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c938b9c08ac5e4edee6355982405f2f340e762c0dae0d9e19ceaf3d5f9163fa0,PodSandboxId:0646f7df6749b7cfa30f0e1a8b25f70b8af80f2f920ace0ab6f9751e9d577cb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051124578971070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167f9ef2fae22af1888be0a8dc1afcc1,},Annotations:map[string]string{io.kubernetes.container.hash: bff1e266,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5083f2257a546bfd070658447229d9a46babe2aa2575ac4125da0983feb5d4ce,PodSandboxId:919f68f8d50d379d1287e1b1042b10f2e88e9c14b94690ee8b8dd0a020e84c55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720051124361987731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0751c40357ae22a4b6fae5c7806f18d4,},Annotations:map[string]string{io.kubernetes.container.hash: 11bcebf4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c051fb73177f2b8f53a64318b2c9c40a7d3325c2de510fab240ab9e74127c500,PodSandboxId:421a1a187b300c1a8d4318ae13d61c8040d675d6c54205e589eac77ca898eaae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720051124462695459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a228042c0b7cb0706b8ad93d94fda8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e655f54b5cf30a17316bc7ab25e05d48d2099763dea5f1873754c04eee91b0e9,PodSandboxId:998921b1a491cbd279d4a4485de1817288fb83eb4f870ef2c8bed44c66139b90,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720051112169229320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sr5wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7401eb-5d71-440b-ac85-f1a3ab07de21,},Annotations:map[string]string{io.kubernetes.container.hash: 99a783cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2414b5c3-ac67-4129-b35a-1e57767778f6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.209791839Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=849c2d66-bbb1-4fdb-8747-e6fdf96b6fc4 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.209919187Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=849c2d66-bbb1-4fdb-8747-e6fdf96b6fc4 name=/runtime.v1.RuntimeService/Version
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.211674398Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df65a2e1-a88c-4284-8a24-1c1257ff3e1b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.212505677Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720051157212463820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df65a2e1-a88c-4284-8a24-1c1257ff3e1b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.213364987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a78a52cc-57a7-4e66-b046-ea640bdec961 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.213446770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a78a52cc-57a7-4e66-b046-ea640bdec961 name=/runtime.v1.RuntimeService/ListContainers
	Jul 03 23:59:17 pause-672261 crio[2732]: time="2024-07-03 23:59:17.213694903Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a2f3ab0edd24f1ccaae2b0559368a3d39cb960d55b0e9c2dce6fc9d58b6b10c,PodSandboxId:049c993bc44f68f0782b69d22cbdf421a15c58a5dc1f890f067a11841ef49709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051135870573606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2,},Annotations:map[string]string{io.kubernetes.container.hash: df2a65ab,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da5d6426287b94cb7ed898d436f418e18bd9e76a2738a5688c6b9834dfefda0,PodSandboxId:69cb978a908bb5ce808b99c93843a4c835833d30831b86be230173341ea98739,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051135857373905,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sr5wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7401eb-5d71-440b-ac85-f1a3ab07de21,},Annotations:map[string]string{io.kubernetes.container.hash: 99a783cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd38fef524a488bdebc66021b178e98588666ebb95859ce13e148b0a1079282f,PodSandboxId:421a1a187b300c1a8d4318ae13d61c8040d675d6c54205e589eac77ca898eaae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051132091263806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2a228042c0b7cb0706b8ad93d94fda8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec9605f6c153b5ca6c125367fafd9970bb84b2363910e7fa2122f505a50e576,PodSandboxId:0646f7df6749b7cfa30f0e1a8b25f70b8af80f2f920ace0ab6f9751e9d577cb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051132081223807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
67f9ef2fae22af1888be0a8dc1afcc1,},Annotations:map[string]string{io.kubernetes.container.hash: bff1e266,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b6112031159bb3d610ddd89f8128b4df4459509810094b0bc5fdb2f34aa20d,PodSandboxId:79145a37df3dee229cb1e73d81c60e1270b73b51dd5325ae2bcd553bac7b4c9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051132063252982,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c053fd1ff0fcfc507
a0469ecf0ca23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44fe9aae4f1190710a4463d895f2997fe66073037c12b7657990aacaa99ac387,PodSandboxId:919f68f8d50d379d1287e1b1042b10f2e88e9c14b94690ee8b8dd0a020e84c55,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051132055088524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0751c40357ae22a4b6fae5c7806f18d4,},Annotations:map[string]string{io.
kubernetes.container.hash: 11bcebf4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7580dd19096e8dfdeedb31871cc6752208e7b1688abfc59c76d9d64fbdb5b18f,PodSandboxId:049c993bc44f68f0782b69d22cbdf421a15c58a5dc1f890f067a11841ef49709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720051124793520400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwcv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2,},Annotations:map[string]string{io.kubernetes.container.hash: df2a65a
b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c33b979b20f24726f0e533d046fcc39df3993b40283d1d086266d97d5bc34a4,PodSandboxId:79145a37df3dee229cb1e73d81c60e1270b73b51dd5325ae2bcd553bac7b4c9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720051124639605405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c053fd1ff0fcfc507a0469ecf0ca23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.cont
ainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c938b9c08ac5e4edee6355982405f2f340e762c0dae0d9e19ceaf3d5f9163fa0,PodSandboxId:0646f7df6749b7cfa30f0e1a8b25f70b8af80f2f920ace0ab6f9751e9d577cb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051124578971070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167f9ef2fae22af1888be0a8dc1afcc1,},Annotations:map[string]string{io.kubernetes.container.hash: bff1e266,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5083f2257a546bfd070658447229d9a46babe2aa2575ac4125da0983feb5d4ce,PodSandboxId:919f68f8d50d379d1287e1b1042b10f2e88e9c14b94690ee8b8dd0a020e84c55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720051124361987731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0751c40357ae22a4b6fae5c7806f18d4,},Annotations:map[string]string{io.kubernetes.container.hash: 11bcebf4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c051fb73177f2b8f53a64318b2c9c40a7d3325c2de510fab240ab9e74127c500,PodSandboxId:421a1a187b300c1a8d4318ae13d61c8040d675d6c54205e589eac77ca898eaae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720051124462695459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a228042c0b7cb0706b8ad93d94fda8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e655f54b5cf30a17316bc7ab25e05d48d2099763dea5f1873754c04eee91b0e9,PodSandboxId:998921b1a491cbd279d4a4485de1817288fb83eb4f870ef2c8bed44c66139b90,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720051112169229320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sr5wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7401eb-5d71-440b-ac85-f1a3ab07de21,},Annotations:map[string]string{io.kubernetes.container.hash: 99a783cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a78a52cc-57a7-4e66-b046-ea640bdec961 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9a2f3ab0edd24       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   21 seconds ago      Running             kube-proxy                3                   049c993bc44f6       kube-proxy-mwcv2
	0da5d6426287b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   21 seconds ago      Running             coredns                   2                   69cb978a908bb       coredns-7db6d8ff4d-sr5wm
	bd38fef524a48       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   25 seconds ago      Running             kube-controller-manager   3                   421a1a187b300       kube-controller-manager-pause-672261
	6ec9605f6c153       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   25 seconds ago      Running             kube-apiserver            3                   0646f7df6749b       kube-apiserver-pause-672261
	a6b6112031159       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   25 seconds ago      Running             kube-scheduler            3                   79145a37df3de       kube-scheduler-pause-672261
	44fe9aae4f119       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   25 seconds ago      Running             etcd                      3                   919f68f8d50d3       etcd-pause-672261
	7580dd19096e8       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   32 seconds ago      Exited              kube-proxy                2                   049c993bc44f6       kube-proxy-mwcv2
	9c33b979b20f2       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   32 seconds ago      Exited              kube-scheduler            2                   79145a37df3de       kube-scheduler-pause-672261
	c938b9c08ac5e       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   32 seconds ago      Exited              kube-apiserver            2                   0646f7df6749b       kube-apiserver-pause-672261
	c051fb73177f2       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   32 seconds ago      Exited              kube-controller-manager   2                   421a1a187b300       kube-controller-manager-pause-672261
	5083f2257a546       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   32 seconds ago      Exited              etcd                      2                   919f68f8d50d3       etcd-pause-672261
	e655f54b5cf30       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   45 seconds ago      Exited              coredns                   1                   998921b1a491c       coredns-7db6d8ff4d-sr5wm
	
	
	==> coredns [0da5d6426287b94cb7ed898d436f418e18bd9e76a2738a5688c6b9834dfefda0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38700 - 46158 "HINFO IN 8105696747826790080.5218847225836727451. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014222341s
	
	
	==> coredns [e655f54b5cf30a17316bc7ab25e05d48d2099763dea5f1873754c04eee91b0e9] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:59315 - 61730 "HINFO IN 1562211779905980592.447755102183035168. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013974878s
	
	
	==> describe nodes <==
	Name:               pause-672261
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-672261
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=pause-672261
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_03T23_57_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 23:57:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-672261
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 23:59:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 23:58:55 +0000   Wed, 03 Jul 2024 23:57:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 23:58:55 +0000   Wed, 03 Jul 2024 23:57:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 23:58:55 +0000   Wed, 03 Jul 2024 23:57:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 23:58:55 +0000   Wed, 03 Jul 2024 23:57:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.246
	  Hostname:    pause-672261
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a6b8b1627dd40d68bc0fedd947da1a8
	  System UUID:                1a6b8b16-27dd-40d6-8bc0-fedd947da1a8
	  Boot ID:                    17106a09-9eae-4375-80ae-fcb34e510ff1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-sr5wm                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     70s
	  kube-system                 etcd-pause-672261                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         84s
	  kube-system                 kube-apiserver-pause-672261             250m (12%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-pause-672261    200m (10%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-mwcv2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-scheduler-pause-672261             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 68s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  NodeAllocatableEnforced  84s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  84s                kubelet          Node pause-672261 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s                kubelet          Node pause-672261 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s                kubelet          Node pause-672261 status is now: NodeHasSufficientPID
	  Normal  Starting                 84s                kubelet          Starting kubelet.
	  Normal  NodeReady                83s                kubelet          Node pause-672261 status is now: NodeReady
	  Normal  RegisteredNode           71s                node-controller  Node pause-672261 event: Registered Node pause-672261 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-672261 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-672261 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-672261 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                 node-controller  Node pause-672261 event: Registered Node pause-672261 in Controller
	
	
	==> dmesg <==
	[  +0.076638] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.219354] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.155156] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.370924] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.713468] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.071849] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.146415] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +1.245203] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.644001] systemd-fstab-generator[1267]: Ignoring "noauto" option for root device
	[  +0.080501] kauditd_printk_skb: 30 callbacks suppressed
	[  +6.237493] kauditd_printk_skb: 18 callbacks suppressed
	[Jul 3 23:58] systemd-fstab-generator[1491]: Ignoring "noauto" option for root device
	[ +23.116315] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.523332] systemd-fstab-generator[2382]: Ignoring "noauto" option for root device
	[  +0.305404] systemd-fstab-generator[2518]: Ignoring "noauto" option for root device
	[  +0.354249] systemd-fstab-generator[2579]: Ignoring "noauto" option for root device
	[  +0.271746] systemd-fstab-generator[2619]: Ignoring "noauto" option for root device
	[  +0.504848] systemd-fstab-generator[2715]: Ignoring "noauto" option for root device
	[ +11.099685] systemd-fstab-generator[2982]: Ignoring "noauto" option for root device
	[  +0.094353] kauditd_printk_skb: 173 callbacks suppressed
	[  +5.278302] kauditd_printk_skb: 87 callbacks suppressed
	[  +2.161808] systemd-fstab-generator[3721]: Ignoring "noauto" option for root device
	[  +4.636638] kauditd_printk_skb: 47 callbacks suppressed
	[Jul 3 23:59] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.990236] systemd-fstab-generator[4194]: Ignoring "noauto" option for root device
	
	
	==> etcd [44fe9aae4f1190710a4463d895f2997fe66073037c12b7657990aacaa99ac387] <==
	{"level":"info","ts":"2024-07-03T23:59:06.110467Z","caller":"traceutil/trace.go:171","msg":"trace[519604106] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-672261; range_end:; response_count:1; response_revision:444; }","duration":"138.443599ms","start":"2024-07-03T23:59:05.972013Z","end":"2024-07-03T23:59:06.110456Z","steps":["trace[519604106] 'agreement among raft nodes before linearized reading'  (duration: 138.337584ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T23:59:06.110711Z","caller":"traceutil/trace.go:171","msg":"trace[1034398405] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"439.712927ms","start":"2024-07-03T23:59:05.670987Z","end":"2024-07-03T23:59:06.1107Z","steps":["trace[1034398405] 'process raft request'  (duration: 439.102893ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:59:06.110815Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-03T23:59:05.670963Z","time spent":"439.793311ms","remote":"127.0.0.1:51820","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5477,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-pause-672261\" mod_revision:387 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-pause-672261\" value_size:5425 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-pause-672261\" > >"}
	{"level":"warn","ts":"2024-07-03T23:59:06.598885Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.092176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.61.246\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-07-03T23:59:06.599002Z","caller":"traceutil/trace.go:171","msg":"trace[1645983398] range","detail":"{range_begin:/registry/masterleases/192.168.61.246; range_end:; response_count:1; response_revision:444; }","duration":"213.19869ms","start":"2024-07-03T23:59:06.385733Z","end":"2024-07-03T23:59:06.598931Z","steps":["trace[1645983398] 'range keys from in-memory index tree'  (duration: 212.956706ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T23:59:06.740677Z","caller":"traceutil/trace.go:171","msg":"trace[1574541755] linearizableReadLoop","detail":"{readStateIndex:481; appliedIndex:480; }","duration":"117.316923ms","start":"2024-07-03T23:59:06.623345Z","end":"2024-07-03T23:59:06.740662Z","steps":["trace[1574541755] 'read index received'  (duration: 117.148864ms)","trace[1574541755] 'applied index is now lower than readState.Index'  (duration: 167.119µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-03T23:59:06.740787Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.428557ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-672261\" ","response":"range_response_count:1 size:7003"}
	{"level":"info","ts":"2024-07-03T23:59:06.740806Z","caller":"traceutil/trace.go:171","msg":"trace[1628249616] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-672261; range_end:; response_count:1; response_revision:444; }","duration":"117.491385ms","start":"2024-07-03T23:59:06.623309Z","end":"2024-07-03T23:59:06.740801Z","steps":["trace[1628249616] 'agreement among raft nodes before linearized reading'  (duration: 117.424391ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:59:06.990229Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.791643ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5082471037600801522 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.246\" mod_revision:393 > success:<request_put:<key:\"/registry/masterleases/192.168.61.246\" value_size:67 lease:5082471037600801519 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.246\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-03T23:59:06.990436Z","caller":"traceutil/trace.go:171","msg":"trace[825486895] linearizableReadLoop","detail":"{readStateIndex:482; appliedIndex:481; }","duration":"245.85587ms","start":"2024-07-03T23:59:06.744565Z","end":"2024-07-03T23:59:06.990421Z","steps":["trace[825486895] 'read index received'  (duration: 119.709156ms)","trace[825486895] 'applied index is now lower than readState.Index'  (duration: 126.142992ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-03T23:59:06.990528Z","caller":"traceutil/trace.go:171","msg":"trace[838349317] transaction","detail":"{read_only:false; response_revision:445; number_of_response:1; }","duration":"248.765132ms","start":"2024-07-03T23:59:06.741743Z","end":"2024-07-03T23:59:06.990508Z","steps":["trace[838349317] 'process raft request'  (duration: 122.446551ms)","trace[838349317] 'compare'  (duration: 125.621395ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-03T23:59:06.990648Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.091299ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-672261\" ","response":"range_response_count:1 size:5427"}
	{"level":"info","ts":"2024-07-03T23:59:06.990711Z","caller":"traceutil/trace.go:171","msg":"trace[1337703407] range","detail":"{range_begin:/registry/minions/pause-672261; range_end:; response_count:1; response_revision:445; }","duration":"246.184764ms","start":"2024-07-03T23:59:06.744517Z","end":"2024-07-03T23:59:06.990702Z","steps":["trace[1337703407] 'agreement among raft nodes before linearized reading'  (duration: 245.982294ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:59:07.401729Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.568844ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-pause-672261\" ","response":"range_response_count:1 size:4565"}
	{"level":"info","ts":"2024-07-03T23:59:07.401905Z","caller":"traceutil/trace.go:171","msg":"trace[1563463524] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-pause-672261; range_end:; response_count:1; response_revision:445; }","duration":"223.781155ms","start":"2024-07-03T23:59:07.178113Z","end":"2024-07-03T23:59:07.401894Z","steps":["trace[1563463524] 'range keys from in-memory index tree'  (duration: 223.460695ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:59:07.401869Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.404275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-672261\" ","response":"range_response_count:1 size:7003"}
	{"level":"info","ts":"2024-07-03T23:59:07.402385Z","caller":"traceutil/trace.go:171","msg":"trace[2057084988] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-672261; range_end:; response_count:1; response_revision:445; }","duration":"279.945079ms","start":"2024-07-03T23:59:07.122431Z","end":"2024-07-03T23:59:07.402376Z","steps":["trace[2057084988] 'range keys from in-memory index tree'  (duration: 279.31635ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T23:59:07.700787Z","caller":"traceutil/trace.go:171","msg":"trace[1439072368] linearizableReadLoop","detail":"{readStateIndex:483; appliedIndex:482; }","duration":"199.256758ms","start":"2024-07-03T23:59:07.501516Z","end":"2024-07-03T23:59:07.700773Z","steps":["trace[1439072368] 'read index received'  (duration: 199.123562ms)","trace[1439072368] 'applied index is now lower than readState.Index'  (duration: 132.776µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-03T23:59:07.701207Z","caller":"traceutil/trace.go:171","msg":"trace[1451915066] transaction","detail":"{read_only:false; response_revision:446; number_of_response:1; }","duration":"290.510416ms","start":"2024-07-03T23:59:07.41068Z","end":"2024-07-03T23:59:07.70119Z","steps":["trace[1451915066] 'process raft request'  (duration: 290.003906ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:59:07.701291Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.762363ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2024-07-03T23:59:07.702697Z","caller":"traceutil/trace.go:171","msg":"trace[712141964] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:446; }","duration":"201.361876ms","start":"2024-07-03T23:59:07.501323Z","end":"2024-07-03T23:59:07.702685Z","steps":["trace[712141964] 'agreement among raft nodes before linearized reading'  (duration: 199.905075ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:59:07.947423Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"239.524229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-672261\" ","response":"range_response_count:1 size:5427"}
	{"level":"info","ts":"2024-07-03T23:59:07.947498Z","caller":"traceutil/trace.go:171","msg":"trace[1539846134] range","detail":"{range_begin:/registry/minions/pause-672261; range_end:; response_count:1; response_revision:446; }","duration":"239.633834ms","start":"2024-07-03T23:59:07.707852Z","end":"2024-07-03T23:59:07.947486Z","steps":["trace[1539846134] 'range keys from in-memory index tree'  (duration: 239.433922ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T23:59:07.947667Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.809326ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2024-07-03T23:59:07.94816Z","caller":"traceutil/trace.go:171","msg":"trace[2138599052] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:446; }","duration":"241.200944ms","start":"2024-07-03T23:59:07.706817Z","end":"2024-07-03T23:59:07.948018Z","steps":["trace[2138599052] 'range keys from in-memory index tree'  (duration: 240.742565ms)"],"step_count":1}
	
	
	==> etcd [5083f2257a546bfd070658447229d9a46babe2aa2575ac4125da0983feb5d4ce] <==
	{"level":"info","ts":"2024-07-03T23:58:47.012091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-03T23:58:47.01223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-03T23:58:47.01229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 received MsgPreVoteResp from c9a5eb5753c44688 at term 2"}
	{"level":"info","ts":"2024-07-03T23:58:47.012332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 became candidate at term 3"}
	{"level":"info","ts":"2024-07-03T23:58:47.012366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 received MsgVoteResp from c9a5eb5753c44688 at term 3"}
	{"level":"info","ts":"2024-07-03T23:58:47.012432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 became leader at term 3"}
	{"level":"info","ts":"2024-07-03T23:58:47.012469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c9a5eb5753c44688 elected leader c9a5eb5753c44688 at term 3"}
	{"level":"info","ts":"2024-07-03T23:58:47.014252Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c9a5eb5753c44688","local-member-attributes":"{Name:pause-672261 ClientURLs:[https://192.168.61.246:2379]}","request-path":"/0/members/c9a5eb5753c44688/attributes","cluster-id":"f649e0b6c01be2c4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-03T23:58:47.014592Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-03T23:58:47.017144Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-03T23:58:47.017192Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-03T23:58:47.017207Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-03T23:58:47.017833Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.246:2379"}
	{"level":"info","ts":"2024-07-03T23:58:47.021131Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	2024/07/03 23:58:48 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-03T23:58:50.033253Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-03T23:58:50.033317Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-672261","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.246:2380"],"advertise-client-urls":["https://192.168.61.246:2379"]}
	{"level":"warn","ts":"2024-07-03T23:58:50.033404Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-03T23:58:50.033431Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-03T23:58:50.035298Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.246:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-03T23:58:50.035388Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.246:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-03T23:58:50.035478Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c9a5eb5753c44688","current-leader-member-id":"c9a5eb5753c44688"}
	{"level":"info","ts":"2024-07-03T23:58:50.038526Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.61.246:2380"}
	{"level":"info","ts":"2024-07-03T23:58:50.038656Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.61.246:2380"}
	{"level":"info","ts":"2024-07-03T23:58:50.038667Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-672261","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.246:2380"],"advertise-client-urls":["https://192.168.61.246:2379"]}
	
	
	==> kernel <==
	 23:59:17 up 1 min,  0 users,  load average: 0.82, 0.36, 0.14
	Linux pause-672261 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6ec9605f6c153b5ca6c125367fafd9970bb84b2363910e7fa2122f505a50e576] <==
	I0703 23:58:55.077913       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0703 23:58:55.078664       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0703 23:58:55.078985       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0703 23:58:55.079332       1 shared_informer.go:320] Caches are synced for configmaps
	I0703 23:58:55.079728       1 aggregator.go:165] initial CRD sync complete...
	I0703 23:58:55.079773       1 autoregister_controller.go:141] Starting autoregister controller
	I0703 23:58:55.079797       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0703 23:58:55.079820       1 cache.go:39] Caches are synced for autoregister controller
	I0703 23:58:55.136456       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0703 23:58:55.138866       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0703 23:58:55.138924       1 policy_source.go:224] refreshing policies
	I0703 23:58:55.169898       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0703 23:58:55.989495       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0703 23:58:56.818521       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0703 23:58:56.836836       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0703 23:58:56.879567       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0703 23:58:56.921758       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0703 23:58:56.933668       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0703 23:59:06.991282       1 trace.go:236] Trace[206545184]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.61.246,type:*v1.Endpoints,resource:apiServerIPInfo (03-Jul-2024 23:59:06.385) (total time: 606ms):
	Trace[206545184]: ---"initial value restored" 214ms (23:59:06.599)
	Trace[206545184]: ---"Transaction prepared" 141ms (23:59:06.741)
	Trace[206545184]: ---"Txn call completed" 249ms (23:59:06.991)
	Trace[206545184]: [606.04016ms] [606.04016ms] END
	I0703 23:59:08.114392       1 controller.go:615] quota admission added evaluator for: endpoints
	I0703 23:59:08.228384       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [c938b9c08ac5e4edee6355982405f2f340e762c0dae0d9e19ceaf3d5f9163fa0] <==
	E0703 23:58:48.749396       1 controller.go:123] "Will retry updating lease" err="failed 5 attempts to update lease" interval="10s"
	I0703 23:58:48.750909       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0703 23:58:48.751168       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0703 23:58:48.751258       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0703 23:58:48.751292       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0703 23:58:48.751329       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0703 23:58:48.751361       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0703 23:58:48.751379       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0703 23:58:48.752235       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0703 23:58:48.752333       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0703 23:58:48.752845       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0703 23:58:48.752901       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0703 23:58:48.753114       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 155.5µs, panicked: false, err: context canceled, panic-reason: <nil>
	I0703 23:58:48.753294       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0703 23:58:48.757274       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0703 23:58:48.757657       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0703 23:58:48.757861       1 timeout.go:142] post-timeout activity - time-elapsed: 4.665486ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-2tttcgg3p3rkfk47loizpgnv64" result: <nil>
	I0703 23:58:48.760539       1 controller.go:157] Shutting down quota evaluator
	I0703 23:58:48.760695       1 controller.go:176] quota evaluator worker shutdown
	I0703 23:58:48.761244       1 controller.go:176] quota evaluator worker shutdown
	I0703 23:58:48.761256       1 controller.go:176] quota evaluator worker shutdown
	I0703 23:58:48.761720       1 controller.go:176] quota evaluator worker shutdown
	I0703 23:58:48.761973       1 controller.go:176] quota evaluator worker shutdown
	W0703 23:58:49.570936       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0703 23:58:49.571643       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	
	
	==> kube-controller-manager [bd38fef524a488bdebc66021b178e98588666ebb95859ce13e148b0a1079282f] <==
	I0703 23:59:08.215348       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0703 23:59:08.221728       1 shared_informer.go:320] Caches are synced for TTL
	I0703 23:59:08.249368       1 shared_informer.go:320] Caches are synced for GC
	I0703 23:59:08.249608       1 shared_informer.go:320] Caches are synced for taint
	I0703 23:59:08.251693       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0703 23:59:08.252311       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-672261"
	I0703 23:59:08.253015       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0703 23:59:08.255167       1 shared_informer.go:320] Caches are synced for node
	I0703 23:59:08.255358       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0703 23:59:08.255473       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0703 23:59:08.255500       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0703 23:59:08.255573       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0703 23:59:08.292183       1 shared_informer.go:320] Caches are synced for namespace
	I0703 23:59:08.314134       1 shared_informer.go:320] Caches are synced for ephemeral
	I0703 23:59:08.317500       1 shared_informer.go:320] Caches are synced for service account
	I0703 23:59:08.318754       1 shared_informer.go:320] Caches are synced for stateful set
	I0703 23:59:08.352922       1 shared_informer.go:320] Caches are synced for resource quota
	I0703 23:59:08.354363       1 shared_informer.go:320] Caches are synced for resource quota
	I0703 23:59:08.358641       1 shared_informer.go:320] Caches are synced for expand
	I0703 23:59:08.358651       1 shared_informer.go:320] Caches are synced for attach detach
	I0703 23:59:08.367938       1 shared_informer.go:320] Caches are synced for persistent volume
	I0703 23:59:08.370581       1 shared_informer.go:320] Caches are synced for PVC protection
	I0703 23:59:08.777538       1 shared_informer.go:320] Caches are synced for garbage collector
	I0703 23:59:08.812249       1 shared_informer.go:320] Caches are synced for garbage collector
	I0703 23:59:08.812299       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [c051fb73177f2b8f53a64318b2c9c40a7d3325c2de510fab240ab9e74127c500] <==
	I0703 23:58:46.243621       1 serving.go:380] Generated self-signed cert in-memory
	I0703 23:58:46.652651       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0703 23:58:46.652690       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:58:46.654435       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0703 23:58:46.660188       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0703 23:58:46.660505       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0703 23:58:46.660651       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-proxy [7580dd19096e8dfdeedb31871cc6752208e7b1688abfc59c76d9d64fbdb5b18f] <==
	
	
	==> kube-proxy [9a2f3ab0edd24f1ccaae2b0559368a3d39cb960d55b0e9c2dce6fc9d58b6b10c] <==
	I0703 23:58:56.051687       1 server_linux.go:69] "Using iptables proxy"
	I0703 23:58:56.076720       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.246"]
	I0703 23:58:56.137510       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0703 23:58:56.137669       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0703 23:58:56.137775       1 server_linux.go:165] "Using iptables Proxier"
	I0703 23:58:56.141402       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0703 23:58:56.141592       1 server.go:872] "Version info" version="v1.30.2"
	I0703 23:58:56.141629       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:58:56.143353       1 config.go:192] "Starting service config controller"
	I0703 23:58:56.143380       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0703 23:58:56.143419       1 config.go:101] "Starting endpoint slice config controller"
	I0703 23:58:56.143424       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0703 23:58:56.143752       1 config.go:319] "Starting node config controller"
	I0703 23:58:56.143781       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0703 23:58:56.243508       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0703 23:58:56.243611       1 shared_informer.go:320] Caches are synced for service config
	I0703 23:58:56.243866       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9c33b979b20f24726f0e533d046fcc39df3993b40283d1d086266d97d5bc34a4] <==
	I0703 23:58:46.857469       1 serving.go:380] Generated self-signed cert in-memory
	W0703 23:58:48.587752       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0703 23:58:48.588020       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0703 23:58:48.588141       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0703 23:58:48.588166       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0703 23:58:48.630759       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0703 23:58:48.631259       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:58:48.638208       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0703 23:58:48.638288       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0703 23:58:48.638326       1 shared_informer.go:316] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0703 23:58:48.638357       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0703 23:58:48.638576       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0703 23:58:48.638654       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0703 23:58:48.638857       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0703 23:58:48.638960       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0703 23:58:48.641433       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0703 23:58:48.641592       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a6b6112031159bb3d610ddd89f8128b4df4459509810094b0bc5fdb2f34aa20d] <==
	I0703 23:58:53.325261       1 serving.go:380] Generated self-signed cert in-memory
	I0703 23:58:55.099769       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0703 23:58:55.099894       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 23:58:55.104404       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0703 23:58:55.104819       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0703 23:58:55.104891       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0703 23:58:55.105119       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0703 23:58:55.104822       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0703 23:58:55.105293       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0703 23:58:55.104744       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0703 23:58:55.105473       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0703 23:58:55.206223       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0703 23:58:55.206351       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0703 23:58:55.206224       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Jul 03 23:58:51 pause-672261 kubelet[3728]: E0703 23:58:51.760214    3728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-672261?timeout=10s\": dial tcp 192.168.61.246:8443: connect: connection refused" interval="400ms"
	Jul 03 23:58:51 pause-672261 kubelet[3728]: I0703 23:58:51.845286    3728 kubelet_node_status.go:73] "Attempting to register node" node="pause-672261"
	Jul 03 23:58:51 pause-672261 kubelet[3728]: E0703 23:58:51.846167    3728 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.246:8443: connect: connection refused" node="pause-672261"
	Jul 03 23:58:52 pause-672261 kubelet[3728]: I0703 23:58:52.031990    3728 scope.go:117] "RemoveContainer" containerID="9c33b979b20f24726f0e533d046fcc39df3993b40283d1d086266d97d5bc34a4"
	Jul 03 23:58:52 pause-672261 kubelet[3728]: I0703 23:58:52.033106    3728 scope.go:117] "RemoveContainer" containerID="5083f2257a546bfd070658447229d9a46babe2aa2575ac4125da0983feb5d4ce"
	Jul 03 23:58:52 pause-672261 kubelet[3728]: I0703 23:58:52.034308    3728 scope.go:117] "RemoveContainer" containerID="c938b9c08ac5e4edee6355982405f2f340e762c0dae0d9e19ceaf3d5f9163fa0"
	Jul 03 23:58:52 pause-672261 kubelet[3728]: I0703 23:58:52.035413    3728 scope.go:117] "RemoveContainer" containerID="c051fb73177f2b8f53a64318b2c9c40a7d3325c2de510fab240ab9e74127c500"
	Jul 03 23:58:52 pause-672261 kubelet[3728]: E0703 23:58:52.161543    3728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-672261?timeout=10s\": dial tcp 192.168.61.246:8443: connect: connection refused" interval="800ms"
	Jul 03 23:58:52 pause-672261 kubelet[3728]: I0703 23:58:52.247681    3728 kubelet_node_status.go:73] "Attempting to register node" node="pause-672261"
	Jul 03 23:58:52 pause-672261 kubelet[3728]: E0703 23:58:52.249126    3728 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.246:8443: connect: connection refused" node="pause-672261"
	Jul 03 23:58:53 pause-672261 kubelet[3728]: I0703 23:58:53.052265    3728 kubelet_node_status.go:73] "Attempting to register node" node="pause-672261"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.223296    3728 kubelet_node_status.go:112] "Node was previously registered" node="pause-672261"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.223687    3728 kubelet_node_status.go:76] "Successfully registered node" node="pause-672261"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.225691    3728 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.226800    3728 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: E0703 23:58:55.347857    3728 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"etcd-pause-672261\" already exists" pod="kube-system/etcd-pause-672261"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.536454    3728 apiserver.go:52] "Watching apiserver"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.539984    3728 topology_manager.go:215] "Topology Admit Handler" podUID="ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2" podNamespace="kube-system" podName="kube-proxy-mwcv2"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.540286    3728 topology_manager.go:215] "Topology Admit Handler" podUID="9b7401eb-5d71-440b-ac85-f1a3ab07de21" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sr5wm"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.541590    3728 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.565601    3728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2-xtables-lock\") pod \"kube-proxy-mwcv2\" (UID: \"ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2\") " pod="kube-system/kube-proxy-mwcv2"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.565868    3728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2-lib-modules\") pod \"kube-proxy-mwcv2\" (UID: \"ef424c8b-cb9f-4d29-8c1c-61a5db18ceb2\") " pod="kube-system/kube-proxy-mwcv2"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.841438    3728 scope.go:117] "RemoveContainer" containerID="e655f54b5cf30a17316bc7ab25e05d48d2099763dea5f1873754c04eee91b0e9"
	Jul 03 23:58:55 pause-672261 kubelet[3728]: I0703 23:58:55.842951    3728 scope.go:117] "RemoveContainer" containerID="7580dd19096e8dfdeedb31871cc6752208e7b1688abfc59c76d9d64fbdb5b18f"
	Jul 03 23:59:00 pause-672261 kubelet[3728]: I0703 23:59:00.399898    3728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-672261 -n pause-672261
helpers_test.go:261: (dbg) Run:  kubectl --context pause-672261 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (67.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (296.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-979033 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-979033 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m56.571950983s)

-- stdout --
	* [old-k8s-version-979033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18998
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-979033" primary control-plane node in "old-k8s-version-979033" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0703 23:59:46.524131   58668 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:59:46.524243   58668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:59:46.524252   58668 out.go:304] Setting ErrFile to fd 2...
	I0703 23:59:46.524256   58668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:59:46.524426   58668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 23:59:46.525004   58668 out.go:298] Setting JSON to false
	I0703 23:59:46.525956   58668 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6126,"bootTime":1720045060,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:59:46.526018   58668 start.go:139] virtualization: kvm guest
	I0703 23:59:46.528249   58668 out.go:177] * [old-k8s-version-979033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 23:59:46.529779   58668 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 23:59:46.529812   58668 notify.go:220] Checking for updates...
	I0703 23:59:46.532586   58668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:59:46.534104   58668 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:59:46.535523   58668 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:59:46.536819   58668 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 23:59:46.538066   58668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 23:59:46.540300   58668 config.go:182] Loaded profile config "cert-expiration-979438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:59:46.540454   58668 config.go:182] Loaded profile config "cert-options-768841": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:59:46.540563   58668 config.go:182] Loaded profile config "kubernetes-upgrade-652205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:59:46.541034   58668 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:59:46.586885   58668 out.go:177] * Using the kvm2 driver based on user configuration
	I0703 23:59:46.588289   58668 start.go:297] selected driver: kvm2
	I0703 23:59:46.588312   58668 start.go:901] validating driver "kvm2" against <nil>
	I0703 23:59:46.588329   58668 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 23:59:46.589370   58668 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:59:46.589478   58668 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 23:59:46.606176   58668 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 23:59:46.606222   58668 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0703 23:59:46.606487   58668 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 23:59:46.606539   58668 cni.go:84] Creating CNI manager for ""
	I0703 23:59:46.606556   58668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 23:59:46.606572   58668 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0703 23:59:46.606634   58668 start.go:340] cluster config:
	{Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:59:46.606764   58668 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:59:46.608417   58668 out.go:177] * Starting "old-k8s-version-979033" primary control-plane node in "old-k8s-version-979033" cluster
	I0703 23:59:46.609713   58668 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0703 23:59:46.609763   58668 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0703 23:59:46.609776   58668 cache.go:56] Caching tarball of preloaded images
	I0703 23:59:46.609909   58668 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0703 23:59:46.609923   58668 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0703 23:59:46.610052   58668 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/config.json ...
	I0703 23:59:46.610085   58668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/config.json: {Name:mk6ff6d1e0f449297d4b8a14766b568decb09b7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 23:59:46.610240   58668 start.go:360] acquireMachinesLock for old-k8s-version-979033: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0704 00:00:11.029558   58668 start.go:364] duration metric: took 24.419275178s to acquireMachinesLock for "old-k8s-version-979033"
	I0704 00:00:11.029649   58668 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:00:11.029779   58668 start.go:125] createHost starting for "" (driver="kvm2")
	I0704 00:00:11.033106   58668 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0704 00:00:11.033385   58668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:00:11.033447   58668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:00:11.051470   58668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41131
	I0704 00:00:11.052032   58668 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:00:11.060512   58668 main.go:141] libmachine: Using API Version  1
	I0704 00:00:11.060540   58668 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:00:11.060905   58668 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:00:11.061144   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:00:11.061322   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:00:11.061568   58668 start.go:159] libmachine.API.Create for "old-k8s-version-979033" (driver="kvm2")
	I0704 00:00:11.061620   58668 client.go:168] LocalClient.Create starting
	I0704 00:00:11.061657   58668 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem
	I0704 00:00:11.061695   58668 main.go:141] libmachine: Decoding PEM data...
	I0704 00:00:11.061714   58668 main.go:141] libmachine: Parsing certificate...
	I0704 00:00:11.061778   58668 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem
	I0704 00:00:11.061811   58668 main.go:141] libmachine: Decoding PEM data...
	I0704 00:00:11.061828   58668 main.go:141] libmachine: Parsing certificate...
	I0704 00:00:11.061850   58668 main.go:141] libmachine: Running pre-create checks...
	I0704 00:00:11.061861   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .PreCreateCheck
	I0704 00:00:11.062345   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetConfigRaw
	I0704 00:00:11.062841   58668 main.go:141] libmachine: Creating machine...
	I0704 00:00:11.062860   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .Create
	I0704 00:00:11.063031   58668 main.go:141] libmachine: (old-k8s-version-979033) Creating KVM machine...
	I0704 00:00:11.064733   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found existing default KVM network
	I0704 00:00:11.067006   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:11.066792   58953 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fa:41:7f} reservation:<nil>}
	I0704 00:00:11.068323   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:11.068145   58953 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f5:85:96} reservation:<nil>}
	I0704 00:00:11.069576   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:11.069502   58953 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:e7:0f:46} reservation:<nil>}
	I0704 00:00:11.071029   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:11.070954   58953 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002c1870}
	I0704 00:00:11.071273   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | created network xml: 
	I0704 00:00:11.071286   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | <network>
	I0704 00:00:11.071302   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG |   <name>mk-old-k8s-version-979033</name>
	I0704 00:00:11.071307   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG |   <dns enable='no'/>
	I0704 00:00:11.071312   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG |   
	I0704 00:00:11.071318   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0704 00:00:11.071325   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG |     <dhcp>
	I0704 00:00:11.071336   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0704 00:00:11.071341   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG |     </dhcp>
	I0704 00:00:11.071345   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG |   </ip>
	I0704 00:00:11.071351   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG |   
	I0704 00:00:11.071355   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | </network>
	I0704 00:00:11.071362   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | 
	I0704 00:00:11.077697   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | trying to create private KVM network mk-old-k8s-version-979033 192.168.72.0/24...
	I0704 00:00:11.175322   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | private KVM network mk-old-k8s-version-979033 192.168.72.0/24 created
	I0704 00:00:11.175351   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:11.175256   58953 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0704 00:00:11.175373   58668 main.go:141] libmachine: (old-k8s-version-979033) Setting up store path in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033 ...
	I0704 00:00:11.175385   58668 main.go:141] libmachine: (old-k8s-version-979033) Building disk image from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0704 00:00:11.175469   58668 main.go:141] libmachine: (old-k8s-version-979033) Downloading /home/jenkins/minikube-integration/18998-9396/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso...
	I0704 00:00:11.422179   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:11.422067   58953 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa...
	I0704 00:00:11.529772   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:11.529607   58953 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/old-k8s-version-979033.rawdisk...
	I0704 00:00:11.529808   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | Writing magic tar header
	I0704 00:00:11.529865   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | Writing SSH key tar header
	I0704 00:00:11.529905   58668 main.go:141] libmachine: (old-k8s-version-979033) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033 (perms=drwx------)
	I0704 00:00:11.529945   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:11.529745   58953 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033 ...
	I0704 00:00:11.529963   58668 main.go:141] libmachine: (old-k8s-version-979033) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines (perms=drwxr-xr-x)
	I0704 00:00:11.529981   58668 main.go:141] libmachine: (old-k8s-version-979033) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube (perms=drwxr-xr-x)
	I0704 00:00:11.529994   58668 main.go:141] libmachine: (old-k8s-version-979033) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396 (perms=drwxrwxr-x)
	I0704 00:00:11.530071   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033
	I0704 00:00:11.530099   58668 main.go:141] libmachine: (old-k8s-version-979033) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0704 00:00:11.530111   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines
	I0704 00:00:11.530125   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0704 00:00:11.530138   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396
	I0704 00:00:11.530152   58668 main.go:141] libmachine: (old-k8s-version-979033) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0704 00:00:11.530166   58668 main.go:141] libmachine: (old-k8s-version-979033) Creating domain...
	I0704 00:00:11.530177   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0704 00:00:11.530188   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | Checking permissions on dir: /home/jenkins
	I0704 00:00:11.530201   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | Checking permissions on dir: /home
	I0704 00:00:11.530211   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | Skipping /home - not owner
	I0704 00:00:11.531589   58668 main.go:141] libmachine: (old-k8s-version-979033) define libvirt domain using xml: 
	I0704 00:00:11.531612   58668 main.go:141] libmachine: (old-k8s-version-979033) <domain type='kvm'>
	I0704 00:00:11.531622   58668 main.go:141] libmachine: (old-k8s-version-979033)   <name>old-k8s-version-979033</name>
	I0704 00:00:11.531630   58668 main.go:141] libmachine: (old-k8s-version-979033)   <memory unit='MiB'>2200</memory>
	I0704 00:00:11.531638   58668 main.go:141] libmachine: (old-k8s-version-979033)   <vcpu>2</vcpu>
	I0704 00:00:11.531645   58668 main.go:141] libmachine: (old-k8s-version-979033)   <features>
	I0704 00:00:11.531666   58668 main.go:141] libmachine: (old-k8s-version-979033)     <acpi/>
	I0704 00:00:11.531672   58668 main.go:141] libmachine: (old-k8s-version-979033)     <apic/>
	I0704 00:00:11.531679   58668 main.go:141] libmachine: (old-k8s-version-979033)     <pae/>
	I0704 00:00:11.531687   58668 main.go:141] libmachine: (old-k8s-version-979033)     
	I0704 00:00:11.531696   58668 main.go:141] libmachine: (old-k8s-version-979033)   </features>
	I0704 00:00:11.531702   58668 main.go:141] libmachine: (old-k8s-version-979033)   <cpu mode='host-passthrough'>
	I0704 00:00:11.531710   58668 main.go:141] libmachine: (old-k8s-version-979033)   
	I0704 00:00:11.531716   58668 main.go:141] libmachine: (old-k8s-version-979033)   </cpu>
	I0704 00:00:11.531723   58668 main.go:141] libmachine: (old-k8s-version-979033)   <os>
	I0704 00:00:11.531730   58668 main.go:141] libmachine: (old-k8s-version-979033)     <type>hvm</type>
	I0704 00:00:11.531738   58668 main.go:141] libmachine: (old-k8s-version-979033)     <boot dev='cdrom'/>
	I0704 00:00:11.531744   58668 main.go:141] libmachine: (old-k8s-version-979033)     <boot dev='hd'/>
	I0704 00:00:11.531753   58668 main.go:141] libmachine: (old-k8s-version-979033)     <bootmenu enable='no'/>
	I0704 00:00:11.531760   58668 main.go:141] libmachine: (old-k8s-version-979033)   </os>
	I0704 00:00:11.531769   58668 main.go:141] libmachine: (old-k8s-version-979033)   <devices>
	I0704 00:00:11.531777   58668 main.go:141] libmachine: (old-k8s-version-979033)     <disk type='file' device='cdrom'>
	I0704 00:00:11.531802   58668 main.go:141] libmachine: (old-k8s-version-979033)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/boot2docker.iso'/>
	I0704 00:00:11.531809   58668 main.go:141] libmachine: (old-k8s-version-979033)       <target dev='hdc' bus='scsi'/>
	I0704 00:00:11.531814   58668 main.go:141] libmachine: (old-k8s-version-979033)       <readonly/>
	I0704 00:00:11.531818   58668 main.go:141] libmachine: (old-k8s-version-979033)     </disk>
	I0704 00:00:11.531824   58668 main.go:141] libmachine: (old-k8s-version-979033)     <disk type='file' device='disk'>
	I0704 00:00:11.531830   58668 main.go:141] libmachine: (old-k8s-version-979033)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0704 00:00:11.531838   58668 main.go:141] libmachine: (old-k8s-version-979033)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/old-k8s-version-979033.rawdisk'/>
	I0704 00:00:11.531843   58668 main.go:141] libmachine: (old-k8s-version-979033)       <target dev='hda' bus='virtio'/>
	I0704 00:00:11.531848   58668 main.go:141] libmachine: (old-k8s-version-979033)     </disk>
	I0704 00:00:11.531852   58668 main.go:141] libmachine: (old-k8s-version-979033)     <interface type='network'>
	I0704 00:00:11.531858   58668 main.go:141] libmachine: (old-k8s-version-979033)       <source network='mk-old-k8s-version-979033'/>
	I0704 00:00:11.531862   58668 main.go:141] libmachine: (old-k8s-version-979033)       <model type='virtio'/>
	I0704 00:00:11.531867   58668 main.go:141] libmachine: (old-k8s-version-979033)     </interface>
	I0704 00:00:11.531907   58668 main.go:141] libmachine: (old-k8s-version-979033)     <interface type='network'>
	I0704 00:00:11.531918   58668 main.go:141] libmachine: (old-k8s-version-979033)       <source network='default'/>
	I0704 00:00:11.531925   58668 main.go:141] libmachine: (old-k8s-version-979033)       <model type='virtio'/>
	I0704 00:00:11.531937   58668 main.go:141] libmachine: (old-k8s-version-979033)     </interface>
	I0704 00:00:11.531943   58668 main.go:141] libmachine: (old-k8s-version-979033)     <serial type='pty'>
	I0704 00:00:11.531950   58668 main.go:141] libmachine: (old-k8s-version-979033)       <target port='0'/>
	I0704 00:00:11.531956   58668 main.go:141] libmachine: (old-k8s-version-979033)     </serial>
	I0704 00:00:11.531965   58668 main.go:141] libmachine: (old-k8s-version-979033)     <console type='pty'>
	I0704 00:00:11.531972   58668 main.go:141] libmachine: (old-k8s-version-979033)       <target type='serial' port='0'/>
	I0704 00:00:11.531981   58668 main.go:141] libmachine: (old-k8s-version-979033)     </console>
	I0704 00:00:11.531992   58668 main.go:141] libmachine: (old-k8s-version-979033)     <rng model='virtio'>
	I0704 00:00:11.532004   58668 main.go:141] libmachine: (old-k8s-version-979033)       <backend model='random'>/dev/random</backend>
	I0704 00:00:11.532019   58668 main.go:141] libmachine: (old-k8s-version-979033)     </rng>
	I0704 00:00:11.532029   58668 main.go:141] libmachine: (old-k8s-version-979033)     
	I0704 00:00:11.532040   58668 main.go:141] libmachine: (old-k8s-version-979033)     
	I0704 00:00:11.532052   58668 main.go:141] libmachine: (old-k8s-version-979033)   </devices>
	I0704 00:00:11.532062   58668 main.go:141] libmachine: (old-k8s-version-979033) </domain>
	I0704 00:00:11.532074   58668 main.go:141] libmachine: (old-k8s-version-979033) 
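The block above is a verbatim dump of the libvirt domain XML that the kvm2 driver hands to libvirt before booting the VM. As a rough illustration only (not minikube's own code), the sketch below shows how such an XML description could be defined and started through the libvirt Go bindings; the import path and the defineAndStart helper are assumptions made for this example.

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

// defineAndStart persists a domain definition and boots it.
// Illustrative sketch only; error handling is deliberately terse.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI the profile uses (KVMQemuURI)
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // the "define libvirt domain using xml" step
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // boots the freshly defined domain ("Creating domain...")
}

func main() {
	// Placeholder XML; in the log this is the full <domain> document printed above.
	if err := defineAndStart("<domain type='kvm'><name>example</name>...</domain>"); err != nil {
		log.Fatal(err)
	}
}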
	I0704 00:00:11.537876   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:ab:fe:c3 in network default
	I0704 00:00:11.538609   58668 main.go:141] libmachine: (old-k8s-version-979033) Ensuring networks are active...
	I0704 00:00:11.538637   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:11.539869   58668 main.go:141] libmachine: (old-k8s-version-979033) Ensuring network default is active
	I0704 00:00:11.540296   58668 main.go:141] libmachine: (old-k8s-version-979033) Ensuring network mk-old-k8s-version-979033 is active
	I0704 00:00:11.540998   58668 main.go:141] libmachine: (old-k8s-version-979033) Getting domain xml...
	I0704 00:00:11.541831   58668 main.go:141] libmachine: (old-k8s-version-979033) Creating domain...
	I0704 00:00:12.952228   58668 main.go:141] libmachine: (old-k8s-version-979033) Waiting to get IP...
	I0704 00:00:12.953098   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:12.953567   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:00:12.953592   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:12.953536   58953 retry.go:31] will retry after 229.874564ms: waiting for machine to come up
	I0704 00:00:13.185226   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:13.185908   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:00:13.185948   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:13.185845   58953 retry.go:31] will retry after 357.943475ms: waiting for machine to come up
	I0704 00:00:13.545847   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:13.547053   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:00:13.547092   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:13.546916   58953 retry.go:31] will retry after 388.846207ms: waiting for machine to come up
	I0704 00:00:13.937454   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:13.938106   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:00:13.938129   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:13.938065   58953 retry.go:31] will retry after 404.621139ms: waiting for machine to come up
	I0704 00:00:14.345022   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:14.345520   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:00:14.345543   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:14.345470   58953 retry.go:31] will retry after 595.571574ms: waiting for machine to come up
	I0704 00:00:14.942516   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:14.943163   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:00:14.943201   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:14.943128   58953 retry.go:31] will retry after 722.971695ms: waiting for machine to come up
	I0704 00:00:15.668231   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:15.668730   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:00:15.668758   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:15.668686   58953 retry.go:31] will retry after 973.036812ms: waiting for machine to come up
	I0704 00:00:16.643566   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:16.644023   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:00:16.644053   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:16.643978   58953 retry.go:31] will retry after 1.072559314s: waiting for machine to come up
	I0704 00:00:17.717804   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:17.718238   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:00:17.718268   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:17.718183   58953 retry.go:31] will retry after 1.437120655s: waiting for machine to come up
	I0704 00:00:19.157375   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:19.157988   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:00:19.158012   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:19.157930   58953 retry.go:31] will retry after 1.605923846s: waiting for machine to come up
	I0704 00:00:20.765160   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:20.765665   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:00:20.765687   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:20.765625   58953 retry.go:31] will retry after 2.476639753s: waiting for machine to come up
	I0704 00:00:23.243743   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:23.244355   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:00:23.244380   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:23.244300   58953 retry.go:31] will retry after 2.495332847s: waiting for machine to come up
	I0704 00:00:25.741655   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:25.742213   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:00:25.742242   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:25.742165   58953 retry.go:31] will retry after 3.177310247s: waiting for machine to come up
	I0704 00:00:28.921061   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:28.921681   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:00:28.921713   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:00:28.921628   58953 retry.go:31] will retry after 4.903759878s: waiting for machine to come up
	I0704 00:00:34.193082   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.193582   58668 main.go:141] libmachine: (old-k8s-version-979033) Found IP for machine: 192.168.72.59
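The retry.go lines above poll libvirt for the domain's DHCP lease with steadily growing delays until an address appears. The sketch below is a minimal, generic version of that pattern, assuming an illustrative retryUntil helper and a stand-in lookup function; it is not the actual minikube retry implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling attempt with growing, jittered delays until it
// succeeds or the overall timeout is exceeded. Illustrative helper only.
func retryUntil(timeout time.Duration, attempt func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		err := attempt()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		// Sleep a jittered delay, then grow it, mirroring the increasing
		// "will retry after ..." intervals printed in the log.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
		delay = delay * 3 / 2
	}
}

func main() {
	attempts := 0
	err := retryUntil(30*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("unable to find current IP address") // stand-in for the DHCP lease lookup
		}
		return nil // pretend the lease finally appeared
	})
	fmt.Println("result:", err, "after", attempts, "attempts")
}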
	I0704 00:00:34.193603   58668 main.go:141] libmachine: (old-k8s-version-979033) Reserving static IP address...
	I0704 00:00:34.193615   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has current primary IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.194129   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-979033", mac: "52:54:00:af:98:c8", ip: "192.168.72.59"} in network mk-old-k8s-version-979033
	I0704 00:00:34.287211   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | Getting to WaitForSSH function...
	I0704 00:00:34.287243   58668 main.go:141] libmachine: (old-k8s-version-979033) Reserved static IP address: 192.168.72.59
	I0704 00:00:34.287256   58668 main.go:141] libmachine: (old-k8s-version-979033) Waiting for SSH to be available...
	I0704 00:00:34.290658   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.291202   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:minikube Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:34.291237   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.291488   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using SSH client type: external
	I0704 00:00:34.291517   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa (-rw-------)
	I0704 00:00:34.291544   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:00:34.291558   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | About to run SSH command:
	I0704 00:00:34.291576   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | exit 0
	I0704 00:00:34.424824   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | SSH cmd err, output: <nil>: 
	I0704 00:00:34.425131   58668 main.go:141] libmachine: (old-k8s-version-979033) KVM machine creation complete!
	I0704 00:00:34.425493   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetConfigRaw
	I0704 00:00:34.426009   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:00:34.426260   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:00:34.426445   58668 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0704 00:00:34.426474   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetState
	I0704 00:00:34.428029   58668 main.go:141] libmachine: Detecting operating system of created instance...
	I0704 00:00:34.428044   58668 main.go:141] libmachine: Waiting for SSH to be available...
	I0704 00:00:34.428052   58668 main.go:141] libmachine: Getting to WaitForSSH function...
	I0704 00:00:34.428059   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:34.430691   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.431087   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:34.431132   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.431296   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:34.431491   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:34.431650   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:34.431805   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:34.432028   58668 main.go:141] libmachine: Using SSH client type: native
	I0704 00:00:34.432321   58668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:00:34.432338   58668 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0704 00:00:34.539462   58668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:00:34.539484   58668 main.go:141] libmachine: Detecting the provisioner...
	I0704 00:00:34.539495   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:34.542732   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.543162   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:34.543203   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.543442   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:34.543670   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:34.543844   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:34.544009   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:34.544159   58668 main.go:141] libmachine: Using SSH client type: native
	I0704 00:00:34.544368   58668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:00:34.544383   58668 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0704 00:00:34.649075   58668 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0704 00:00:34.649229   58668 main.go:141] libmachine: found compatible host: buildroot
	I0704 00:00:34.649277   58668 main.go:141] libmachine: Provisioning with buildroot...
	I0704 00:00:34.649296   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:00:34.649574   58668 buildroot.go:166] provisioning hostname "old-k8s-version-979033"
	I0704 00:00:34.649616   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:00:34.649828   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:34.653644   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.654057   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:34.654087   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.654298   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:34.654528   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:34.654687   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:34.654837   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:34.655006   58668 main.go:141] libmachine: Using SSH client type: native
	I0704 00:00:34.655196   58668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:00:34.655211   58668 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-979033 && echo "old-k8s-version-979033" | sudo tee /etc/hostname
	I0704 00:00:34.774694   58668 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-979033
	
	I0704 00:00:34.774730   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:34.778383   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.778792   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:34.778819   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.779144   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:34.779431   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:34.779665   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:34.779835   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:34.780051   58668 main.go:141] libmachine: Using SSH client type: native
	I0704 00:00:34.780283   58668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:00:34.780311   58668 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-979033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-979033/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-979033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:00:34.896690   58668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:00:34.896722   58668 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:00:34.896770   58668 buildroot.go:174] setting up certificates
	I0704 00:00:34.896782   58668 provision.go:84] configureAuth start
	I0704 00:00:34.896798   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:00:34.897094   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:00:34.900289   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.900680   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:34.900722   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.900922   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:34.903648   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.904043   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:34.904074   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:34.904259   58668 provision.go:143] copyHostCerts
	I0704 00:00:34.904321   58668 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:00:34.904333   58668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:00:34.904390   58668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:00:34.904493   58668 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:00:34.904502   58668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:00:34.904522   58668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:00:34.904593   58668 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:00:34.904601   58668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:00:34.904617   58668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:00:34.904689   58668 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-979033 san=[127.0.0.1 192.168.72.59 localhost minikube old-k8s-version-979033]
	I0704 00:00:35.181466   58668 provision.go:177] copyRemoteCerts
	I0704 00:00:35.181532   58668 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:00:35.181563   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:35.184683   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.185081   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:35.185111   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.185300   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:35.185530   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:35.185673   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:35.185805   58668 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:00:35.271503   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:00:35.299933   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0704 00:00:35.330063   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0704 00:00:35.357363   58668 provision.go:87] duration metric: took 460.563889ms to configureAuth
	I0704 00:00:35.357393   58668 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:00:35.357589   58668 config.go:182] Loaded profile config "old-k8s-version-979033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0704 00:00:35.357657   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:35.360333   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.360775   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:35.360809   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.361013   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:35.361262   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:35.361428   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:35.361577   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:35.361749   58668 main.go:141] libmachine: Using SSH client type: native
	I0704 00:00:35.361929   58668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:00:35.361950   58668 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:00:35.652917   58668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:00:35.652950   58668 main.go:141] libmachine: Checking connection to Docker...
	I0704 00:00:35.652961   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetURL
	I0704 00:00:35.654259   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using libvirt version 6000000
	I0704 00:00:35.656886   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.657471   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:35.657514   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.657660   58668 main.go:141] libmachine: Docker is up and running!
	I0704 00:00:35.657680   58668 main.go:141] libmachine: Reticulating splines...
	I0704 00:00:35.657689   58668 client.go:171] duration metric: took 24.596057721s to LocalClient.Create
	I0704 00:00:35.657718   58668 start.go:167] duration metric: took 24.596150696s to libmachine.API.Create "old-k8s-version-979033"
	I0704 00:00:35.657729   58668 start.go:293] postStartSetup for "old-k8s-version-979033" (driver="kvm2")
	I0704 00:00:35.657741   58668 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:00:35.657759   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:00:35.658068   58668 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:00:35.658096   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:35.660695   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.661057   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:35.661090   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.661228   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:35.661464   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:35.661673   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:35.661914   58668 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:00:35.743345   58668 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:00:35.748645   58668 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:00:35.748676   58668 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:00:35.748765   58668 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:00:35.748855   58668 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:00:35.748962   58668 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:00:35.761598   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:00:35.798851   58668 start.go:296] duration metric: took 141.105745ms for postStartSetup
	I0704 00:00:35.798934   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetConfigRaw
	I0704 00:00:35.799748   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:00:35.803424   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.803835   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:35.803866   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.804157   58668 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/config.json ...
	I0704 00:00:35.804400   58668 start.go:128] duration metric: took 24.774599729s to createHost
	I0704 00:00:35.804426   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:35.807787   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.808505   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:35.808530   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.808863   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:35.809112   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:35.809306   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:35.809479   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:35.809710   58668 main.go:141] libmachine: Using SSH client type: native
	I0704 00:00:35.809942   58668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:00:35.809975   58668 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0704 00:00:35.935777   58668 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051235.907716412
	
	I0704 00:00:35.935804   58668 fix.go:216] guest clock: 1720051235.907716412
	I0704 00:00:35.935813   58668 fix.go:229] Guest: 2024-07-04 00:00:35.907716412 +0000 UTC Remote: 2024-07-04 00:00:35.804412433 +0000 UTC m=+49.322768963 (delta=103.303979ms)
	I0704 00:00:35.935859   58668 fix.go:200] guest clock delta is within tolerance: 103.303979ms
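fix.go reads the guest clock over SSH with "date +%s.%N" and only resyncs when the guest/host delta exceeds a tolerance. A minimal sketch of that comparison, assuming a hypothetical parseGuestClock helper and a 2-second tolerance, might look like the following; it is not the actual minikube code.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock converts the output of "date +%s.%N" into a time.Time.
// Hypothetical helper for this sketch; float parsing loses sub-microsecond
// precision, which is irrelevant at millisecond tolerances.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	whole := int64(secs)
	nanos := int64((secs - float64(whole)) * float64(time.Second))
	return time.Unix(whole, nanos), nil
}

func main() {
	guest, err := parseGuestClock("1720051235.907716412") // value printed in the log
	if err != nil {
		panic(err)
	}
	host := guest.Add(-103 * time.Millisecond) // stand-in for the local clock reading
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v; a resync would be needed\n", delta, tolerance)
	}
}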
	I0704 00:00:35.935865   58668 start.go:83] releasing machines lock for "old-k8s-version-979033", held for 24.906272196s
	I0704 00:00:35.935966   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:00:35.936814   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:00:35.941084   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.941480   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:35.941520   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.941865   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:00:35.942837   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:00:35.943050   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:00:35.943137   58668 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:00:35.943177   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:35.943748   58668 ssh_runner.go:195] Run: cat /version.json
	I0704 00:00:35.943811   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:00:35.947102   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.947522   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:35.947552   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.947673   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:35.947830   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:35.947980   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:35.948093   58668 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:00:35.948380   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.949076   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:35.949120   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:35.949080   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:00:35.949301   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:00:35.949506   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:00:35.949673   58668 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:00:36.052844   58668 ssh_runner.go:195] Run: systemctl --version
	I0704 00:00:36.061480   58668 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:00:36.258318   58668 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:00:36.264963   58668 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:00:36.265044   58668 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:00:36.288798   58668 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:00:36.288828   58668 start.go:494] detecting cgroup driver to use...
	I0704 00:00:36.288957   58668 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:00:36.312074   58668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:00:36.333588   58668 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:00:36.333654   58668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:00:36.350147   58668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:00:36.366618   58668 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:00:36.522633   58668 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:00:36.698779   58668 docker.go:233] disabling docker service ...
	I0704 00:00:36.698852   58668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:00:36.714705   58668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:00:36.730437   58668 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:00:36.880225   58668 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:00:37.003093   58668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:00:37.019184   58668 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:00:37.041860   58668 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0704 00:00:37.041942   58668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:00:37.054186   58668 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:00:37.054266   58668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:00:37.066575   58668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:00:37.078385   58668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:00:37.090041   58668 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:00:37.101810   58668 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:00:37.112061   58668 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:00:37.112123   58668 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:00:37.126731   58668 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:00:37.137332   58668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:00:37.253842   58668 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:00:37.399126   58668 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:00:37.399202   58668 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:00:37.405147   58668 start.go:562] Will wait 60s for crictl version
	I0704 00:00:37.405228   58668 ssh_runner.go:195] Run: which crictl
	I0704 00:00:37.410118   58668 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:00:37.454702   58668 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:00:37.454799   58668 ssh_runner.go:195] Run: crio --version
	I0704 00:00:37.486440   58668 ssh_runner.go:195] Run: crio --version
	I0704 00:00:37.523250   58668 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0704 00:00:37.524674   58668 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:00:37.528321   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:37.528773   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:00:26 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:00:37.528806   58668 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:00:37.529074   58668 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0704 00:00:37.533829   58668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:00:37.548193   58668 kubeadm.go:877] updating cluster {Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:00:37.548296   58668 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0704 00:00:37.548341   58668 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:00:37.581326   58668 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0704 00:00:37.581394   58668 ssh_runner.go:195] Run: which lz4
	I0704 00:00:37.585683   58668 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0704 00:00:37.590254   58668 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:00:37.590295   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0704 00:00:39.382825   58668 crio.go:462] duration metric: took 1.797183726s to copy over tarball
	I0704 00:00:39.382905   58668 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:00:42.010567   58668 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.627639356s)
	I0704 00:00:42.010590   58668 crio.go:469] duration metric: took 2.62773709s to extract the tarball
	I0704 00:00:42.010596   58668 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:00:42.055377   58668 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:00:42.109077   58668 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0704 00:00:42.109105   58668 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0704 00:00:42.109171   58668 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:00:42.109200   58668 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:00:42.109215   58668 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:00:42.109243   58668 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0704 00:00:42.109245   58668 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:00:42.109180   58668 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:00:42.109192   58668 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:00:42.109172   58668 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0704 00:00:42.110688   58668 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:00:42.110781   58668 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0704 00:00:42.110802   58668 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0704 00:00:42.110804   58668 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:00:42.110688   58668 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:00:42.110688   58668 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:00:42.111193   58668 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:00:42.111193   58668 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:00:42.245608   58668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:00:42.245692   58668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:00:42.271622   58668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0704 00:00:42.275561   58668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0704 00:00:42.282195   58668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0704 00:00:42.287075   58668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:00:42.294094   58668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:00:42.357424   58668 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0704 00:00:42.357477   58668 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:00:42.357526   58668 ssh_runner.go:195] Run: which crictl
	I0704 00:00:42.357426   58668 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0704 00:00:42.357564   58668 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:00:42.357612   58668 ssh_runner.go:195] Run: which crictl
	I0704 00:00:42.469161   58668 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0704 00:00:42.469224   58668 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:00:42.469235   58668 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0704 00:00:42.469276   58668 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0704 00:00:42.469291   58668 ssh_runner.go:195] Run: which crictl
	I0704 00:00:42.469314   58668 ssh_runner.go:195] Run: which crictl
	I0704 00:00:42.475133   58668 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0704 00:00:42.475176   58668 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:00:42.475133   58668 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0704 00:00:42.475199   58668 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0704 00:00:42.475273   58668 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:00:42.475306   58668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:00:42.475223   58668 ssh_runner.go:195] Run: which crictl
	I0704 00:00:42.475229   58668 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0704 00:00:42.475359   58668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:00:42.475365   58668 ssh_runner.go:195] Run: which crictl
	I0704 00:00:42.475317   58668 ssh_runner.go:195] Run: which crictl
	I0704 00:00:42.478790   58668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0704 00:00:42.478845   58668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0704 00:00:42.494284   58668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:00:42.597531   58668 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0704 00:00:42.597555   58668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:00:42.597641   58668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0704 00:00:42.597698   58668 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0704 00:00:42.608375   58668 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0704 00:00:42.622388   58668 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0704 00:00:42.622417   58668 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0704 00:00:42.660625   58668 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0704 00:00:42.660706   58668 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0704 00:00:43.054517   58668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:00:43.202873   58668 cache_images.go:92] duration metric: took 1.09374743s to LoadCachedImages
	W0704 00:00:43.202972   58668 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
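Note: the stat failure above only means the per-image cache under ~/.minikube/cache/images/ was never populated, so minikube falls back to pulling; the run continues. A minimal sketch (not executed by this test) of pre-seeding that cache for the v1.20.0 image set listed above, assuming minikube's `cache add` subcommand and the default cache location from the log:

    # Hypothetical pre-seeding of minikube's on-disk image cache
    # (image names and paths taken from the log above; not part of the recorded run).
    for img in \
        registry.k8s.io/kube-apiserver:v1.20.0 \
        registry.k8s.io/kube-controller-manager:v1.20.0 \
        registry.k8s.io/kube-scheduler:v1.20.0 \
        registry.k8s.io/kube-proxy:v1.20.0 \
        registry.k8s.io/etcd:3.4.13-0 \
        registry.k8s.io/coredns:1.7.0 \
        registry.k8s.io/pause:3.2 \
        gcr.io/k8s-minikube/storage-provisioner:v5; do
      minikube cache add "$img"
    done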
	I0704 00:00:43.202986   58668 kubeadm.go:928] updating node { 192.168.72.59 8443 v1.20.0 crio true true} ...
	I0704 00:00:43.203135   58668 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-979033 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
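The [Unit]/[Service] fragment above is the kubelet drop-in that minikube writes a few lines below as 10-kubeadm.conf. As a sketch only (the harness does not run this), the effective unit on the guest could be checked from the host once it has been written, using the profile name taken from this log:

    # Sketch: show the rendered kubelet unit plus its drop-ins on the old-k8s-version node.
    minikube ssh -p old-k8s-version-979033 -- 'systemctl cat kubelet --no-pager'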
	I0704 00:00:43.203223   58668 ssh_runner.go:195] Run: crio config
	I0704 00:00:43.253953   58668 cni.go:84] Creating CNI manager for ""
	I0704 00:00:43.253977   58668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:00:43.253991   58668 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:00:43.254008   58668 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.59 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-979033 NodeName:old-k8s-version-979033 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0704 00:00:43.254130   58668 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-979033"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.59"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:00:43.254190   58668 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0704 00:00:43.265103   58668 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:00:43.265192   58668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:00:43.276514   58668 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0704 00:00:43.296018   58668 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:00:43.315316   58668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
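At this point the rendered kubeadm config (the YAML printed above) sits on the node as /var/tmp/minikube/kubeadm.yaml.new. A non-mutating sanity check is sketched below, assuming kubeadm v1.20.0's --dry-run behaves as documented; the harness itself goes straight to the real init further down:

    # Sketch: dry-run the generated config on the node without changing cluster state
    # (profile name and binary path taken from this log).
    minikube ssh -p old-k8s-version-979033 -- \
      'sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run'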
	I0704 00:00:43.336710   58668 ssh_runner.go:195] Run: grep 192.168.72.59	control-plane.minikube.internal$ /etc/hosts
	I0704 00:00:43.341342   58668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.59	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:00:43.355964   58668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:00:43.500843   58668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:00:43.524631   58668 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033 for IP: 192.168.72.59
	I0704 00:00:43.524652   58668 certs.go:194] generating shared ca certs ...
	I0704 00:00:43.524671   58668 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:00:43.524848   58668 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:00:43.524902   58668 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:00:43.524912   58668 certs.go:256] generating profile certs ...
	I0704 00:00:43.524973   58668 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.key
	I0704 00:00:43.524990   58668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.crt with IP's: []
	I0704 00:00:43.619765   58668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.crt ...
	I0704 00:00:43.619806   58668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.crt: {Name:mk13943ef89de34563b29919cad0616fe1b722cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:00:43.620047   58668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.key ...
	I0704 00:00:43.620067   58668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.key: {Name:mkc6d8ee950b14185bbf145e473cc770da0d0701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:00:43.620172   58668 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key.03500654
	I0704 00:00:43.620197   58668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt.03500654 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.59]
	I0704 00:00:43.891835   58668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt.03500654 ...
	I0704 00:00:43.891893   58668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt.03500654: {Name:mkfe335fd2a0295f5a178250d5c91bcad947a780 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:00:43.892118   58668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key.03500654 ...
	I0704 00:00:43.892135   58668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key.03500654: {Name:mk06732d285a768ca53c049e45b3db597235096e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:00:43.892249   58668 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt.03500654 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt
	I0704 00:00:43.892354   58668 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key.03500654 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key
	I0704 00:00:43.892430   58668 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key
	I0704 00:00:43.892452   58668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.crt with IP's: []
	I0704 00:00:44.099004   58668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.crt ...
	I0704 00:00:44.099034   58668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.crt: {Name:mk36429fdd458e014e892ad0f7c7c76835412fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:00:44.099234   58668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key ...
	I0704 00:00:44.099250   58668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key: {Name:mke3a67ecf41e139d9d452f615a814e40df9a677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:00:44.099464   58668 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:00:44.099511   58668 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:00:44.099525   58668 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:00:44.099557   58668 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:00:44.099586   58668 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:00:44.099619   58668 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:00:44.099669   58668 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:00:44.100311   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:00:44.131421   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:00:44.160085   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:00:44.189585   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:00:44.221528   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0704 00:00:44.252058   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:00:44.317570   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:00:44.380008   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0704 00:00:44.425503   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:00:44.458892   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:00:44.503812   58668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:00:44.546054   58668 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
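The profile certificates generated and copied above can be cross-checked from the host. A sketch using openssl, with the host-side path copied from the scp lines above, confirming the apiserver cert carries the SANs logged earlier (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.72.59):

    # Sketch: list the SANs in the freshly minted apiserver certificate on the host.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt \
      | grep -A1 'Subject Alternative Name'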
	I0704 00:00:44.578740   58668 ssh_runner.go:195] Run: openssl version
	I0704 00:00:44.587478   58668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:00:44.613853   58668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:00:44.621728   58668 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:00:44.621798   58668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:00:44.630611   58668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:00:44.645688   58668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:00:44.660551   58668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:00:44.667386   58668 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:00:44.667446   58668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:00:44.674883   58668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:00:44.688293   58668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:00:44.702284   58668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:00:44.709447   58668 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:00:44.709527   58668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:00:44.718476   58668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:00:44.734843   58668 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:00:44.741247   58668 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0704 00:00:44.741307   58668 kubeadm.go:391] StartCluster: {Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:00:44.741403   58668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:00:44.741461   58668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:00:44.800262   58668 cri.go:89] found id: ""
	I0704 00:00:44.800349   58668 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0704 00:00:44.817664   58668 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:00:44.835091   58668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:00:44.852434   58668 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:00:44.852457   58668 kubeadm.go:156] found existing configuration files:
	
	I0704 00:00:44.852513   58668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:00:44.865473   58668 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:00:44.865546   58668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:00:44.879692   58668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:00:44.892084   58668 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:00:44.892149   58668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:00:44.905556   58668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:00:44.920377   58668 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:00:44.920449   58668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:00:44.936126   58668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:00:44.951110   58668 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:00:44.951174   58668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:00:44.965948   58668 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:00:45.320735   58668 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:02:44.086037   58668 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0704 00:02:44.087762   58668 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0704 00:02:44.087958   58668 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0704 00:02:44.088058   58668 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:02:44.088222   58668 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:02:44.088667   58668 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:02:44.088980   58668 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:02:44.089356   58668 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:02:44.091347   58668 out.go:204]   - Generating certificates and keys ...
	I0704 00:02:44.091456   58668 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:02:44.091545   58668 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:02:44.091650   58668 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0704 00:02:44.091735   58668 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0704 00:02:44.091830   58668 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0704 00:02:44.091923   58668 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0704 00:02:44.092010   58668 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0704 00:02:44.092184   58668 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-979033] and IPs [192.168.72.59 127.0.0.1 ::1]
	I0704 00:02:44.092264   58668 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0704 00:02:44.092405   58668 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-979033] and IPs [192.168.72.59 127.0.0.1 ::1]
	I0704 00:02:44.092492   58668 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0704 00:02:44.092573   58668 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0704 00:02:44.092626   58668 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0704 00:02:44.092719   58668 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:02:44.092794   58668 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:02:44.092873   58668 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:02:44.092969   58668 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:02:44.093048   58668 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:02:44.093176   58668 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:02:44.093298   58668 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:02:44.093365   58668 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:02:44.093454   58668 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:02:44.095936   58668 out.go:204]   - Booting up control plane ...
	I0704 00:02:44.096055   58668 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:02:44.096155   58668 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:02:44.096250   58668 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:02:44.096357   58668 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:02:44.096584   58668 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0704 00:02:44.096677   58668 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0704 00:02:44.096777   58668 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:02:44.096984   58668 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:02:44.097060   58668 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:02:44.097218   58668 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:02:44.097319   58668 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:02:44.097591   58668 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:02:44.097687   58668 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:02:44.097880   58668 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:02:44.097986   58668 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:02:44.098235   58668 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:02:44.098262   58668 kubeadm.go:309] 
	I0704 00:02:44.098326   58668 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0704 00:02:44.098388   58668 kubeadm.go:309] 		timed out waiting for the condition
	I0704 00:02:44.098397   58668 kubeadm.go:309] 
	I0704 00:02:44.098452   58668 kubeadm.go:309] 	This error is likely caused by:
	I0704 00:02:44.098508   58668 kubeadm.go:309] 		- The kubelet is not running
	I0704 00:02:44.098606   58668 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0704 00:02:44.098613   58668 kubeadm.go:309] 
	I0704 00:02:44.098700   58668 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0704 00:02:44.098729   58668 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0704 00:02:44.098760   58668 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0704 00:02:44.098767   58668 kubeadm.go:309] 
	I0704 00:02:44.098918   58668 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0704 00:02:44.099026   58668 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0704 00:02:44.099039   58668 kubeadm.go:309] 
	I0704 00:02:44.099153   58668 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0704 00:02:44.099266   58668 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0704 00:02:44.099361   58668 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0704 00:02:44.099458   58668 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0704 00:02:44.099530   58668 kubeadm.go:309] 
	W0704 00:02:44.099625   58668 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-979033] and IPs [192.168.72.59 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-979033] and IPs [192.168.72.59 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
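The failure above is the kubelet never answering on 127.0.0.1:10248 within kubeadm's 4m wait. The error text already names the useful checks; a sketch of running them against the guest from the host, with the profile name from this log and the commands as suggested by kubeadm:

    # Sketch: kubelet / container diagnostics on the old-k8s-version node.
    minikube ssh -p old-k8s-version-979033 -- 'sudo systemctl status kubelet --no-pager'
    minikube ssh -p old-k8s-version-979033 -- 'sudo journalctl -xeu kubelet --no-pager | tail -n 50'
    minikube ssh -p old-k8s-version-979033 -- \
      "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"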
	
	I0704 00:02:44.099675   58668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:02:46.186954   58668 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.087252943s)
	I0704 00:02:46.187042   58668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:02:46.202298   58668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:02:46.212983   58668 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:02:46.213006   58668 kubeadm.go:156] found existing configuration files:
	
	I0704 00:02:46.213069   58668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:02:46.223148   58668 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:02:46.223216   58668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:02:46.234137   58668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:02:46.245663   58668 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:02:46.245754   58668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:02:46.256829   58668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:02:46.267622   58668 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:02:46.267716   58668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:02:46.278718   58668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:02:46.289343   58668 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:02:46.289412   58668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:02:46.300276   58668 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:02:46.378890   58668 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0704 00:02:46.379058   58668 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:02:46.537994   58668 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:02:46.538133   58668 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:02:46.538264   58668 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:02:46.764122   58668 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:02:46.766061   58668 out.go:204]   - Generating certificates and keys ...
	I0704 00:02:46.766182   58668 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:02:46.766302   58668 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:02:46.766430   58668 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:02:46.766515   58668 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:02:46.766618   58668 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:02:46.766692   58668 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:02:46.766794   58668 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:02:46.766887   58668 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:02:46.766994   58668 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:02:46.767125   58668 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:02:46.767189   58668 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:02:46.767285   58668 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:02:46.948066   58668 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:02:47.032510   58668 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:02:47.144262   58668 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:02:47.231205   58668 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:02:47.246565   58668 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:02:47.247779   58668 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:02:47.247845   58668 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:02:47.401949   58668 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:02:47.404143   58668 out.go:204]   - Booting up control plane ...
	I0704 00:02:47.404275   58668 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:02:47.408887   58668 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:02:47.410003   58668 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:02:47.419072   58668 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:02:47.422047   58668 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0704 00:03:27.424912   58668 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0704 00:03:27.425045   58668 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:03:27.425219   58668 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:03:32.425983   58668 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:03:32.426257   58668 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:03:42.426668   58668 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:03:42.426893   58668 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:04:02.426487   58668 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:04:02.426722   58668 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:04:42.426352   58668 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:04:42.426585   58668 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:04:42.426602   58668 kubeadm.go:309] 
	I0704 00:04:42.426654   58668 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0704 00:04:42.426715   58668 kubeadm.go:309] 		timed out waiting for the condition
	I0704 00:04:42.426728   58668 kubeadm.go:309] 
	I0704 00:04:42.426777   58668 kubeadm.go:309] 	This error is likely caused by:
	I0704 00:04:42.426823   58668 kubeadm.go:309] 		- The kubelet is not running
	I0704 00:04:42.426947   58668 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0704 00:04:42.426960   58668 kubeadm.go:309] 
	I0704 00:04:42.427100   58668 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0704 00:04:42.427162   58668 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0704 00:04:42.427220   58668 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0704 00:04:42.427231   58668 kubeadm.go:309] 
	I0704 00:04:42.427376   58668 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0704 00:04:42.427496   58668 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0704 00:04:42.427508   58668 kubeadm.go:309] 
	I0704 00:04:42.427628   58668 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0704 00:04:42.427747   58668 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0704 00:04:42.427855   58668 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0704 00:04:42.427948   58668 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0704 00:04:42.427958   58668 kubeadm.go:309] 
	I0704 00:04:42.429690   58668 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:04:42.429810   58668 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0704 00:04:42.429959   58668 kubeadm.go:393] duration metric: took 3m57.688653134s to StartCluster
	I0704 00:04:42.429981   58668 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0704 00:04:42.430014   58668 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:04:42.430078   58668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:04:42.479399   58668 cri.go:89] found id: ""
	I0704 00:04:42.479428   58668 logs.go:276] 0 containers: []
	W0704 00:04:42.479438   58668 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:04:42.479445   58668 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:04:42.479510   58668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:04:42.515478   58668 cri.go:89] found id: ""
	I0704 00:04:42.515500   58668 logs.go:276] 0 containers: []
	W0704 00:04:42.515507   58668 logs.go:278] No container was found matching "etcd"
	I0704 00:04:42.515514   58668 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:04:42.515564   58668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:04:42.551390   58668 cri.go:89] found id: ""
	I0704 00:04:42.551427   58668 logs.go:276] 0 containers: []
	W0704 00:04:42.551437   58668 logs.go:278] No container was found matching "coredns"
	I0704 00:04:42.551444   58668 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:04:42.551509   58668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:04:42.587011   58668 cri.go:89] found id: ""
	I0704 00:04:42.587033   58668 logs.go:276] 0 containers: []
	W0704 00:04:42.587041   58668 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:04:42.587047   58668 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:04:42.587104   58668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:04:42.624127   58668 cri.go:89] found id: ""
	I0704 00:04:42.624153   58668 logs.go:276] 0 containers: []
	W0704 00:04:42.624162   58668 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:04:42.624168   58668 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:04:42.624214   58668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:04:42.659967   58668 cri.go:89] found id: ""
	I0704 00:04:42.659989   58668 logs.go:276] 0 containers: []
	W0704 00:04:42.659997   58668 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:04:42.660002   58668 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:04:42.660060   58668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:04:42.697633   58668 cri.go:89] found id: ""
	I0704 00:04:42.697656   58668 logs.go:276] 0 containers: []
	W0704 00:04:42.697663   58668 logs.go:278] No container was found matching "kindnet"
	I0704 00:04:42.697672   58668 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:04:42.697683   58668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:04:42.814556   58668 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:04:42.814579   58668 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:04:42.814590   58668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:04:42.914948   58668 logs.go:123] Gathering logs for container status ...
	I0704 00:04:42.914983   58668 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:04:42.970989   58668 logs.go:123] Gathering logs for kubelet ...
	I0704 00:04:42.971019   58668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:04:43.022403   58668 logs.go:123] Gathering logs for dmesg ...
	I0704 00:04:43.022437   58668 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0704 00:04:43.036189   58668 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0704 00:04:43.036226   58668 out.go:239] * 
	* 
	W0704 00:04:43.036287   58668 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0704 00:04:43.036312   58668 out.go:239] * 
	* 
	W0704 00:04:43.037153   58668 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0704 00:04:43.042522   58668 out.go:177] 
	W0704 00:04:43.043670   58668 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0704 00:04:43.043718   58668 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0704 00:04:43.043740   58668 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0704 00:04:43.045214   58668 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-979033 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-979033 -n old-k8s-version-979033
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-979033 -n old-k8s-version-979033: exit status 6 (223.215589ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0704 00:04:43.317923   61680 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-979033" does not appear in /home/jenkins/minikube-integration/18998-9396/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-979033" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (296.86s)
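
(Editor's note) The FirstStart failure above exits with K8S_KUBELET_NOT_RUNNING after kubeadm's wait-control-plane phase times out on the v1.20.0 control plane. The lines below are only a minimal sketch for re-running the diagnostics that the captured kubeadm/minikube output itself suggests, assuming the old-k8s-version-979033 VM from this run is still reachable (the profile name and all commands are taken from the log above; nothing here is new tooling):

	# On the affected node (e.g. via `minikube ssh -p old-k8s-version-979033`):
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 100
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then, for any failing container found:
	# crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# Retry suggested by minikube in the log above:
	minikube start -p old-k8s-version-979033 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
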

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (138.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-317739 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-317739 --alsologtostderr -v=3: exit status 82 (2m0.541191339s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-317739"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0704 00:02:49.487997   60927 out.go:291] Setting OutFile to fd 1 ...
	I0704 00:02:49.488388   60927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:02:49.488401   60927 out.go:304] Setting ErrFile to fd 2...
	I0704 00:02:49.488407   60927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:02:49.488644   60927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0704 00:02:49.488904   60927 out.go:298] Setting JSON to false
	I0704 00:02:49.489008   60927 mustload.go:65] Loading cluster: no-preload-317739
	I0704 00:02:49.489457   60927 config.go:182] Loaded profile config "no-preload-317739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:02:49.489564   60927 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/config.json ...
	I0704 00:02:49.489777   60927 mustload.go:65] Loading cluster: no-preload-317739
	I0704 00:02:49.489936   60927 config.go:182] Loaded profile config "no-preload-317739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:02:49.489972   60927 stop.go:39] StopHost: no-preload-317739
	I0704 00:02:49.490432   60927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:02:49.490491   60927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:02:49.505982   60927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35275
	I0704 00:02:49.506571   60927 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:02:49.507192   60927 main.go:141] libmachine: Using API Version  1
	I0704 00:02:49.507216   60927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:02:49.507678   60927 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:02:49.510447   60927 out.go:177] * Stopping node "no-preload-317739"  ...
	I0704 00:02:49.511921   60927 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0704 00:02:49.511973   60927 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:02:49.512315   60927 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0704 00:02:49.512361   60927 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:02:49.515908   60927 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:02:49.516376   60927 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:02:49.516409   60927 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:02:49.516641   60927 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:02:49.516842   60927 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:02:49.516999   60927 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:02:49.517169   60927 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:02:49.652161   60927 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0704 00:02:49.700547   60927 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0704 00:02:49.775681   60927 main.go:141] libmachine: Stopping "no-preload-317739"...
	I0704 00:02:49.775716   60927 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:02:49.777615   60927 main.go:141] libmachine: (no-preload-317739) Calling .Stop
	I0704 00:02:49.782208   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 0/120
	I0704 00:02:50.783692   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 1/120
	I0704 00:02:51.785316   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 2/120
	I0704 00:02:52.786708   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 3/120
	I0704 00:02:53.788443   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 4/120
	I0704 00:02:54.790355   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 5/120
	I0704 00:02:55.791963   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 6/120
	I0704 00:02:56.793296   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 7/120
	I0704 00:02:57.794744   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 8/120
	I0704 00:02:58.796145   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 9/120
	I0704 00:02:59.798473   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 10/120
	I0704 00:03:00.800088   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 11/120
	I0704 00:03:01.801396   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 12/120
	I0704 00:03:02.802786   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 13/120
	I0704 00:03:03.804089   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 14/120
	I0704 00:03:04.806172   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 15/120
	I0704 00:03:05.807525   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 16/120
	I0704 00:03:06.809077   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 17/120
	I0704 00:03:07.811430   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 18/120
	I0704 00:03:08.813758   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 19/120
	I0704 00:03:09.816734   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 20/120
	I0704 00:03:10.819260   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 21/120
	I0704 00:03:11.820566   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 22/120
	I0704 00:03:12.822388   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 23/120
	I0704 00:03:13.824322   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 24/120
	I0704 00:03:14.826255   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 25/120
	I0704 00:03:15.827754   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 26/120
	I0704 00:03:16.829874   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 27/120
	I0704 00:03:17.831534   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 28/120
	I0704 00:03:18.833056   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 29/120
	I0704 00:03:19.835385   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 30/120
	I0704 00:03:20.837542   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 31/120
	I0704 00:03:21.838957   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 32/120
	I0704 00:03:22.841066   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 33/120
	I0704 00:03:23.842366   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 34/120
	I0704 00:03:24.844373   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 35/120
	I0704 00:03:25.845673   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 36/120
	I0704 00:03:26.846880   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 37/120
	I0704 00:03:27.848328   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 38/120
	I0704 00:03:28.849756   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 39/120
	I0704 00:03:29.851144   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 40/120
	I0704 00:03:30.852599   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 41/120
	I0704 00:03:31.854055   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 42/120
	I0704 00:03:32.855503   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 43/120
	I0704 00:03:33.856977   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 44/120
	I0704 00:03:34.859124   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 45/120
	I0704 00:03:35.861212   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 46/120
	I0704 00:03:36.862462   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 47/120
	I0704 00:03:37.863764   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 48/120
	I0704 00:03:38.865018   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 49/120
	I0704 00:03:39.867134   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 50/120
	I0704 00:03:40.868793   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 51/120
	I0704 00:03:41.869918   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 52/120
	I0704 00:03:42.871369   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 53/120
	I0704 00:03:43.872699   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 54/120
	I0704 00:03:44.874904   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 55/120
	I0704 00:03:45.876351   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 56/120
	I0704 00:03:46.878426   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 57/120
	I0704 00:03:47.879613   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 58/120
	I0704 00:03:48.881047   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 59/120
	I0704 00:03:49.882405   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 60/120
	I0704 00:03:50.883632   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 61/120
	I0704 00:03:51.884836   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 62/120
	I0704 00:03:52.886208   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 63/120
	I0704 00:03:53.887477   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 64/120
	I0704 00:03:54.889467   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 65/120
	I0704 00:03:55.890786   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 66/120
	I0704 00:03:56.892172   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 67/120
	I0704 00:03:57.894222   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 68/120
	I0704 00:03:58.895899   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 69/120
	I0704 00:03:59.897101   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 70/120
	I0704 00:04:00.898930   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 71/120
	I0704 00:04:01.900251   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 72/120
	I0704 00:04:02.901599   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 73/120
	I0704 00:04:03.902968   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 74/120
	I0704 00:04:04.904922   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 75/120
	I0704 00:04:05.906327   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 76/120
	I0704 00:04:06.907665   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 77/120
	I0704 00:04:07.910187   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 78/120
	I0704 00:04:08.911497   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 79/120
	I0704 00:04:09.913659   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 80/120
	I0704 00:04:10.915050   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 81/120
	I0704 00:04:11.916455   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 82/120
	I0704 00:04:12.917823   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 83/120
	I0704 00:04:13.919203   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 84/120
	I0704 00:04:14.921226   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 85/120
	I0704 00:04:15.922743   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 86/120
	I0704 00:04:16.924487   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 87/120
	I0704 00:04:17.926586   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 88/120
	I0704 00:04:18.928101   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 89/120
	I0704 00:04:19.929428   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 90/120
	I0704 00:04:20.931645   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 91/120
	I0704 00:04:21.933147   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 92/120
	I0704 00:04:22.934491   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 93/120
	I0704 00:04:23.935850   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 94/120
	I0704 00:04:24.937341   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 95/120
	I0704 00:04:25.938793   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 96/120
	I0704 00:04:26.940185   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 97/120
	I0704 00:04:27.941597   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 98/120
	I0704 00:04:28.943113   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 99/120
	I0704 00:04:29.945502   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 100/120
	I0704 00:04:30.947089   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 101/120
	I0704 00:04:31.948419   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 102/120
	I0704 00:04:32.949816   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 103/120
	I0704 00:04:33.951116   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 104/120
	I0704 00:04:34.952901   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 105/120
	I0704 00:04:35.954265   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 106/120
	I0704 00:04:36.955441   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 107/120
	I0704 00:04:37.956808   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 108/120
	I0704 00:04:38.958275   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 109/120
	I0704 00:04:39.960288   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 110/120
	I0704 00:04:40.962275   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 111/120
	I0704 00:04:41.963857   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 112/120
	I0704 00:04:42.965333   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 113/120
	I0704 00:04:43.966766   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 114/120
	I0704 00:04:44.968653   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 115/120
	I0704 00:04:45.970045   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 116/120
	I0704 00:04:46.971397   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 117/120
	I0704 00:04:47.972916   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 118/120
	I0704 00:04:48.974483   60927 main.go:141] libmachine: (no-preload-317739) Waiting for machine to stop 119/120
	I0704 00:04:49.975236   60927 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0704 00:04:49.975295   60927 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0704 00:04:49.977242   60927 out.go:177] 
	W0704 00:04:49.978650   60927 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0704 00:04:49.978669   60927 out.go:239] * 
	* 
	W0704 00:04:49.981398   60927 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0704 00:04:49.982654   60927 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-317739 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-317739 -n no-preload-317739
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-317739 -n no-preload-317739: exit status 3 (18.427242627s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0704 00:05:08.412190   61832 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.109:22: connect: no route to host
	E0704 00:05:08.412211   61832 status.go:131] status error: NewSession: new client: new client: dial tcp 192.168.61.109:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-317739" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.97s)
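Reading of this failure (editor's note, not test output): the stop command asks the kvm2 driver to shut the guest down, polls the domain once per second for 120 ticks ("Waiting for machine to stop N/120"), and when libvirt still reports the domain as "Running" it gives up with GUEST_STOP_TIMEOUT and exit status 82. A hedged way to inspect the guest by hand on the CI host is sketched below; it assumes the libvirt domain carries the profile name (no-preload-317739), which is how the kvm2 driver normally names it, and it is not part of the test run itself.

	# Hypothetical manual follow-up on the CI host, not part of the recorded test run.
	# Assumes the libvirt domain is named after the minikube profile.
	virsh list --all
	virsh domstate no-preload-317739
	# Only if a graceful shutdown never completes, force the domain off:
	virsh destroy no-preload-317739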

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (138.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-687975 --alsologtostderr -v=3
E0704 00:03:57.357710   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-687975 --alsologtostderr -v=3: exit status 82 (2m0.525843066s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-687975"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0704 00:03:22.491327   61268 out.go:291] Setting OutFile to fd 1 ...
	I0704 00:03:22.491445   61268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:03:22.491456   61268 out.go:304] Setting ErrFile to fd 2...
	I0704 00:03:22.491461   61268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:03:22.491671   61268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0704 00:03:22.491936   61268 out.go:298] Setting JSON to false
	I0704 00:03:22.492026   61268 mustload.go:65] Loading cluster: embed-certs-687975
	I0704 00:03:22.492387   61268 config.go:182] Loaded profile config "embed-certs-687975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:03:22.492462   61268 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/config.json ...
	I0704 00:03:22.492630   61268 mustload.go:65] Loading cluster: embed-certs-687975
	I0704 00:03:22.492737   61268 config.go:182] Loaded profile config "embed-certs-687975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:03:22.492764   61268 stop.go:39] StopHost: embed-certs-687975
	I0704 00:03:22.493090   61268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:03:22.493147   61268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:03:22.510511   61268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39147
	I0704 00:03:22.510988   61268 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:03:22.511653   61268 main.go:141] libmachine: Using API Version  1
	I0704 00:03:22.511686   61268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:03:22.512144   61268 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:03:22.514704   61268 out.go:177] * Stopping node "embed-certs-687975"  ...
	I0704 00:03:22.516501   61268 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0704 00:03:22.516552   61268 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:03:22.516909   61268 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0704 00:03:22.516959   61268 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:03:22.520420   61268 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:03:22.520931   61268 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:01:47 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:03:22.520964   61268 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:03:22.521089   61268 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:03:22.521309   61268 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:03:22.521460   61268 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:03:22.521654   61268 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:03:22.636143   61268 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0704 00:03:22.704518   61268 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0704 00:03:22.763571   61268 main.go:141] libmachine: Stopping "embed-certs-687975"...
	I0704 00:03:22.763611   61268 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:03:22.765296   61268 main.go:141] libmachine: (embed-certs-687975) Calling .Stop
	I0704 00:03:22.769534   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 0/120
	I0704 00:03:23.770947   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 1/120
	I0704 00:03:24.772327   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 2/120
	I0704 00:03:25.774321   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 3/120
	I0704 00:03:26.775626   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 4/120
	I0704 00:03:27.777627   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 5/120
	I0704 00:03:28.778939   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 6/120
	I0704 00:03:29.780355   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 7/120
	I0704 00:03:30.782197   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 8/120
	I0704 00:03:31.783685   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 9/120
	I0704 00:03:32.785770   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 10/120
	I0704 00:03:33.787108   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 11/120
	I0704 00:03:34.788516   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 12/120
	I0704 00:03:35.789888   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 13/120
	I0704 00:03:36.791363   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 14/120
	I0704 00:03:37.793444   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 15/120
	I0704 00:03:38.795096   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 16/120
	I0704 00:03:39.796733   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 17/120
	I0704 00:03:40.798494   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 18/120
	I0704 00:03:41.799718   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 19/120
	I0704 00:03:42.801878   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 20/120
	I0704 00:03:43.804216   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 21/120
	I0704 00:03:44.805463   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 22/120
	I0704 00:03:45.807621   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 23/120
	I0704 00:03:46.809840   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 24/120
	I0704 00:03:47.811766   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 25/120
	I0704 00:03:48.813380   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 26/120
	I0704 00:03:49.815013   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 27/120
	I0704 00:03:50.816380   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 28/120
	I0704 00:03:51.817755   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 29/120
	I0704 00:03:52.819869   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 30/120
	I0704 00:03:53.821374   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 31/120
	I0704 00:03:54.822674   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 32/120
	I0704 00:03:55.824149   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 33/120
	I0704 00:03:56.825630   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 34/120
	I0704 00:03:57.827686   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 35/120
	I0704 00:03:58.830037   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 36/120
	I0704 00:03:59.831589   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 37/120
	I0704 00:04:00.833015   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 38/120
	I0704 00:04:01.834509   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 39/120
	I0704 00:04:02.836611   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 40/120
	I0704 00:04:03.838551   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 41/120
	I0704 00:04:04.839911   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 42/120
	I0704 00:04:05.841339   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 43/120
	I0704 00:04:06.842760   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 44/120
	I0704 00:04:07.844954   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 45/120
	I0704 00:04:08.846987   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 46/120
	I0704 00:04:09.848527   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 47/120
	I0704 00:04:10.849890   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 48/120
	I0704 00:04:11.851512   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 49/120
	I0704 00:04:12.853840   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 50/120
	I0704 00:04:13.856135   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 51/120
	I0704 00:04:14.857860   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 52/120
	I0704 00:04:15.860056   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 53/120
	I0704 00:04:16.861494   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 54/120
	I0704 00:04:17.863149   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 55/120
	I0704 00:04:18.864765   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 56/120
	I0704 00:04:19.866347   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 57/120
	I0704 00:04:20.867985   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 58/120
	I0704 00:04:21.869720   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 59/120
	I0704 00:04:22.871522   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 60/120
	I0704 00:04:23.873027   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 61/120
	I0704 00:04:24.874295   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 62/120
	I0704 00:04:25.875582   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 63/120
	I0704 00:04:26.876929   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 64/120
	I0704 00:04:27.878918   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 65/120
	I0704 00:04:28.880316   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 66/120
	I0704 00:04:29.882465   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 67/120
	I0704 00:04:30.884418   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 68/120
	I0704 00:04:31.885762   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 69/120
	I0704 00:04:32.887814   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 70/120
	I0704 00:04:33.889240   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 71/120
	I0704 00:04:34.890612   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 72/120
	I0704 00:04:35.892312   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 73/120
	I0704 00:04:36.893619   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 74/120
	I0704 00:04:37.895686   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 75/120
	I0704 00:04:38.897200   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 76/120
	I0704 00:04:39.898549   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 77/120
	I0704 00:04:40.900031   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 78/120
	I0704 00:04:41.901363   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 79/120
	I0704 00:04:42.903601   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 80/120
	I0704 00:04:43.904884   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 81/120
	I0704 00:04:44.906323   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 82/120
	I0704 00:04:45.907592   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 83/120
	I0704 00:04:46.908892   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 84/120
	I0704 00:04:47.910762   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 85/120
	I0704 00:04:48.912021   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 86/120
	I0704 00:04:49.913227   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 87/120
	I0704 00:04:50.914835   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 88/120
	I0704 00:04:51.916098   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 89/120
	I0704 00:04:52.918266   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 90/120
	I0704 00:04:53.920340   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 91/120
	I0704 00:04:54.921636   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 92/120
	I0704 00:04:55.922879   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 93/120
	I0704 00:04:56.924309   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 94/120
	I0704 00:04:57.926388   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 95/120
	I0704 00:04:58.927809   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 96/120
	I0704 00:04:59.929164   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 97/120
	I0704 00:05:00.930434   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 98/120
	I0704 00:05:01.931943   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 99/120
	I0704 00:05:02.934180   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 100/120
	I0704 00:05:03.935560   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 101/120
	I0704 00:05:04.936987   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 102/120
	I0704 00:05:05.938436   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 103/120
	I0704 00:05:06.939838   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 104/120
	I0704 00:05:07.941879   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 105/120
	I0704 00:05:08.943153   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 106/120
	I0704 00:05:09.944849   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 107/120
	I0704 00:05:10.946649   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 108/120
	I0704 00:05:11.948142   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 109/120
	I0704 00:05:12.950457   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 110/120
	I0704 00:05:13.951822   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 111/120
	I0704 00:05:14.953288   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 112/120
	I0704 00:05:15.954725   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 113/120
	I0704 00:05:16.956229   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 114/120
	I0704 00:05:17.958366   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 115/120
	I0704 00:05:18.959973   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 116/120
	I0704 00:05:19.961657   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 117/120
	I0704 00:05:20.963175   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 118/120
	I0704 00:05:21.964549   61268 main.go:141] libmachine: (embed-certs-687975) Waiting for machine to stop 119/120
	I0704 00:05:22.965021   61268 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0704 00:05:22.965092   61268 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0704 00:05:22.967305   61268 out.go:177] 
	W0704 00:05:22.968798   61268 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0704 00:05:22.968818   61268 out.go:239] * 
	* 
	W0704 00:05:22.971377   61268 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0704 00:05:22.972881   61268 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-687975 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-687975 -n embed-certs-687975
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-687975 -n embed-certs-687975: exit status 3 (18.461247846s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0704 00:05:41.436221   62100 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.213:22: connect: no route to host
	E0704 00:05:41.436249   62100 status.go:131] status error: NewSession: new client: new client: dial tcp 192.168.39.213:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-687975" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.99s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-995404 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-995404 --alsologtostderr -v=3: exit status 82 (2m0.512908381s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-995404"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0704 00:04:30.981017   61610 out.go:291] Setting OutFile to fd 1 ...
	I0704 00:04:30.981264   61610 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:04:30.981273   61610 out.go:304] Setting ErrFile to fd 2...
	I0704 00:04:30.981277   61610 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:04:30.981454   61610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0704 00:04:30.981684   61610 out.go:298] Setting JSON to false
	I0704 00:04:30.981764   61610 mustload.go:65] Loading cluster: default-k8s-diff-port-995404
	I0704 00:04:30.982091   61610 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:04:30.982190   61610 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/config.json ...
	I0704 00:04:30.982368   61610 mustload.go:65] Loading cluster: default-k8s-diff-port-995404
	I0704 00:04:30.982475   61610 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:04:30.982512   61610 stop.go:39] StopHost: default-k8s-diff-port-995404
	I0704 00:04:30.982878   61610 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:04:30.982929   61610 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:04:30.998682   61610 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42777
	I0704 00:04:30.999164   61610 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:04:30.999820   61610 main.go:141] libmachine: Using API Version  1
	I0704 00:04:30.999849   61610 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:04:31.000234   61610 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:04:31.002498   61610 out.go:177] * Stopping node "default-k8s-diff-port-995404"  ...
	I0704 00:04:31.004094   61610 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0704 00:04:31.004122   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:04:31.004404   61610 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0704 00:04:31.004429   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:04:31.007393   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:04:31.007782   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:04:31.007805   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:04:31.007966   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:04:31.008167   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:04:31.008330   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:04:31.008464   61610 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:04:31.108452   61610 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0704 00:04:31.173909   61610 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0704 00:04:31.240830   61610 main.go:141] libmachine: Stopping "default-k8s-diff-port-995404"...
	I0704 00:04:31.240871   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:04:31.242481   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Stop
	I0704 00:04:31.246771   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 0/120
	I0704 00:04:32.248186   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 1/120
	I0704 00:04:33.249874   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 2/120
	I0704 00:04:34.251249   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 3/120
	I0704 00:04:35.252650   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 4/120
	I0704 00:04:36.254626   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 5/120
	I0704 00:04:37.255953   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 6/120
	I0704 00:04:38.257409   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 7/120
	I0704 00:04:39.258891   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 8/120
	I0704 00:04:40.260263   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 9/120
	I0704 00:04:41.262671   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 10/120
	I0704 00:04:42.264010   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 11/120
	I0704 00:04:43.265331   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 12/120
	I0704 00:04:44.266584   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 13/120
	I0704 00:04:45.267969   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 14/120
	I0704 00:04:46.269967   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 15/120
	I0704 00:04:47.271268   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 16/120
	I0704 00:04:48.272747   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 17/120
	I0704 00:04:49.274332   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 18/120
	I0704 00:04:50.275550   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 19/120
	I0704 00:04:51.277918   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 20/120
	I0704 00:04:52.279354   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 21/120
	I0704 00:04:53.280901   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 22/120
	I0704 00:04:54.282216   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 23/120
	I0704 00:04:55.283759   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 24/120
	I0704 00:04:56.285661   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 25/120
	I0704 00:04:57.287053   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 26/120
	I0704 00:04:58.288609   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 27/120
	I0704 00:04:59.289892   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 28/120
	I0704 00:05:00.291642   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 29/120
	I0704 00:05:01.292927   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 30/120
	I0704 00:05:02.294424   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 31/120
	I0704 00:05:03.295907   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 32/120
	I0704 00:05:04.297394   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 33/120
	I0704 00:05:05.299126   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 34/120
	I0704 00:05:06.301161   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 35/120
	I0704 00:05:07.302580   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 36/120
	I0704 00:05:08.304100   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 37/120
	I0704 00:05:09.305549   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 38/120
	I0704 00:05:10.307028   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 39/120
	I0704 00:05:11.308373   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 40/120
	I0704 00:05:12.310583   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 41/120
	I0704 00:05:13.312182   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 42/120
	I0704 00:05:14.313833   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 43/120
	I0704 00:05:15.315258   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 44/120
	I0704 00:05:16.317498   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 45/120
	I0704 00:05:17.318986   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 46/120
	I0704 00:05:18.320570   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 47/120
	I0704 00:05:19.321933   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 48/120
	I0704 00:05:20.323832   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 49/120
	I0704 00:05:21.326132   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 50/120
	I0704 00:05:22.327641   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 51/120
	I0704 00:05:23.328959   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 52/120
	I0704 00:05:24.330391   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 53/120
	I0704 00:05:25.331765   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 54/120
	I0704 00:05:26.333831   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 55/120
	I0704 00:05:27.335410   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 56/120
	I0704 00:05:28.336894   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 57/120
	I0704 00:05:29.338228   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 58/120
	I0704 00:05:30.339896   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 59/120
	I0704 00:05:31.342279   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 60/120
	I0704 00:05:32.343597   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 61/120
	I0704 00:05:33.345150   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 62/120
	I0704 00:05:34.346497   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 63/120
	I0704 00:05:35.347993   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 64/120
	I0704 00:05:36.350053   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 65/120
	I0704 00:05:37.351610   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 66/120
	I0704 00:05:38.353084   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 67/120
	I0704 00:05:39.354344   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 68/120
	I0704 00:05:40.355785   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 69/120
	I0704 00:05:41.358104   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 70/120
	I0704 00:05:42.359450   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 71/120
	I0704 00:05:43.360994   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 72/120
	I0704 00:05:44.362610   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 73/120
	I0704 00:05:45.364295   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 74/120
	I0704 00:05:46.366545   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 75/120
	I0704 00:05:47.367825   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 76/120
	I0704 00:05:48.369225   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 77/120
	I0704 00:05:49.370870   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 78/120
	I0704 00:05:50.372379   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 79/120
	I0704 00:05:51.374489   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 80/120
	I0704 00:05:52.376335   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 81/120
	I0704 00:05:53.378435   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 82/120
	I0704 00:05:54.379848   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 83/120
	I0704 00:05:55.381496   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 84/120
	I0704 00:05:56.383812   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 85/120
	I0704 00:05:57.385241   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 86/120
	I0704 00:05:58.386815   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 87/120
	I0704 00:05:59.388427   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 88/120
	I0704 00:06:00.390024   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 89/120
	I0704 00:06:01.392382   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 90/120
	I0704 00:06:02.394984   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 91/120
	I0704 00:06:03.396559   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 92/120
	I0704 00:06:04.398095   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 93/120
	I0704 00:06:05.399574   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 94/120
	I0704 00:06:06.401748   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 95/120
	I0704 00:06:07.403351   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 96/120
	I0704 00:06:08.404933   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 97/120
	I0704 00:06:09.406504   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 98/120
	I0704 00:06:10.407902   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 99/120
	I0704 00:06:11.410275   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 100/120
	I0704 00:06:12.412005   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 101/120
	I0704 00:06:13.413528   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 102/120
	I0704 00:06:14.415008   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 103/120
	I0704 00:06:15.416530   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 104/120
	I0704 00:06:16.418454   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 105/120
	I0704 00:06:17.420025   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 106/120
	I0704 00:06:18.421589   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 107/120
	I0704 00:06:19.423054   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 108/120
	I0704 00:06:20.424581   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 109/120
	I0704 00:06:21.427124   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 110/120
	I0704 00:06:22.428691   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 111/120
	I0704 00:06:23.430168   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 112/120
	I0704 00:06:24.431787   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 113/120
	I0704 00:06:25.433661   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 114/120
	I0704 00:06:26.435842   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 115/120
	I0704 00:06:27.437504   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 116/120
	I0704 00:06:28.438915   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 117/120
	I0704 00:06:29.440595   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 118/120
	I0704 00:06:30.442433   61610 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for machine to stop 119/120
	I0704 00:06:31.443132   61610 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0704 00:06:31.443195   61610 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0704 00:06:31.445263   61610 out.go:177] 
	W0704 00:06:31.446539   61610 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0704 00:06:31.446552   61610 out.go:239] * 
	* 
	W0704 00:06:31.449040   61610 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0704 00:06:31.451136   61610 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-995404 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-995404 -n default-k8s-diff-port-995404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-995404 -n default-k8s-diff-port-995404: exit status 3 (18.591459286s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0704 00:06:50.044247   62574 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host
	E0704 00:06:50.044281   62574 status.go:131] status error: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-995404" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.11s)
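Reading of the post-mortem probes (editor's note, not test output): after each stop timeout the `status` helper exits 3 because SSH to the guest on port 22 returns "no route to host", i.e. the VM is no longer reachable over its libvirt network even though libvirt reported it as "Running" throughout the stop loop. A hedged way to reproduce that probe by hand, using the guest IP printed in the stderr above (192.168.50.164 for this profile), is sketched below.

	# Hypothetical manual probe; the minikube flag and netcat options are standard ones.
	out/minikube-linux-amd64 status -p default-k8s-diff-port-995404 --alsologtostderr
	# Check raw SSH reachability of the guest address taken from the log:
	nc -vz -w 3 192.168.50.164 22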

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-979033 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-979033 create -f testdata/busybox.yaml: exit status 1 (44.596394ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-979033" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-979033 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-979033 -n old-k8s-version-979033
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-979033 -n old-k8s-version-979033: exit status 6 (234.661787ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0704 00:04:43.595517   61720 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-979033" does not appear in /home/jenkins/minikube-integration/18998-9396/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-979033" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-979033 -n old-k8s-version-979033
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-979033 -n old-k8s-version-979033: exit status 6 (215.415066ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0704 00:04:43.815319   61750 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-979033" does not appear in /home/jenkins/minikube-integration/18998-9396/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-979033" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.50s)
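The DeployApp failure above is a knock-on effect: the old-k8s-version-979033 profile never came back up, so its context is missing from the kubeconfig and every `kubectl --context old-k8s-version-979033 ...` call fails immediately, while the status output itself suggests `minikube update-context`. A small sketch of that check, assuming the kubeconfig path shown in the log and standard kubectl/minikube subcommands; the checker is hypothetical, not harness code.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        profile := "old-k8s-version-979033"

        // List the context names present in the active kubeconfig; in the failing
        // run this profile does not appear in
        // /home/jenkins/minikube-integration/18998-9396/kubeconfig.
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
        if err != nil {
            fmt.Println("kubectl failed:", err)
            return
        }
        fmt.Printf("contexts:\n%s", out)

        // Refresh the stale entry, as the status warning recommends.
        if err := exec.Command("out/minikube-linux-amd64", "update-context", "-p", profile).Run(); err != nil {
            fmt.Println("update-context failed:", err)
        }
    }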

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (106.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-979033 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-979033 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m46.607982662s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-979033 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-979033 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-979033 describe deploy/metrics-server -n kube-system: exit status 1 (44.854953ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-979033" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-979033 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-979033 -n old-k8s-version-979033
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-979033 -n old-k8s-version-979033: exit status 6 (213.533097ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0704 00:06:30.680791   62508 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-979033" does not appear in /home/jenkins/minikube-integration/18998-9396/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-979033" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (106.87s)
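Here the addon enable gets further than the kubeconfig-based kubectl calls: minikube SSHes into the VM and applies the metrics-server manifests with the in-VM kubectl, which then fails because nothing is answering on the apiserver port ("connection to the server localhost:8443 was refused"). A quick reachability probe from the host is sketched below, assuming the node IP 192.168.72.59 and APIServerPort 8443 recorded in this profile's cluster config later in the report; the probe is an illustrative assumption, not something the harness does.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Probe the apiserver's /healthz endpoint directly; certificate checks are
        // skipped because we only care whether anything is listening at all.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.72.59:8443/healthz")
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // corresponds to the "connection refused" above
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver responded:", resp.Status)
    }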

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-317739 -n no-preload-317739
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-317739 -n no-preload-317739: exit status 3 (3.167956266s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0704 00:05:11.580253   61932 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.109:22: connect: no route to host
	E0704 00:05:11.580276   61932 status.go:131] status error: NewSession: new client: new client: dial tcp 192.168.61.109:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-317739 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-317739 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152719501s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.109:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-317739 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-317739 -n no-preload-317739
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-317739 -n no-preload-317739: exit status 3 (3.063129139s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0704 00:05:20.796309   62011 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.109:22: connect: no route to host
	E0704 00:05:20.796336   62011 status.go:131] status error: NewSession: new client: new client: dial tcp 192.168.61.109:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-317739" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
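This EnableAddonAfterStop failure shares a root cause with the Stop failures: the VM at 192.168.61.109 is unreachable, so both the Host status check and the `addons enable dashboard` pre-check ("check paused: ... crictl list") die on the same SSH dial. A plain TCP probe of the node's SSH port reproduces the symptom; the address is taken from the log, the probe itself is only a sketch.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the SSH port the minikube commands are trying to reach; "no route to
        // host" here matches the errors reported by status and addons enable above.
        addr := "192.168.61.109:22"
        conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
        if err != nil {
            fmt.Println("ssh port unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("ssh port reachable at", addr)
    }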

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-687975 -n embed-certs-687975
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-687975 -n embed-certs-687975: exit status 3 (3.167845788s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0704 00:05:44.604233   62198 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.213:22: connect: no route to host
	E0704 00:05:44.604257   62198 status.go:131] status error: NewSession: new client: new client: dial tcp 192.168.39.213:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-687975 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-687975 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152848678s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.213:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-687975 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-687975 -n embed-certs-687975
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-687975 -n embed-certs-687975: exit status 3 (3.062901015s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0704 00:05:53.820214   62261 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.213:22: connect: no route to host
	E0704 00:05:53.820238   62261 status.go:131] status error: NewSession: new client: new client: dial tcp 192.168.39.213:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-687975" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
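The embed-certs failure above follows the same pattern as the no-preload case: identical command sequence and exit codes, with only the node address differing (192.168.39.213:22), so the SSH-port probe sketched earlier applies unchanged.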

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (736.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-979033 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-979033 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m12.519979473s)

                                                
                                                
-- stdout --
	* [old-k8s-version-979033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18998
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-979033" primary control-plane node in "old-k8s-version-979033" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-979033" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0704 00:06:33.256371   62670 out.go:291] Setting OutFile to fd 1 ...
	I0704 00:06:33.257024   62670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:06:33.257073   62670 out.go:304] Setting ErrFile to fd 2...
	I0704 00:06:33.257092   62670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:06:33.257536   62670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0704 00:06:33.258346   62670 out.go:298] Setting JSON to false
	I0704 00:06:33.259262   62670 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6533,"bootTime":1720045060,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0704 00:06:33.259330   62670 start.go:139] virtualization: kvm guest
	I0704 00:06:33.261222   62670 out.go:177] * [old-k8s-version-979033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0704 00:06:33.262806   62670 notify.go:220] Checking for updates...
	I0704 00:06:33.262815   62670 out.go:177]   - MINIKUBE_LOCATION=18998
	I0704 00:06:33.264133   62670 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0704 00:06:33.265366   62670 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:06:33.266680   62670 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0704 00:06:33.267970   62670 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0704 00:06:33.269136   62670 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0704 00:06:33.270696   62670 config.go:182] Loaded profile config "old-k8s-version-979033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0704 00:06:33.271101   62670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:06:33.271167   62670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:06:33.286235   62670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45245
	I0704 00:06:33.286613   62670 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:06:33.287133   62670 main.go:141] libmachine: Using API Version  1
	I0704 00:06:33.287153   62670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:06:33.287545   62670 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:06:33.287730   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:06:33.289614   62670 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0704 00:06:33.290915   62670 driver.go:392] Setting default libvirt URI to qemu:///system
	I0704 00:06:33.291219   62670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:06:33.291261   62670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:06:33.306163   62670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I0704 00:06:33.306524   62670 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:06:33.306944   62670 main.go:141] libmachine: Using API Version  1
	I0704 00:06:33.306972   62670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:06:33.307299   62670 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:06:33.307480   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:06:33.344523   62670 out.go:177] * Using the kvm2 driver based on existing profile
	I0704 00:06:33.345727   62670 start.go:297] selected driver: kvm2
	I0704 00:06:33.345751   62670 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:06:33.345863   62670 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0704 00:06:33.346525   62670 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:06:33.346588   62670 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0704 00:06:33.362375   62670 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0704 00:06:33.362750   62670 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:06:33.362818   62670 cni.go:84] Creating CNI manager for ""
	I0704 00:06:33.362831   62670 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:06:33.362875   62670 start.go:340] cluster config:
	{Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:06:33.362983   62670 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:06:33.364802   62670 out.go:177] * Starting "old-k8s-version-979033" primary control-plane node in "old-k8s-version-979033" cluster
	I0704 00:06:33.365951   62670 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0704 00:06:33.365988   62670 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0704 00:06:33.365995   62670 cache.go:56] Caching tarball of preloaded images
	I0704 00:06:33.366102   62670 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0704 00:06:33.366113   62670 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0704 00:06:33.366209   62670 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/config.json ...
	I0704 00:06:33.366394   62670 start.go:360] acquireMachinesLock for old-k8s-version-979033: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0704 00:10:19.141009   62670 start.go:364] duration metric: took 3m45.774576164s to acquireMachinesLock for "old-k8s-version-979033"
	I0704 00:10:19.141068   62670 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:10:19.141115   62670 fix.go:54] fixHost starting: 
	I0704 00:10:19.141561   62670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:19.141591   62670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:19.159844   62670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43373
	I0704 00:10:19.160353   62670 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:19.160945   62670 main.go:141] libmachine: Using API Version  1
	I0704 00:10:19.160971   62670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:19.161347   62670 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:19.161640   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:19.161799   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetState
	I0704 00:10:19.163575   62670 fix.go:112] recreateIfNeeded on old-k8s-version-979033: state=Stopped err=<nil>
	I0704 00:10:19.163597   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	W0704 00:10:19.163753   62670 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:10:19.165906   62670 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-979033" ...
	I0704 00:10:19.167360   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .Start
	I0704 00:10:19.167575   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring networks are active...
	I0704 00:10:19.168591   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring network default is active
	I0704 00:10:19.169064   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring network mk-old-k8s-version-979033 is active
	I0704 00:10:19.169488   62670 main.go:141] libmachine: (old-k8s-version-979033) Getting domain xml...
	I0704 00:10:19.170309   62670 main.go:141] libmachine: (old-k8s-version-979033) Creating domain...
	I0704 00:10:20.487278   62670 main.go:141] libmachine: (old-k8s-version-979033) Waiting to get IP...
	I0704 00:10:20.488195   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.488679   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.488751   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.488643   63677 retry.go:31] will retry after 227.362639ms: waiting for machine to come up
	I0704 00:10:20.718322   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.718794   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.718820   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.718766   63677 retry.go:31] will retry after 266.291784ms: waiting for machine to come up
	I0704 00:10:20.986238   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.986779   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.986805   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.986726   63677 retry.go:31] will retry after 308.137887ms: waiting for machine to come up
	I0704 00:10:21.296450   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:21.297052   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:21.297085   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:21.297001   63677 retry.go:31] will retry after 400.976495ms: waiting for machine to come up
	I0704 00:10:21.699758   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:21.700266   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:21.700299   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:21.700227   63677 retry.go:31] will retry after 464.329709ms: waiting for machine to come up
	I0704 00:10:22.165905   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:22.166452   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:22.166482   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:22.166393   63677 retry.go:31] will retry after 652.357119ms: waiting for machine to come up
	I0704 00:10:22.820302   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:22.820777   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:22.820800   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:22.820725   63677 retry.go:31] will retry after 835.974316ms: waiting for machine to come up
	I0704 00:10:23.658976   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:23.659482   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:23.659509   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:23.659432   63677 retry.go:31] will retry after 1.244693887s: waiting for machine to come up
	I0704 00:10:24.906359   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:24.906769   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:24.906801   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:24.906733   63677 retry.go:31] will retry after 1.212336933s: waiting for machine to come up
	I0704 00:10:26.121130   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:26.121655   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:26.121684   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:26.121599   63677 retry.go:31] will retry after 1.622791006s: waiting for machine to come up
	I0704 00:10:27.745848   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:27.746399   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:27.746427   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:27.746349   63677 retry.go:31] will retry after 2.596558781s: waiting for machine to come up
	I0704 00:10:30.344595   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:30.345134   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:30.345157   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:30.345089   63677 retry.go:31] will retry after 2.372913839s: waiting for machine to come up
	I0704 00:10:32.719441   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:32.719866   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:32.719910   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:32.719827   63677 retry.go:31] will retry after 3.651406896s: waiting for machine to come up
	I0704 00:10:36.373099   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.373583   62670 main.go:141] libmachine: (old-k8s-version-979033) Found IP for machine: 192.168.72.59
	I0704 00:10:36.373615   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has current primary IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.373628   62670 main.go:141] libmachine: (old-k8s-version-979033) Reserving static IP address...
	I0704 00:10:36.374030   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "old-k8s-version-979033", mac: "52:54:00:af:98:c8", ip: "192.168.72.59"} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.374068   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | skip adding static IP to network mk-old-k8s-version-979033 - found existing host DHCP lease matching {name: "old-k8s-version-979033", mac: "52:54:00:af:98:c8", ip: "192.168.72.59"}
	I0704 00:10:36.374082   62670 main.go:141] libmachine: (old-k8s-version-979033) Reserved static IP address: 192.168.72.59
	I0704 00:10:36.374113   62670 main.go:141] libmachine: (old-k8s-version-979033) Waiting for SSH to be available...
	I0704 00:10:36.374130   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Getting to WaitForSSH function...
	I0704 00:10:36.376363   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.376711   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.376747   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.376945   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using SSH client type: external
	I0704 00:10:36.376975   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa (-rw-------)
	I0704 00:10:36.377011   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:10:36.377024   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | About to run SSH command:
	I0704 00:10:36.377062   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | exit 0
	I0704 00:10:36.504300   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | SSH cmd err, output: <nil>: 
	I0704 00:10:36.504681   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetConfigRaw
	I0704 00:10:36.505301   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:36.507826   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.508270   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.508297   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.508605   62670 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/config.json ...
	I0704 00:10:36.508844   62670 machine.go:94] provisionDockerMachine start ...
	I0704 00:10:36.508865   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:36.509148   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.511475   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.511792   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.511815   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.512017   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.512205   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.512360   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.512502   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.512667   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.512836   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.512846   62670 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:10:36.616643   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:10:36.616673   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.616962   62670 buildroot.go:166] provisioning hostname "old-k8s-version-979033"
	I0704 00:10:36.616992   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.617185   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.620028   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.620368   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.620387   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.620727   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.620923   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.621106   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.621240   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.621435   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.621601   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.621613   62670 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-979033 && echo "old-k8s-version-979033" | sudo tee /etc/hostname
	I0704 00:10:36.739589   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-979033
	
	I0704 00:10:36.739611   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.742386   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.742840   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.742867   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.743119   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.743348   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.743578   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.743745   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.743925   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.744142   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.744169   62670 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-979033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-979033/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-979033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:10:36.861561   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:10:36.861592   62670 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:10:36.861621   62670 buildroot.go:174] setting up certificates
	I0704 00:10:36.861632   62670 provision.go:84] configureAuth start
	I0704 00:10:36.861644   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.861928   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:36.864490   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.864975   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.865039   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.865137   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.867752   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.868268   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.868302   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.868483   62670 provision.go:143] copyHostCerts
	I0704 00:10:36.868547   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:10:36.868560   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:10:36.868613   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:10:36.868747   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:10:36.868756   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:10:36.868783   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:10:36.868840   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:10:36.868846   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:10:36.868863   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:10:36.868913   62670 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-979033 san=[127.0.0.1 192.168.72.59 localhost minikube old-k8s-version-979033]
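For context, the provision step above generates a server certificate signed by the shared minikube CA and embedding the SAN list shown in the log line. Below is a minimal, purely illustrative sketch of that kind of issuance with Go's crypto/x509; it uses a throwaway self-signed CA and hard-codes the SANs from the log, and is not the actual provision.go implementation.

```go
// cert_sketch.go: illustrative only. Generates a CA-signed server certificate
// with SANs like those logged above. The self-signed CA and hard-coded names
// are assumptions for the example; minikube reuses ca.pem/ca-key.pem instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs seen in the provision.go line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-979033"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-979033"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.59")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```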
	I0704 00:10:37.072741   62670 provision.go:177] copyRemoteCerts
	I0704 00:10:37.072795   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:10:37.072821   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.075592   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.075937   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.075968   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.076159   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.076362   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.076541   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.076671   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.162730   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:10:37.194232   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0704 00:10:37.220644   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0704 00:10:37.246298   62670 provision.go:87] duration metric: took 384.653259ms to configureAuth
	I0704 00:10:37.246327   62670 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:10:37.246529   62670 config.go:182] Loaded profile config "old-k8s-version-979033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0704 00:10:37.246594   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.249101   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.249491   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.249523   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.249774   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.249960   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.250140   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.250350   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.250591   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:37.250831   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:37.250856   62670 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:10:37.522551   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:10:37.522602   62670 machine.go:97] duration metric: took 1.013718943s to provisionDockerMachine
	I0704 00:10:37.522616   62670 start.go:293] postStartSetup for "old-k8s-version-979033" (driver="kvm2")
	I0704 00:10:37.522626   62670 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:10:37.522642   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.522965   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:10:37.522992   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.525421   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.525718   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.525745   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.525988   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.526250   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.526428   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.526668   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.607305   62670 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:10:37.612104   62670 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:10:37.612128   62670 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:10:37.612222   62670 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:10:37.612326   62670 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:10:37.612436   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:10:37.623597   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:37.650275   62670 start.go:296] duration metric: took 127.644599ms for postStartSetup
	I0704 00:10:37.650314   62670 fix.go:56] duration metric: took 18.50923577s for fixHost
	I0704 00:10:37.650333   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.652926   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.653270   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.653298   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.653433   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.653650   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.653836   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.653975   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.654124   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:37.654344   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:37.654356   62670 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0704 00:10:37.761309   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051837.729680185
	
	I0704 00:10:37.761333   62670 fix.go:216] guest clock: 1720051837.729680185
	I0704 00:10:37.761342   62670 fix.go:229] Guest: 2024-07-04 00:10:37.729680185 +0000 UTC Remote: 2024-07-04 00:10:37.650317632 +0000 UTC m=+244.428517044 (delta=79.362553ms)
	I0704 00:10:37.761363   62670 fix.go:200] guest clock delta is within tolerance: 79.362553ms
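The three fix.go lines above compare the guest clock (read via `date +%s.%N` over SSH) against the host-side timestamp and accept the drift when it stays within a tolerance. A small sketch of that comparison, using the two timestamps from the log and an assumed 1-second threshold:

```go
// clockskew_sketch.go: illustrative only. Compares the guest clock string
// returned by "date +%s.%N" against the host-side timestamp and checks the
// delta against an assumed tolerance (the real threshold lives in fix.go).
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns a string like "1720051837.729680185" into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1720051837.729680185")
	if err != nil {
		panic(err)
	}
	remote, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2024-07-04 00:10:37.650317632 +0000 UTC")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 1 * time.Second // assumed threshold for the sketch
	fmt.Printf("guest clock delta: %v\n", delta)
	if delta <= tolerance {
		fmt.Println("delta is within tolerance; no clock adjustment needed")
	} else {
		fmt.Println("delta exceeds tolerance; the guest clock would be reset")
	}
}
```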
	I0704 00:10:37.761369   62670 start.go:83] releasing machines lock for "old-k8s-version-979033", held for 18.620323739s
	I0704 00:10:37.761421   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.761677   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:37.764522   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.764994   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.765019   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.765178   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.765760   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.765951   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.766036   62670 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:10:37.766085   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.766218   62670 ssh_runner.go:195] Run: cat /version.json
	I0704 00:10:37.766244   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.769092   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769468   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769854   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.769900   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769927   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.769944   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.770066   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.770286   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.770329   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.770443   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.770531   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.770587   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.770720   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.770832   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.873138   62670 ssh_runner.go:195] Run: systemctl --version
	I0704 00:10:37.879804   62670 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:10:38.028009   62670 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:10:38.034962   62670 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:10:38.035030   62670 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:10:38.057475   62670 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:10:38.057511   62670 start.go:494] detecting cgroup driver to use...
	I0704 00:10:38.057579   62670 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:10:38.074199   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:10:38.092880   62670 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:10:38.092932   62670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:10:38.106896   62670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:10:38.120887   62670 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:10:38.250139   62670 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:10:38.405228   62670 docker.go:233] disabling docker service ...
	I0704 00:10:38.405288   62670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:10:38.421706   62670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:10:38.438033   62670 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:10:38.586777   62670 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:10:38.721090   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:10:38.736951   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:10:38.757708   62670 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0704 00:10:38.757782   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.769723   62670 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:10:38.769796   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.783408   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.796103   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
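The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, and re-add conmon_cgroup = "pod" after it. A hedged Go equivalent operating on an in-memory sample config (the sample contents are an assumption):

```go
// crioconf_sketch.go: illustrative only. Performs the same kind of line
// rewrites as the sed commands above, but on an in-memory sample config.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
`
	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// sed -i '/conmon_cgroup = .*/d' followed by
	// sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
```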
	I0704 00:10:38.809130   62670 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:10:38.822325   62670 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:10:38.837968   62670 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:10:38.838038   62670 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:10:38.854343   62670 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:10:38.866475   62670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:39.012506   62670 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:10:39.177203   62670 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:10:39.177289   62670 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:10:39.182557   62670 start.go:562] Will wait 60s for crictl version
	I0704 00:10:39.182643   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:39.187153   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:10:39.228774   62670 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
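start.go waits up to 60s for crictl and then reads the version block shown above. A minimal sketch of parsing that key/colon-value output follows; the parsing approach is illustrative, not minikube's actual code.

```go
// crictlversion_sketch.go: illustrative only. Parses "crictl version"-style
// output like the block logged above into a field map.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	out := `Version:  0.1.0
RuntimeName:  cri-o
RuntimeVersion:  1.29.1
RuntimeApiVersion:  v1
`
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		key, val, ok := strings.Cut(sc.Text(), ":")
		if !ok {
			continue
		}
		fields[strings.TrimSpace(key)] = strings.TrimSpace(val)
	}
	fmt.Printf("runtime %s, version %s\n", fields["RuntimeName"], fields["RuntimeVersion"])
}
```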
	I0704 00:10:39.228851   62670 ssh_runner.go:195] Run: crio --version
	I0704 00:10:39.261929   62670 ssh_runner.go:195] Run: crio --version
	I0704 00:10:39.295133   62670 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0704 00:10:39.296618   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:39.299265   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:39.299620   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:39.299648   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:39.299857   62670 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0704 00:10:39.304490   62670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
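The bash one-liner above drops any stale host.minikube.internal line from /etc/hosts and appends the fresh mapping. An equivalent sketch in Go, writing to a scratch file rather than /etc/hosts (the file name and helper are assumptions for the example):

```go
// hostsentry_sketch.go: illustrative only. Removes any existing entry for the
// given host name and appends "ip<TAB>host", mirroring the grep/echo/cp
// pipeline in the log. Writes to hosts.new, not /etc/hosts.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostEntry(contents, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(contents, "\n") {
		if line == "" || strings.Contains(line, host) {
			continue // skip blank lines and any stale entry for host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.168.72.1\thost.minikube.internal\n"
	out := ensureHostEntry(in, "192.168.72.1", "host.minikube.internal")
	if err := os.WriteFile("hosts.new", []byte(out), 0o644); err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```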
	I0704 00:10:39.318619   62670 kubeadm.go:877] updating cluster {Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:10:39.318749   62670 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0704 00:10:39.318796   62670 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:39.372343   62670 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0704 00:10:39.372406   62670 ssh_runner.go:195] Run: which lz4
	I0704 00:10:39.376979   62670 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0704 00:10:39.382096   62670 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:10:39.382153   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0704 00:10:41.321459   62670 crio.go:462] duration metric: took 1.944522271s to copy over tarball
	I0704 00:10:41.321541   62670 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:10:44.545333   62670 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.223758075s)
	I0704 00:10:44.545366   62670 crio.go:469] duration metric: took 3.223876515s to extract the tarball
	I0704 00:10:44.545404   62670 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:10:44.589369   62670 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:44.625017   62670 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0704 00:10:44.625055   62670 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0704 00:10:44.625143   62670 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:44.625161   62670 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.625191   62670 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:44.625372   62670 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.625393   62670 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.625146   62670 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.625223   62670 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.625700   62670 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0704 00:10:44.627479   62670 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.627544   62670 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.627580   62670 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:44.627586   62670 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0704 00:10:44.627589   62670 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.627580   62670 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.627641   62670 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:44.627665   62670 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.773014   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.821672   62670 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0704 00:10:44.821726   62670 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.821788   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.826460   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.841857   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.870213   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0704 00:10:44.895356   62670 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0704 00:10:44.895414   62670 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.895466   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.897160   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0704 00:10:44.901356   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.964305   62670 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0704 00:10:44.964356   62670 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0704 00:10:44.964404   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.964395   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0704 00:10:44.969048   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0704 00:10:44.982913   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.985558   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.990064   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.993167   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.015558   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0704 00:10:45.092189   62670 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0704 00:10:45.092237   62670 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:45.092309   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.104690   62670 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0704 00:10:45.104733   62670 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:45.104795   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.130208   62670 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0704 00:10:45.130254   62670 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.130271   62670 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0704 00:10:45.130295   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.130337   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:45.130297   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:45.130298   62670 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:45.130442   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.181491   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0704 00:10:45.181583   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.181598   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0704 00:10:45.181666   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:45.234459   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0704 00:10:45.234563   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0704 00:10:45.533133   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:45.680954   62670 cache_images.go:92] duration metric: took 1.055880702s to LoadCachedImages
	W0704 00:10:45.681039   62670 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
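Each "needs transfer" decision above comes from asking the runtime for the image ID and comparing it to the expected hash. A rough sketch of that check; the podman invocation mirrors the log, but the helper itself is an assumption, not minikube's cache_images code.

```go
// imagecheck_sketch.go: illustrative only. Decides whether an image must be
// transferred by comparing the runtime's image ID with the expected hash.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // inspect failing usually means the image is absent entirely
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	img := "registry.k8s.io/kube-apiserver:v1.20.0"
	want := "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
	if needsTransfer(img, want) {
		fmt.Printf("%q needs transfer: not present at expected hash\n", img)
	} else {
		fmt.Printf("%q already present\n", img)
	}
}
```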
	I0704 00:10:45.681053   62670 kubeadm.go:928] updating node { 192.168.72.59 8443 v1.20.0 crio true true} ...
	I0704 00:10:45.681176   62670 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-979033 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
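The kubelet unit above is assembled from per-node values such as --node-ip and --hostname-override. A tiny, purely illustrative sketch of building that flag string from a map (flag names copied from the log; the helper is not minikube's):

```go
// kubeletflags_sketch.go: illustrative only. Joins per-node options into an
// ExecStart flag string like the kubelet unit shown above.
package main

import (
	"fmt"
	"sort"
	"strings"
)

func buildKubeletFlags(opts map[string]string) string {
	keys := make([]string, 0, len(opts))
	for k := range opts {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic ordering for reproducible units
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, fmt.Sprintf("--%s=%s", k, opts[k]))
	}
	return strings.Join(parts, " ")
}

func main() {
	flags := buildKubeletFlags(map[string]string{
		"container-runtime":          "remote",
		"container-runtime-endpoint": "unix:///var/run/crio/crio.sock",
		"hostname-override":          "old-k8s-version-979033",
		"network-plugin":             "cni",
		"node-ip":                    "192.168.72.59",
	})
	fmt.Println("/var/lib/minikube/binaries/v1.20.0/kubelet " + flags)
}
```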
	I0704 00:10:45.681268   62670 ssh_runner.go:195] Run: crio config
	I0704 00:10:45.734964   62670 cni.go:84] Creating CNI manager for ""
	I0704 00:10:45.734992   62670 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:10:45.735009   62670 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:10:45.735034   62670 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.59 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-979033 NodeName:old-k8s-version-979033 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0704 00:10:45.735206   62670 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-979033"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.59"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:10:45.735287   62670 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0704 00:10:45.747614   62670 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:10:45.747700   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:10:45.759063   62670 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0704 00:10:45.778439   62670 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:10:45.798877   62670 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
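The kubeadm.yaml written above is rendered from the cluster parameters logged earlier (advertise address, node name, pod and service CIDRs, Kubernetes version). Below is a short text/template sketch that produces a comparable fragment; the template text and struct are assumptions, not the templates minikube actually ships.

```go
// kubeadmtemplate_sketch.go: illustrative only. Renders a fragment of a
// kubeadm config like the one printed above from a few parameters.
package main

import (
	"os"
	"text/template"
)

type clusterParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		AdvertiseAddress:  "192.168.72.59",
		BindPort:          8443,
		NodeName:          "old-k8s-version-979033",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.20.0",
	}
	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```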
	I0704 00:10:45.820513   62670 ssh_runner.go:195] Run: grep 192.168.72.59	control-plane.minikube.internal$ /etc/hosts
	I0704 00:10:45.825346   62670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.59	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:45.839720   62670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:45.957373   62670 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:10:45.975621   62670 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033 for IP: 192.168.72.59
	I0704 00:10:45.975645   62670 certs.go:194] generating shared ca certs ...
	I0704 00:10:45.975671   62670 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:45.975845   62670 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:10:45.975940   62670 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:10:45.975956   62670 certs.go:256] generating profile certs ...
	I0704 00:10:45.976086   62670 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.key
	I0704 00:10:45.976184   62670 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key.03500654
	I0704 00:10:45.976236   62670 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key
	I0704 00:10:45.976376   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:10:45.976416   62670 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:10:45.976430   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:10:45.976468   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:10:45.976506   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:10:45.976541   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:10:45.976601   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:45.977480   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:10:46.016391   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:10:46.062987   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:10:46.103769   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:10:46.143109   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0704 00:10:46.193832   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:10:46.223781   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:10:46.263822   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0704 00:10:46.298657   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:10:46.325454   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:10:46.351804   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:10:46.379279   62670 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:10:46.397706   62670 ssh_runner.go:195] Run: openssl version
	I0704 00:10:46.404638   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:10:46.416778   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.422402   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.422475   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.428803   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:10:46.441082   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:10:46.453211   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.458313   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.458383   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.464706   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:10:46.476888   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:10:46.489083   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.494780   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.494856   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.501321   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:10:46.513595   62670 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:10:46.518722   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:10:46.525758   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:10:46.532590   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:10:46.540129   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:10:46.547113   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:10:46.553840   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
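The `openssl x509 -checkend 86400` calls above ask whether each control-plane certificate expires within the next 24 hours. The same check expressed in Go via crypto/x509 is sketched below; the file path is hypothetical.

```go
// certexpiry_sketch.go: illustrative only. Go equivalent of
// "openssl x509 -checkend 86400": does the certificate expire within 24h?
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("apiserver-kubelet-client.crt") // hypothetical path
	if err != nil {
		panic(err)
	}
	soon, err := expiresWithin(data, 24*time.Hour)
	if err != nil {
		panic(err)
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
```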
	I0704 00:10:46.560502   62670 kubeadm.go:391] StartCluster: {Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:10:46.560590   62670 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:10:46.560656   62670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:46.605334   62670 cri.go:89] found id: ""
	I0704 00:10:46.605411   62670 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:10:46.619333   62670 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:10:46.619356   62670 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:10:46.619362   62670 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:10:46.619407   62670 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:10:46.631203   62670 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:10:46.632519   62670 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-979033" does not appear in /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:10:46.633417   62670 kubeconfig.go:62] /home/jenkins/minikube-integration/18998-9396/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-979033" cluster setting kubeconfig missing "old-k8s-version-979033" context setting]
	I0704 00:10:46.634783   62670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:46.637143   62670 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:10:46.649250   62670 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.59
	I0704 00:10:46.649285   62670 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:10:46.649297   62670 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:10:46.649351   62670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:46.691240   62670 cri.go:89] found id: ""
	I0704 00:10:46.691317   62670 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:10:46.710687   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:10:46.721650   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:10:46.721675   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:10:46.721728   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:10:46.731444   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:10:46.731517   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:10:46.741556   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:10:46.751544   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:10:46.751600   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:10:46.764187   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:10:46.775160   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:10:46.775224   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:10:46.785686   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:10:46.795475   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:10:46.795545   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:10:46.806960   62670 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:10:46.818355   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:46.984379   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:47.639953   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:47.883263   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:48.001200   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
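
For readers following the trace: after the stale kubeconfig cleanup above, minikube re-runs the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) one at a time instead of a full kubeadm init. A minimal sketch of that sequence, assuming local execution and the binary/config paths shown in the log (the real code runs each command over SSH via ssh_runner inside the guest VM, so this is illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phases and paths mirror the log lines above; these paths exist inside
	// the minikube guest, not on a typical workstation.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := "sudo env PATH=\"/var/lib/minikube/binaries/v1.20.0:$PATH\" kubeadm init phase " +
			phase + " --config /var/tmp/minikube/kubeadm.yaml"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
	}
	fmt.Println("all kubeadm init phases completed")
}
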
	I0704 00:10:48.116034   62670 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:10:48.116121   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:48.617078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:49.116898   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:49.617127   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.116442   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.617078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:51.117096   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:51.617176   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:52.116333   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:52.616675   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:53.116408   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:53.616873   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:54.116661   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:54.616248   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:55.116316   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:55.616460   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:56.116311   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:56.616502   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:57.116856   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:57.616948   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:58.117055   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:58.617090   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.116577   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.617087   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:00.117110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:00.617014   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:01.117093   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:01.616271   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:02.116809   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:02.617098   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:03.117166   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:03.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:04.117137   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:04.616945   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:05.117085   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:05.616894   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.116767   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.616746   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:07.116615   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:07.616302   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:08.116699   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:08.617011   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.116544   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.617105   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:10.117154   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:10.616678   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:11.117137   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:11.617077   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.116897   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.617090   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:13.116877   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:13.616762   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:14.116987   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:14.616559   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.117027   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.617171   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:16.117120   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:16.616978   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:17.116571   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:17.616537   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:18.117113   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:18.617104   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:19.116325   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:19.616537   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.116518   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.616709   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:21.117177   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:21.617150   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:22.116980   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:22.616530   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:23.116838   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:23.616811   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:24.117212   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:24.616915   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.117183   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.616495   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:26.117078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:26.617000   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:27.117057   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:27.616823   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:28.116508   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:28.616737   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:29.117100   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:29.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.117145   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:31.116945   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:31.616330   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:32.117101   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:32.616616   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:33.116964   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:33.617132   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.117094   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.616914   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:35.117109   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:35.617095   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:36.117232   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:36.617221   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:37.117109   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:37.617116   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:38.116462   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:38.616739   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:39.117077   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:39.616185   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:40.117134   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:40.616879   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.116543   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.616267   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:42.117061   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:42.617080   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:43.117099   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:43.616868   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:44.117083   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:44.617057   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:45.116941   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:45.617066   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:46.117210   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:46.617116   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:47.116404   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:47.616609   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
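
The block above is minikube waiting for the kube-apiserver process to appear: it repeats sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms, and when the wait times out (here after about a minute with no hit) it falls back to the diagnostics collection that follows. A minimal sketch of that kind of wait loop, assuming a 500ms poll interval and a fixed deadline; the function name and timeout are illustrative, not minikube's actual API:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a kube-apiserver process until the deadline,
// mirroring the behaviour visible in the log: one pgrep run per ~500ms.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -x: exact match, -n: newest process, -f: match the full command line.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(60 * time.Second); err != nil {
		fmt.Println(err) // the run above hits this path and starts gathering logs
	}
}
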
	I0704 00:11:48.116518   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:48.116611   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:48.159432   62670 cri.go:89] found id: ""
	I0704 00:11:48.159464   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.159477   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:48.159486   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:48.159553   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:48.199101   62670 cri.go:89] found id: ""
	I0704 00:11:48.199136   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.199144   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:48.199152   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:48.199208   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:48.238058   62670 cri.go:89] found id: ""
	I0704 00:11:48.238079   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.238087   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:48.238092   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:48.238145   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:48.279472   62670 cri.go:89] found id: ""
	I0704 00:11:48.279510   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.279521   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:48.279529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:48.279598   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:48.316814   62670 cri.go:89] found id: ""
	I0704 00:11:48.316833   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.316843   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:48.316851   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:48.316907   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:48.358196   62670 cri.go:89] found id: ""
	I0704 00:11:48.358230   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.358247   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:48.358252   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:48.358310   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:48.404992   62670 cri.go:89] found id: ""
	I0704 00:11:48.405012   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.405019   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:48.405024   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:48.405092   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:48.444358   62670 cri.go:89] found id: ""
	I0704 00:11:48.444385   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.444393   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:48.444401   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:48.444414   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:48.502426   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:48.502462   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:48.517885   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:48.517915   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:48.654987   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:48.655007   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:48.655022   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:48.719857   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:48.719908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
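
At this point none of the control-plane containers exist in CRI-O (every crictl ps probe above returned an empty id list), so minikube gathers diagnostics instead: the kubelet and CRI-O journals, dmesg, kubectl describe nodes (which fails because nothing is listening on localhost:8443), and a container status listing. A minimal sketch of running that diagnostic set and collecting each command's output; the commands are copied from the log, while the grouping into a helper is an assumption, not minikube's code:

package main

import (
	"fmt"
	"os/exec"
)

// gatherDiagnostics runs the same commands the log shows under
// "Gathering logs for ..." and returns each command's combined output.
// Failures (e.g. the refused connection from kubectl) are reported inline,
// mirroring the failed "describe nodes" entry above.
func gatherDiagnostics() map[string]string {
	cmds := map[string]string{
		"kubelet":          `sudo journalctl -u kubelet -n 400`,
		"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"describe nodes":   `sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
		"CRI-O":            `sudo journalctl -u crio -n 400`,
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	results := make(map[string]string)
	for name, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			results[name] = fmt.Sprintf("failed: %v\n%s", err, out)
			continue
		}
		results[name] = string(out)
	}
	return results
}

func main() {
	for name, out := range gatherDiagnostics() {
		fmt.Printf("=== %s ===\n%s\n", name, out)
	}
}

The same poll-then-gather cycle repeats below until the overall wait for the apiserver gives up.
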
	I0704 00:11:51.265451   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:51.279847   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:51.279951   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:51.317907   62670 cri.go:89] found id: ""
	I0704 00:11:51.317942   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.317954   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:51.317963   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:51.318036   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:51.358329   62670 cri.go:89] found id: ""
	I0704 00:11:51.358361   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.358370   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:51.358375   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:51.358440   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:51.396389   62670 cri.go:89] found id: ""
	I0704 00:11:51.396418   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.396426   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:51.396433   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:51.396479   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:51.433921   62670 cri.go:89] found id: ""
	I0704 00:11:51.433954   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.433964   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:51.433972   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:51.434030   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:51.472956   62670 cri.go:89] found id: ""
	I0704 00:11:51.472986   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.472997   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:51.473003   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:51.473064   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:51.511241   62670 cri.go:89] found id: ""
	I0704 00:11:51.511269   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.511277   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:51.511283   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:51.511330   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:51.550622   62670 cri.go:89] found id: ""
	I0704 00:11:51.550647   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.550658   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:51.550665   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:51.550717   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:51.595101   62670 cri.go:89] found id: ""
	I0704 00:11:51.595129   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.595141   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:51.595152   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:51.595167   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:51.662852   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:51.662893   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:51.712755   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:51.712800   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:51.774138   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:51.774181   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:51.789895   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:51.789925   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:51.866376   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:54.367005   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:54.382875   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:54.382938   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:54.419672   62670 cri.go:89] found id: ""
	I0704 00:11:54.419702   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.419713   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:54.419720   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:54.419790   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:54.464134   62670 cri.go:89] found id: ""
	I0704 00:11:54.464161   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.464170   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:54.464175   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:54.464233   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:54.502825   62670 cri.go:89] found id: ""
	I0704 00:11:54.502848   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.502855   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:54.502861   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:54.502913   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:54.542172   62670 cri.go:89] found id: ""
	I0704 00:11:54.542199   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.542207   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:54.542212   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:54.542275   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:54.580488   62670 cri.go:89] found id: ""
	I0704 00:11:54.580517   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.580527   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:54.580534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:54.580600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:54.616925   62670 cri.go:89] found id: ""
	I0704 00:11:54.616950   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.616959   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:54.616965   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:54.617011   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:54.654388   62670 cri.go:89] found id: ""
	I0704 00:11:54.654416   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.654426   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:54.654434   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:54.654492   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:54.697867   62670 cri.go:89] found id: ""
	I0704 00:11:54.697895   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.697905   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:54.697916   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:54.697948   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:54.753899   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:54.753933   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:54.768684   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:54.768708   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:54.843026   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:54.843052   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:54.843069   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:54.920335   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:54.920388   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:57.463384   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:57.479721   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:57.479809   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:57.521845   62670 cri.go:89] found id: ""
	I0704 00:11:57.521931   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.521944   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:57.521952   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:57.522017   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:57.559595   62670 cri.go:89] found id: ""
	I0704 00:11:57.559626   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.559635   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:57.559642   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:57.559704   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:57.600881   62670 cri.go:89] found id: ""
	I0704 00:11:57.600906   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.600917   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:57.600923   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:57.600984   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:57.646031   62670 cri.go:89] found id: ""
	I0704 00:11:57.646059   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.646068   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:57.646073   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:57.646141   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:57.692031   62670 cri.go:89] found id: ""
	I0704 00:11:57.692057   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.692065   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:57.692071   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:57.692118   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:57.730220   62670 cri.go:89] found id: ""
	I0704 00:11:57.730252   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.730263   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:57.730271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:57.730335   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:57.771323   62670 cri.go:89] found id: ""
	I0704 00:11:57.771350   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.771361   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:57.771369   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:57.771441   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:57.808590   62670 cri.go:89] found id: ""
	I0704 00:11:57.808617   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.808625   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:57.808633   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:57.808644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:57.825034   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:57.825063   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:57.906713   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:57.906734   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:57.906746   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:57.988497   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:57.988533   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:58.056774   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:58.056805   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:00.609663   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:00.623785   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:00.623851   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:00.669164   62670 cri.go:89] found id: ""
	I0704 00:12:00.669187   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.669194   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:00.669200   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:00.669253   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:00.710018   62670 cri.go:89] found id: ""
	I0704 00:12:00.710044   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.710052   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:00.710057   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:00.710107   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:00.747778   62670 cri.go:89] found id: ""
	I0704 00:12:00.747803   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.747810   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:00.747815   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:00.747900   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:00.787312   62670 cri.go:89] found id: ""
	I0704 00:12:00.787339   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.787347   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:00.787352   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:00.787399   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:00.828018   62670 cri.go:89] found id: ""
	I0704 00:12:00.828049   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.828061   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:00.828070   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:00.828135   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:00.864695   62670 cri.go:89] found id: ""
	I0704 00:12:00.864723   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.864734   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:00.864742   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:00.864800   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:00.907804   62670 cri.go:89] found id: ""
	I0704 00:12:00.907833   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.907843   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:00.907850   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:00.907928   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:00.951505   62670 cri.go:89] found id: ""
	I0704 00:12:00.951536   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.951547   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:00.951557   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:00.951573   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:00.997067   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:00.997115   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:01.049321   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:01.049356   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:01.066878   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:01.066908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:01.152888   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:01.152919   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:01.152935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:03.737731   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:03.753151   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:03.753244   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:03.816045   62670 cri.go:89] found id: ""
	I0704 00:12:03.816076   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.816087   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:03.816095   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:03.816154   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:03.857041   62670 cri.go:89] found id: ""
	I0704 00:12:03.857070   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.857081   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:03.857088   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:03.857152   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:03.896734   62670 cri.go:89] found id: ""
	I0704 00:12:03.896763   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.896774   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:03.896781   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:03.896836   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:03.936142   62670 cri.go:89] found id: ""
	I0704 00:12:03.936168   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.936178   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:03.936183   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:03.936258   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:03.974599   62670 cri.go:89] found id: ""
	I0704 00:12:03.974623   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.974631   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:03.974636   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:03.974686   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:04.012822   62670 cri.go:89] found id: ""
	I0704 00:12:04.012851   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.012859   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:04.012865   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:04.012999   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:04.051360   62670 cri.go:89] found id: ""
	I0704 00:12:04.051393   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.051411   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:04.051420   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:04.051485   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:04.090587   62670 cri.go:89] found id: ""
	I0704 00:12:04.090616   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.090627   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:04.090638   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:04.090654   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:04.167427   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:04.167450   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:04.167465   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:04.250550   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:04.250594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:04.299970   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:04.300003   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:04.352960   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:04.352994   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:06.871729   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:06.884948   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:06.885027   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:06.920910   62670 cri.go:89] found id: ""
	I0704 00:12:06.920939   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.920950   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:06.920957   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:06.921024   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:06.958701   62670 cri.go:89] found id: ""
	I0704 00:12:06.958731   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.958742   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:06.958750   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:06.958808   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:06.997468   62670 cri.go:89] found id: ""
	I0704 00:12:06.997499   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.997509   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:06.997515   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:06.997564   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:07.033767   62670 cri.go:89] found id: ""
	I0704 00:12:07.033795   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.033806   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:07.033814   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:07.033896   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:07.074189   62670 cri.go:89] found id: ""
	I0704 00:12:07.074218   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.074229   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:07.074241   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:07.074307   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:07.110517   62670 cri.go:89] found id: ""
	I0704 00:12:07.110544   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.110554   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:07.110562   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:07.110615   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:07.146600   62670 cri.go:89] found id: ""
	I0704 00:12:07.146627   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.146635   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:07.146641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:07.146690   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:07.180799   62670 cri.go:89] found id: ""
	I0704 00:12:07.180826   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.180834   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:07.180843   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:07.180859   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:07.222473   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:07.222503   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:07.281453   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:07.281498   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:07.296335   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:07.296364   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:07.375751   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:07.375782   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:07.375805   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:09.954585   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:09.970379   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:09.970470   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:10.011987   62670 cri.go:89] found id: ""
	I0704 00:12:10.012017   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.012028   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:10.012035   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:10.012102   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:10.054940   62670 cri.go:89] found id: ""
	I0704 00:12:10.054971   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.054982   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:10.054989   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:10.055051   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:10.096048   62670 cri.go:89] found id: ""
	I0704 00:12:10.096079   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.096087   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:10.096093   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:10.096143   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:10.141795   62670 cri.go:89] found id: ""
	I0704 00:12:10.141818   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.141826   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:10.141831   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:10.141892   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:10.188257   62670 cri.go:89] found id: ""
	I0704 00:12:10.188283   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.188295   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:10.188302   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:10.188369   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:10.249134   62670 cri.go:89] found id: ""
	I0704 00:12:10.249157   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.249167   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:10.249174   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:10.249233   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:10.309586   62670 cri.go:89] found id: ""
	I0704 00:12:10.309611   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.309622   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:10.309632   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:10.309689   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:10.351027   62670 cri.go:89] found id: ""
	I0704 00:12:10.351054   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.351065   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:10.351074   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:10.351086   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:10.404371   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:10.404411   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:10.419379   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:10.419410   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:10.502977   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:10.503001   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:10.503017   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:10.582149   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:10.582185   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:13.122828   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:13.138522   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:13.138591   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:13.181603   62670 cri.go:89] found id: ""
	I0704 00:12:13.181634   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.181645   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:13.181653   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:13.181711   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:13.219066   62670 cri.go:89] found id: ""
	I0704 00:12:13.219090   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.219098   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:13.219103   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:13.219159   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:13.259570   62670 cri.go:89] found id: ""
	I0704 00:12:13.259591   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.259599   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:13.259604   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:13.259658   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:13.301577   62670 cri.go:89] found id: ""
	I0704 00:12:13.301605   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.301617   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:13.301625   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:13.301689   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:13.339546   62670 cri.go:89] found id: ""
	I0704 00:12:13.339570   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.339584   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:13.339592   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:13.339649   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:13.378631   62670 cri.go:89] found id: ""
	I0704 00:12:13.378654   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.378665   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:13.378672   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:13.378733   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:13.416818   62670 cri.go:89] found id: ""
	I0704 00:12:13.416843   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.416851   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:13.416856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:13.416908   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:13.452538   62670 cri.go:89] found id: ""
	I0704 00:12:13.452562   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.452570   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:13.452579   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:13.452590   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:13.505556   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:13.505594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:13.522506   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:13.522542   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:13.604513   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:13.604536   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:13.604553   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:13.681501   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:13.681536   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:16.222955   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:16.241979   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:16.242086   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:16.299662   62670 cri.go:89] found id: ""
	I0704 00:12:16.299690   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.299702   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:16.299710   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:16.299772   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:16.342898   62670 cri.go:89] found id: ""
	I0704 00:12:16.342934   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.342944   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:16.342952   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:16.343014   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:16.382387   62670 cri.go:89] found id: ""
	I0704 00:12:16.382408   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.382416   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:16.382422   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:16.382482   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:16.421830   62670 cri.go:89] found id: ""
	I0704 00:12:16.421852   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.421861   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:16.421874   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:16.421934   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:16.459248   62670 cri.go:89] found id: ""
	I0704 00:12:16.459272   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.459282   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:16.459289   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:16.459347   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:16.494675   62670 cri.go:89] found id: ""
	I0704 00:12:16.494704   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.494714   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:16.494725   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:16.494789   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:16.534319   62670 cri.go:89] found id: ""
	I0704 00:12:16.534344   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.534352   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:16.534358   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:16.534407   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:16.571422   62670 cri.go:89] found id: ""
	I0704 00:12:16.571455   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.571467   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:16.571478   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:16.571493   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:16.651019   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:16.651040   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:16.651058   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:16.726538   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:16.726574   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:16.771114   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:16.771145   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:16.824495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:16.824532   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:19.340941   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:19.355501   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:19.355580   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:19.396845   62670 cri.go:89] found id: ""
	I0704 00:12:19.396872   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.396882   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:19.396902   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:19.396962   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:19.440805   62670 cri.go:89] found id: ""
	I0704 00:12:19.440835   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.440845   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:19.440852   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:19.440913   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:19.477781   62670 cri.go:89] found id: ""
	I0704 00:12:19.477809   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.477820   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:19.477827   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:19.477890   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:19.513042   62670 cri.go:89] found id: ""
	I0704 00:12:19.513067   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.513077   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:19.513084   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:19.513142   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:19.547775   62670 cri.go:89] found id: ""
	I0704 00:12:19.547804   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.547812   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:19.547818   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:19.547867   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:19.586103   62670 cri.go:89] found id: ""
	I0704 00:12:19.586131   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.586142   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:19.586149   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:19.586219   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:19.625529   62670 cri.go:89] found id: ""
	I0704 00:12:19.625556   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.625567   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:19.625574   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:19.625644   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:19.663835   62670 cri.go:89] found id: ""
	I0704 00:12:19.663860   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.663870   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:19.663903   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:19.663919   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:19.719204   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:19.719245   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:19.733871   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:19.733909   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:19.817212   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:19.817240   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:19.817260   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:19.894555   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:19.894595   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:22.438204   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:22.451438   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:22.451507   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:22.489196   62670 cri.go:89] found id: ""
	I0704 00:12:22.489219   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.489226   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:22.489232   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:22.489278   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:22.523870   62670 cri.go:89] found id: ""
	I0704 00:12:22.523917   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.523929   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:22.523936   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:22.523992   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:22.564799   62670 cri.go:89] found id: ""
	I0704 00:12:22.564827   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.564839   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:22.564846   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:22.564905   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:22.603993   62670 cri.go:89] found id: ""
	I0704 00:12:22.604019   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.604027   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:22.604033   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:22.604086   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:22.639749   62670 cri.go:89] found id: ""
	I0704 00:12:22.639780   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.639791   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:22.639799   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:22.639855   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:22.678173   62670 cri.go:89] found id: ""
	I0704 00:12:22.678206   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.678214   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:22.678227   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:22.678279   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:22.718934   62670 cri.go:89] found id: ""
	I0704 00:12:22.718962   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.718971   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:22.718977   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:22.719029   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:22.756334   62670 cri.go:89] found id: ""
	I0704 00:12:22.756362   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.756373   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:22.756383   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:22.756397   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:22.835079   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:22.835113   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:22.877138   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:22.877170   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:22.930427   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:22.930466   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:22.945810   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:22.945838   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:23.021251   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:25.522380   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:25.536705   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:25.536776   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:25.575126   62670 cri.go:89] found id: ""
	I0704 00:12:25.575154   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.575162   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:25.575168   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:25.575223   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:25.612447   62670 cri.go:89] found id: ""
	I0704 00:12:25.612480   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.612488   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:25.612494   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:25.612542   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:25.651652   62670 cri.go:89] found id: ""
	I0704 00:12:25.651677   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.651688   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:25.651696   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:25.651751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:25.690007   62670 cri.go:89] found id: ""
	I0704 00:12:25.690034   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.690042   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:25.690049   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:25.690105   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:25.725041   62670 cri.go:89] found id: ""
	I0704 00:12:25.725093   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.725106   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:25.725114   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:25.725196   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:25.766324   62670 cri.go:89] found id: ""
	I0704 00:12:25.766350   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.766361   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:25.766369   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:25.766430   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:25.803515   62670 cri.go:89] found id: ""
	I0704 00:12:25.803540   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.803548   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:25.803553   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:25.803613   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:25.845016   62670 cri.go:89] found id: ""
	I0704 00:12:25.845046   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.845057   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:25.845067   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:25.845089   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:25.898536   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:25.898570   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:25.913300   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:25.913330   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:25.987372   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:25.987390   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:25.987402   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:26.073931   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:26.073982   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:28.621179   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:28.634247   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:28.634321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:28.672433   62670 cri.go:89] found id: ""
	I0704 00:12:28.672458   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.672467   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:28.672473   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:28.672522   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:28.712000   62670 cri.go:89] found id: ""
	I0704 00:12:28.712036   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.712049   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:28.712059   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:28.712126   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:28.751170   62670 cri.go:89] found id: ""
	I0704 00:12:28.751202   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.751213   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:28.751222   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:28.751283   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:28.788015   62670 cri.go:89] found id: ""
	I0704 00:12:28.788050   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.788062   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:28.788071   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:28.788141   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:28.826467   62670 cri.go:89] found id: ""
	I0704 00:12:28.826501   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.826511   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:28.826518   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:28.826578   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:28.864375   62670 cri.go:89] found id: ""
	I0704 00:12:28.864397   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.864403   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:28.864408   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:28.864461   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:28.900137   62670 cri.go:89] found id: ""
	I0704 00:12:28.900160   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.900167   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:28.900173   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:28.900220   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:28.934865   62670 cri.go:89] found id: ""
	I0704 00:12:28.934886   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.934894   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:28.934902   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:28.934914   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:28.984100   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:28.984136   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:29.000311   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:29.000340   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:29.083272   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:29.083304   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:29.083318   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:29.164613   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:29.164644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:31.711402   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:31.725076   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:31.725134   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:31.763088   62670 cri.go:89] found id: ""
	I0704 00:12:31.763111   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.763120   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:31.763127   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:31.763197   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:31.800920   62670 cri.go:89] found id: ""
	I0704 00:12:31.800942   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.800952   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:31.800958   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:31.801001   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:31.840841   62670 cri.go:89] found id: ""
	I0704 00:12:31.840872   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.840889   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:31.840897   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:31.840956   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:31.883757   62670 cri.go:89] found id: ""
	I0704 00:12:31.883784   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.883792   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:31.883797   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:31.883855   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:31.922234   62670 cri.go:89] found id: ""
	I0704 00:12:31.922261   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.922270   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:31.922275   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:31.922323   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:31.959691   62670 cri.go:89] found id: ""
	I0704 00:12:31.959717   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.959725   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:31.959731   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:31.959789   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:31.997069   62670 cri.go:89] found id: ""
	I0704 00:12:31.997098   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.997106   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:31.997112   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:31.997182   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:32.032437   62670 cri.go:89] found id: ""
	I0704 00:12:32.032475   62670 logs.go:276] 0 containers: []
	W0704 00:12:32.032484   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:32.032495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:32.032510   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:32.046791   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:32.046823   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:32.118482   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:32.118506   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:32.118519   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:32.206600   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:32.206638   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:32.249940   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:32.249967   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:34.808364   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:34.822973   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:34.823039   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:34.859617   62670 cri.go:89] found id: ""
	I0704 00:12:34.859640   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.859649   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:34.859654   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:34.859703   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:34.899724   62670 cri.go:89] found id: ""
	I0704 00:12:34.899752   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.899762   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:34.899768   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:34.899830   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:34.939063   62670 cri.go:89] found id: ""
	I0704 00:12:34.939090   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.939098   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:34.939104   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:34.939185   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:34.979062   62670 cri.go:89] found id: ""
	I0704 00:12:34.979090   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.979101   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:34.979108   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:34.979168   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:35.019580   62670 cri.go:89] found id: ""
	I0704 00:12:35.019613   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.019621   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:35.019626   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:35.019674   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:35.064364   62670 cri.go:89] found id: ""
	I0704 00:12:35.064391   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.064399   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:35.064404   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:35.064463   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:35.105004   62670 cri.go:89] found id: ""
	I0704 00:12:35.105032   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.105040   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:35.105046   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:35.105101   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:35.143656   62670 cri.go:89] found id: ""
	I0704 00:12:35.143681   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.143689   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:35.143698   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:35.143709   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:35.203016   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:35.203050   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:35.218808   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:35.218840   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:35.298247   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:35.298269   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:35.298284   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:35.376425   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:35.376463   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:37.918592   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:37.932291   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:37.932370   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:37.967657   62670 cri.go:89] found id: ""
	I0704 00:12:37.967680   62670 logs.go:276] 0 containers: []
	W0704 00:12:37.967688   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:37.967694   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:37.967740   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:38.005522   62670 cri.go:89] found id: ""
	I0704 00:12:38.005557   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.005569   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:38.005576   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:38.005634   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:38.043475   62670 cri.go:89] found id: ""
	I0704 00:12:38.043505   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.043516   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:38.043524   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:38.043589   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:38.080520   62670 cri.go:89] found id: ""
	I0704 00:12:38.080548   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.080557   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:38.080563   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:38.080612   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:38.116292   62670 cri.go:89] found id: ""
	I0704 00:12:38.116322   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.116332   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:38.116338   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:38.116404   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:38.158430   62670 cri.go:89] found id: ""
	I0704 00:12:38.158468   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.158480   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:38.158489   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:38.158567   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:38.198119   62670 cri.go:89] found id: ""
	I0704 00:12:38.198150   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.198162   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:38.198172   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:38.198253   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:38.235757   62670 cri.go:89] found id: ""
	I0704 00:12:38.235784   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.235792   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:38.235800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:38.235811   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:38.329002   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:38.329026   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:38.329041   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:38.414451   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:38.414492   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:38.461058   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:38.461089   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:38.518574   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:38.518609   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:41.051653   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:41.066287   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:41.066364   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:41.106709   62670 cri.go:89] found id: ""
	I0704 00:12:41.106733   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.106747   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:41.106753   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:41.106815   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:41.144371   62670 cri.go:89] found id: ""
	I0704 00:12:41.144399   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.144410   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:41.144417   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:41.144491   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:41.183690   62670 cri.go:89] found id: ""
	I0704 00:12:41.183717   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.183727   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:41.183734   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:41.183818   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:41.219744   62670 cri.go:89] found id: ""
	I0704 00:12:41.219767   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.219777   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:41.219790   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:41.219850   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:41.259070   62670 cri.go:89] found id: ""
	I0704 00:12:41.259091   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.259098   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:41.259103   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:41.259162   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:41.297956   62670 cri.go:89] found id: ""
	I0704 00:12:41.297987   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.297995   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:41.298001   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:41.298061   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:41.335521   62670 cri.go:89] found id: ""
	I0704 00:12:41.335599   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.335616   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:41.335624   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:41.335688   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:41.374777   62670 cri.go:89] found id: ""
	I0704 00:12:41.374817   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.374838   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:41.374848   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:41.374868   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:41.426282   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:41.426324   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:41.441309   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:41.441342   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:41.518350   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:41.518373   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:41.518395   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:41.596426   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:41.596467   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:44.139291   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:44.152300   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:44.152370   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:44.194350   62670 cri.go:89] found id: ""
	I0704 00:12:44.194380   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.194394   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:44.194401   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:44.194470   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:44.229630   62670 cri.go:89] found id: ""
	I0704 00:12:44.229657   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.229666   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:44.229671   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:44.229724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:44.271235   62670 cri.go:89] found id: ""
	I0704 00:12:44.271260   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.271269   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:44.271276   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:44.271342   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:44.336464   62670 cri.go:89] found id: ""
	I0704 00:12:44.336499   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.336509   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:44.336523   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:44.336579   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:44.379482   62670 cri.go:89] found id: ""
	I0704 00:12:44.379513   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.379524   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:44.379530   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:44.379594   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:44.417234   62670 cri.go:89] found id: ""
	I0704 00:12:44.417267   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.417278   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:44.417285   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:44.417345   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:44.454222   62670 cri.go:89] found id: ""
	I0704 00:12:44.454249   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.454259   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:44.454266   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:44.454328   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:44.491999   62670 cri.go:89] found id: ""
	I0704 00:12:44.492028   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.492039   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:44.492050   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:44.492065   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:44.543261   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:44.543298   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:44.558348   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:44.558378   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:44.640786   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:44.640805   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:44.640820   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:44.727870   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:44.727945   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:47.274461   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:47.288930   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:47.288995   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:47.329153   62670 cri.go:89] found id: ""
	I0704 00:12:47.329178   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.329189   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:47.329195   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:47.329262   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:47.366786   62670 cri.go:89] found id: ""
	I0704 00:12:47.366814   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.366825   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:47.366832   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:47.366900   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:47.404048   62670 cri.go:89] found id: ""
	I0704 00:12:47.404089   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.404098   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:47.404106   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:47.404170   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:47.440298   62670 cri.go:89] found id: ""
	I0704 00:12:47.440329   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.440341   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:47.440348   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:47.440408   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:47.478297   62670 cri.go:89] found id: ""
	I0704 00:12:47.478329   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.478340   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:47.478347   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:47.478406   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:47.514114   62670 cri.go:89] found id: ""
	I0704 00:12:47.514143   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.514152   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:47.514158   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:47.514221   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:47.558404   62670 cri.go:89] found id: ""
	I0704 00:12:47.558437   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.558449   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:47.558456   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:47.558519   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:47.602782   62670 cri.go:89] found id: ""
	I0704 00:12:47.602824   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.602834   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:47.602845   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:47.602860   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:47.655514   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:47.655556   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:47.672807   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:47.672844   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:47.763562   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:47.763583   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:47.763596   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:47.852498   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:47.852542   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:50.400046   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:50.413559   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:50.413621   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:50.450898   62670 cri.go:89] found id: ""
	I0704 00:12:50.450927   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.450938   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:50.450948   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:50.451002   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:50.487786   62670 cri.go:89] found id: ""
	I0704 00:12:50.487822   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.487832   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:50.487838   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:50.487923   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:50.525298   62670 cri.go:89] found id: ""
	I0704 00:12:50.525324   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.525334   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:50.525343   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:50.525409   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:50.563742   62670 cri.go:89] found id: ""
	I0704 00:12:50.563767   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.563775   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:50.563782   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:50.563839   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:50.600977   62670 cri.go:89] found id: ""
	I0704 00:12:50.601011   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.601023   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:50.601031   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:50.601105   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:50.637489   62670 cri.go:89] found id: ""
	I0704 00:12:50.637517   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.637527   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:50.637534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:50.637594   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:50.684342   62670 cri.go:89] found id: ""
	I0704 00:12:50.684371   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.684381   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:50.684389   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:50.684572   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:50.743111   62670 cri.go:89] found id: ""
	I0704 00:12:50.743143   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.743153   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:50.743163   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:50.743177   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:50.806436   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:50.806482   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:50.823559   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:50.823594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:50.892600   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:50.892629   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:50.892642   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:50.969817   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:50.969851   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:53.512548   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:53.525835   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:53.525903   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:53.563303   62670 cri.go:89] found id: ""
	I0704 00:12:53.563335   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.563349   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:53.563356   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:53.563410   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:53.602687   62670 cri.go:89] found id: ""
	I0704 00:12:53.602720   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.602731   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:53.602739   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:53.602797   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:53.638109   62670 cri.go:89] found id: ""
	I0704 00:12:53.638141   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.638150   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:53.638158   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:53.638220   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:53.678073   62670 cri.go:89] found id: ""
	I0704 00:12:53.678096   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.678106   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:53.678114   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:53.678172   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:53.713995   62670 cri.go:89] found id: ""
	I0704 00:12:53.714028   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.714041   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:53.714049   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:53.714108   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:53.751761   62670 cri.go:89] found id: ""
	I0704 00:12:53.751783   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.751790   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:53.751796   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:53.751856   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:53.792662   62670 cri.go:89] found id: ""
	I0704 00:12:53.792692   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.792703   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:53.792710   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:53.792769   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:53.833970   62670 cri.go:89] found id: ""
	I0704 00:12:53.833999   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.834010   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:53.834021   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:53.834040   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:53.918330   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:53.918363   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:53.918380   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:53.999491   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:53.999524   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:54.042415   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:54.042451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:54.096427   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:54.096466   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:56.611252   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:56.624364   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:56.624427   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:56.662953   62670 cri.go:89] found id: ""
	I0704 00:12:56.662971   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.662978   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:56.662983   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:56.663035   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:56.700093   62670 cri.go:89] found id: ""
	I0704 00:12:56.700125   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.700136   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:56.700144   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:56.700209   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:56.737358   62670 cri.go:89] found id: ""
	I0704 00:12:56.737395   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.737405   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:56.737412   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:56.737479   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:56.772625   62670 cri.go:89] found id: ""
	I0704 00:12:56.772652   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.772663   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:56.772671   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:56.772731   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:56.810693   62670 cri.go:89] found id: ""
	I0704 00:12:56.810722   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.810731   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:56.810736   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:56.810787   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:56.851646   62670 cri.go:89] found id: ""
	I0704 00:12:56.851671   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.851678   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:56.851684   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:56.851733   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:56.894196   62670 cri.go:89] found id: ""
	I0704 00:12:56.894230   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.894240   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:56.894246   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:56.894302   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:56.935029   62670 cri.go:89] found id: ""
	I0704 00:12:56.935054   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.935062   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:56.935072   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:56.935088   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:57.017630   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:57.017658   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:57.017675   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:57.103861   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:57.103916   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:57.147466   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:57.147497   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:57.199798   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:57.199836   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:59.716709   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:59.731778   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:59.731849   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:59.770210   62670 cri.go:89] found id: ""
	I0704 00:12:59.770241   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.770249   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:59.770259   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:59.770319   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:59.816446   62670 cri.go:89] found id: ""
	I0704 00:12:59.816473   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.816483   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:59.816490   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:59.816570   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:59.854879   62670 cri.go:89] found id: ""
	I0704 00:12:59.854910   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.854921   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:59.854928   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:59.854978   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:59.891370   62670 cri.go:89] found id: ""
	I0704 00:12:59.891394   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.891401   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:59.891407   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:59.891467   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:59.926067   62670 cri.go:89] found id: ""
	I0704 00:12:59.926089   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.926096   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:59.926102   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:59.926158   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:59.961646   62670 cri.go:89] found id: ""
	I0704 00:12:59.961674   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.961685   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:59.961692   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:59.961770   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:59.998290   62670 cri.go:89] found id: ""
	I0704 00:12:59.998322   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.998333   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:59.998342   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:59.998408   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:00.035410   62670 cri.go:89] found id: ""
	I0704 00:13:00.035438   62670 logs.go:276] 0 containers: []
	W0704 00:13:00.035446   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:00.035455   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:00.035471   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:00.090614   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:00.090655   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:00.105228   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:00.105265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:00.188082   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:00.188121   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:00.188139   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:00.275656   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:00.275702   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:02.823447   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:02.837684   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:02.837745   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:02.875275   62670 cri.go:89] found id: ""
	I0704 00:13:02.875314   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.875324   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:02.875339   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:02.875399   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:02.910681   62670 cri.go:89] found id: ""
	I0704 00:13:02.910715   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.910727   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:02.910735   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:02.910797   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:02.948937   62670 cri.go:89] found id: ""
	I0704 00:13:02.948963   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.948972   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:02.948979   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:02.949039   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:02.984232   62670 cri.go:89] found id: ""
	I0704 00:13:02.984259   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.984267   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:02.984271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:02.984321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:03.021493   62670 cri.go:89] found id: ""
	I0704 00:13:03.021517   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.021525   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:03.021534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:03.021583   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:03.058829   62670 cri.go:89] found id: ""
	I0704 00:13:03.058860   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.058870   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:03.058877   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:03.058944   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:03.104195   62670 cri.go:89] found id: ""
	I0704 00:13:03.104225   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.104234   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:03.104242   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:03.104303   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:03.140913   62670 cri.go:89] found id: ""
	I0704 00:13:03.140941   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.140951   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:03.140961   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:03.140976   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:03.194901   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:03.194945   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:03.209366   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:03.209395   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:03.292892   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:03.292916   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:03.292934   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:03.369764   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:03.369800   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:05.917514   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:05.931529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:05.931592   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:05.976164   62670 cri.go:89] found id: ""
	I0704 00:13:05.976186   62670 logs.go:276] 0 containers: []
	W0704 00:13:05.976193   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:05.976199   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:05.976258   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:06.013568   62670 cri.go:89] found id: ""
	I0704 00:13:06.013593   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.013602   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:06.013609   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:06.013678   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:06.050848   62670 cri.go:89] found id: ""
	I0704 00:13:06.050886   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.050894   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:06.050900   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:06.050958   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:06.090919   62670 cri.go:89] found id: ""
	I0704 00:13:06.090945   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.090956   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:06.090967   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:06.091016   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:06.129210   62670 cri.go:89] found id: ""
	I0704 00:13:06.129237   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.129246   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:06.129252   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:06.129304   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:06.166777   62670 cri.go:89] found id: ""
	I0704 00:13:06.166801   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.166809   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:06.166817   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:06.166878   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:06.204900   62670 cri.go:89] found id: ""
	I0704 00:13:06.204929   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.204940   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:06.204947   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:06.205008   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:06.244196   62670 cri.go:89] found id: ""
	I0704 00:13:06.244274   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.244291   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:06.244301   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:06.244317   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:06.258834   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:06.258873   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:06.339126   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:06.339151   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:06.339165   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:06.416220   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:06.416265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:06.458188   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:06.458221   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:09.014816   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:09.028957   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:09.029021   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:09.072427   62670 cri.go:89] found id: ""
	I0704 00:13:09.072455   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.072465   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:09.072472   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:09.072529   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:09.109630   62670 cri.go:89] found id: ""
	I0704 00:13:09.109660   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.109669   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:09.109675   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:09.109724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:09.152873   62670 cri.go:89] found id: ""
	I0704 00:13:09.152901   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.152911   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:09.152918   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:09.152976   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:09.189390   62670 cri.go:89] found id: ""
	I0704 00:13:09.189421   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.189431   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:09.189446   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:09.189515   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:09.227335   62670 cri.go:89] found id: ""
	I0704 00:13:09.227364   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.227375   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:09.227382   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:09.227444   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:09.269157   62670 cri.go:89] found id: ""
	I0704 00:13:09.269189   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.269201   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:09.269208   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:09.269259   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:09.317222   62670 cri.go:89] found id: ""
	I0704 00:13:09.317249   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.317257   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:09.317263   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:09.317324   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:09.355578   62670 cri.go:89] found id: ""
	I0704 00:13:09.355610   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.355618   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:09.355626   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:09.355637   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:09.396279   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:09.396316   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:09.451358   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:09.451398   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:09.466565   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:09.466599   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:09.545001   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:09.545043   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:09.545066   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:12.124211   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:12.139131   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:12.139229   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:12.178690   62670 cri.go:89] found id: ""
	I0704 00:13:12.178719   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.178726   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:12.178732   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:12.178783   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:12.215470   62670 cri.go:89] found id: ""
	I0704 00:13:12.215511   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.215524   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:12.215533   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:12.215620   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:12.256615   62670 cri.go:89] found id: ""
	I0704 00:13:12.256667   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.256682   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:12.256688   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:12.256740   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:12.298606   62670 cri.go:89] found id: ""
	I0704 00:13:12.298631   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.298643   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:12.298650   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:12.298730   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:12.338152   62670 cri.go:89] found id: ""
	I0704 00:13:12.338180   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.338192   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:12.338199   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:12.338260   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:12.377003   62670 cri.go:89] found id: ""
	I0704 00:13:12.377029   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.377040   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:12.377046   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:12.377095   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:12.412239   62670 cri.go:89] found id: ""
	I0704 00:13:12.412268   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.412278   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:12.412285   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:12.412361   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:12.451054   62670 cri.go:89] found id: ""
	I0704 00:13:12.451079   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.451086   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:12.451094   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:12.451111   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:12.506178   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:12.506216   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:12.520563   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:12.520594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:12.594417   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:12.594439   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:12.594455   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:12.671131   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:12.671179   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:15.225840   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:15.239346   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:15.239420   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:15.276618   62670 cri.go:89] found id: ""
	I0704 00:13:15.276649   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.276661   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:15.276668   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:15.276751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:15.312585   62670 cri.go:89] found id: ""
	I0704 00:13:15.312615   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.312625   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:15.312632   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:15.312693   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:15.351354   62670 cri.go:89] found id: ""
	I0704 00:13:15.351382   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.351392   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:15.351399   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:15.351457   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:15.388660   62670 cri.go:89] found id: ""
	I0704 00:13:15.388690   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.388701   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:15.388708   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:15.388769   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:15.427524   62670 cri.go:89] found id: ""
	I0704 00:13:15.427553   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.427564   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:15.427572   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:15.427636   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:15.463703   62670 cri.go:89] found id: ""
	I0704 00:13:15.463737   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.463752   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:15.463761   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:15.463825   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:15.498640   62670 cri.go:89] found id: ""
	I0704 00:13:15.498664   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.498672   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:15.498676   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:15.498727   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:15.534655   62670 cri.go:89] found id: ""
	I0704 00:13:15.534679   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.534690   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:15.534700   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:15.534715   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:15.586051   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:15.586083   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:15.600930   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:15.600958   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:15.670393   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:15.670420   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:15.670435   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:15.749644   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:15.749678   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:18.298689   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:18.312408   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:18.312475   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:18.353509   62670 cri.go:89] found id: ""
	I0704 00:13:18.353538   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.353549   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:18.353557   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:18.353642   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:18.394463   62670 cri.go:89] found id: ""
	I0704 00:13:18.394486   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.394493   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:18.394498   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:18.394550   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:18.433254   62670 cri.go:89] found id: ""
	I0704 00:13:18.433288   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.433297   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:18.433303   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:18.433350   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:18.473369   62670 cri.go:89] found id: ""
	I0704 00:13:18.473395   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.473404   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:18.473414   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:18.473464   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:18.513401   62670 cri.go:89] found id: ""
	I0704 00:13:18.513436   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.513444   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:18.513450   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:18.513499   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:18.552462   62670 cri.go:89] found id: ""
	I0704 00:13:18.552493   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.552502   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:18.552511   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:18.552569   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:18.591368   62670 cri.go:89] found id: ""
	I0704 00:13:18.591389   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.591398   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:18.591406   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:18.591471   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:18.630381   62670 cri.go:89] found id: ""
	I0704 00:13:18.630413   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.630424   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:18.630435   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:18.630451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:18.684868   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:18.684902   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:18.700897   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:18.700921   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:18.794507   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:18.794524   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:18.794535   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:18.879415   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:18.879457   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:21.429432   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:21.443906   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:21.443978   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:21.482487   62670 cri.go:89] found id: ""
	I0704 00:13:21.482516   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.482528   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:21.482535   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:21.482583   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:21.519170   62670 cri.go:89] found id: ""
	I0704 00:13:21.519206   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.519214   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:21.519219   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:21.519265   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:21.558340   62670 cri.go:89] found id: ""
	I0704 00:13:21.558367   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.558390   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:21.558397   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:21.558465   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:21.595347   62670 cri.go:89] found id: ""
	I0704 00:13:21.595372   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.595382   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:21.595390   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:21.595464   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:21.634524   62670 cri.go:89] found id: ""
	I0704 00:13:21.634547   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.634555   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:21.634560   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:21.634622   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:21.672529   62670 cri.go:89] found id: ""
	I0704 00:13:21.672558   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.672566   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:21.672571   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:21.672617   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:21.711114   62670 cri.go:89] found id: ""
	I0704 00:13:21.711145   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.711156   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:21.711163   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:21.711248   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:21.747087   62670 cri.go:89] found id: ""
	I0704 00:13:21.747126   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.747135   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:21.747145   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:21.747162   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:21.832897   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:21.832919   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:21.832935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:21.915969   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:21.916008   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:21.957922   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:21.957950   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:22.009881   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:22.009925   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:24.526106   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:24.548431   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:24.548493   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:24.582887   62670 cri.go:89] found id: ""
	I0704 00:13:24.582925   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.582935   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:24.582940   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:24.582992   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:24.621339   62670 cri.go:89] found id: ""
	I0704 00:13:24.621365   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.621375   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:24.621380   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:24.621433   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:24.658124   62670 cri.go:89] found id: ""
	I0704 00:13:24.658152   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.658163   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:24.658170   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:24.658239   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:24.697509   62670 cri.go:89] found id: ""
	I0704 00:13:24.697539   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.697546   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:24.697552   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:24.697599   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:24.734523   62670 cri.go:89] found id: ""
	I0704 00:13:24.734547   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.734554   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:24.734560   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:24.734608   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:24.773351   62670 cri.go:89] found id: ""
	I0704 00:13:24.773375   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.773383   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:24.773389   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:24.773439   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:24.810855   62670 cri.go:89] found id: ""
	I0704 00:13:24.810888   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.810898   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:24.810905   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:24.810962   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:24.849989   62670 cri.go:89] found id: ""
	I0704 00:13:24.850017   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.850027   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:24.850039   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:24.850053   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:24.904308   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:24.904344   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:24.920143   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:24.920234   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:24.995138   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:24.995163   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:24.995177   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:25.070407   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:25.070449   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:27.611749   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:27.625292   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:27.625349   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:27.663239   62670 cri.go:89] found id: ""
	I0704 00:13:27.663263   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.663274   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:27.663281   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:27.663337   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:27.704354   62670 cri.go:89] found id: ""
	I0704 00:13:27.704378   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.704392   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:27.704399   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:27.704473   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:27.742585   62670 cri.go:89] found id: ""
	I0704 00:13:27.742619   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.742630   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:27.742637   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:27.742695   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:27.791650   62670 cri.go:89] found id: ""
	I0704 00:13:27.791678   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.791686   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:27.791691   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:27.791751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:27.832724   62670 cri.go:89] found id: ""
	I0704 00:13:27.832757   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.832770   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:27.832778   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:27.832865   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:27.875054   62670 cri.go:89] found id: ""
	I0704 00:13:27.875081   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.875089   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:27.875095   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:27.875142   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:27.909819   62670 cri.go:89] found id: ""
	I0704 00:13:27.909844   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.909851   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:27.909856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:27.909903   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:27.944882   62670 cri.go:89] found id: ""
	I0704 00:13:27.944907   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.944916   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:27.944923   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:27.944936   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:28.004233   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:28.004271   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:28.020800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:28.020834   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:28.096186   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:28.096213   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:28.096231   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:28.178611   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:28.178648   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:30.729354   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:30.744298   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:30.744361   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:30.783053   62670 cri.go:89] found id: ""
	I0704 00:13:30.783081   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.783089   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:30.783095   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:30.783151   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:30.820728   62670 cri.go:89] found id: ""
	I0704 00:13:30.820756   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.820765   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:30.820770   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:30.820834   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:30.858188   62670 cri.go:89] found id: ""
	I0704 00:13:30.858221   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.858234   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:30.858242   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:30.858307   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:30.899024   62670 cri.go:89] found id: ""
	I0704 00:13:30.899049   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.899056   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:30.899062   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:30.899109   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:30.942431   62670 cri.go:89] found id: ""
	I0704 00:13:30.942461   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.942471   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:30.942479   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:30.942534   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:30.995371   62670 cri.go:89] found id: ""
	I0704 00:13:30.995402   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.995417   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:30.995425   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:30.995486   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:31.043485   62670 cri.go:89] found id: ""
	I0704 00:13:31.043516   62670 logs.go:276] 0 containers: []
	W0704 00:13:31.043524   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:31.043529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:31.043576   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:31.082408   62670 cri.go:89] found id: ""
	I0704 00:13:31.082440   62670 logs.go:276] 0 containers: []
	W0704 00:13:31.082451   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:31.082463   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:31.082477   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:31.096800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:31.096824   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:31.169116   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:31.169142   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:31.169168   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:31.250199   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:31.250230   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:31.293706   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:31.293737   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:33.845361   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:33.859495   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:33.859586   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:33.900578   62670 cri.go:89] found id: ""
	I0704 00:13:33.900608   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.900616   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:33.900621   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:33.900668   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:33.934659   62670 cri.go:89] found id: ""
	I0704 00:13:33.934681   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.934688   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:33.934699   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:33.934745   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:33.977141   62670 cri.go:89] found id: ""
	I0704 00:13:33.977166   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.977174   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:33.977179   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:33.977230   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:34.013515   62670 cri.go:89] found id: ""
	I0704 00:13:34.013540   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.013548   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:34.013553   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:34.013600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:34.059663   62670 cri.go:89] found id: ""
	I0704 00:13:34.059690   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.059698   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:34.059703   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:34.059765   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:34.094002   62670 cri.go:89] found id: ""
	I0704 00:13:34.094030   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.094038   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:34.094044   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:34.094090   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:34.130278   62670 cri.go:89] found id: ""
	I0704 00:13:34.130310   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.130322   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:34.130330   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:34.130401   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:34.173531   62670 cri.go:89] found id: ""
	I0704 00:13:34.173557   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.173563   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:34.173570   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:34.173582   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:34.229273   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:34.229334   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:34.247043   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:34.247073   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:34.322892   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:34.322920   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:34.322935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:34.409230   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:34.409271   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:36.950627   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:36.969997   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:36.970063   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:37.027934   62670 cri.go:89] found id: ""
	I0704 00:13:37.027964   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.027975   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:37.027982   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:37.028069   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:37.067668   62670 cri.go:89] found id: ""
	I0704 00:13:37.067696   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.067706   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:37.067713   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:37.067774   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:37.104762   62670 cri.go:89] found id: ""
	I0704 00:13:37.104798   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.104806   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:37.104812   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:37.104882   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:37.143887   62670 cri.go:89] found id: ""
	I0704 00:13:37.143913   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.143921   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:37.143936   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:37.143999   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:37.182605   62670 cri.go:89] found id: ""
	I0704 00:13:37.182629   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.182636   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:37.182641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:37.182697   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:37.219884   62670 cri.go:89] found id: ""
	I0704 00:13:37.219914   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.219924   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:37.219931   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:37.219996   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:37.259122   62670 cri.go:89] found id: ""
	I0704 00:13:37.259146   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.259154   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:37.259159   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:37.259205   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:37.296218   62670 cri.go:89] found id: ""
	I0704 00:13:37.296255   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.296262   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:37.296270   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:37.296282   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:37.349495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:37.349540   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:37.364224   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:37.364255   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:37.437604   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:37.437627   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:37.437644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:37.524096   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:37.524150   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:40.067394   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:40.081728   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:40.081787   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:40.119102   62670 cri.go:89] found id: ""
	I0704 00:13:40.119129   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.119137   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:40.119142   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:40.119195   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:40.161432   62670 cri.go:89] found id: ""
	I0704 00:13:40.161468   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.161477   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:40.161483   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:40.161542   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:40.196487   62670 cri.go:89] found id: ""
	I0704 00:13:40.196526   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.196534   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:40.196540   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:40.196591   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:40.232218   62670 cri.go:89] found id: ""
	I0704 00:13:40.232245   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.232253   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:40.232259   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:40.232306   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:40.272962   62670 cri.go:89] found id: ""
	I0704 00:13:40.272995   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.273007   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:40.273016   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:40.273079   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:40.311622   62670 cri.go:89] found id: ""
	I0704 00:13:40.311651   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.311662   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:40.311671   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:40.311737   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:40.353486   62670 cri.go:89] found id: ""
	I0704 00:13:40.353516   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.353524   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:40.353529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:40.353576   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:40.391269   62670 cri.go:89] found id: ""
	I0704 00:13:40.391299   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.391308   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:40.391318   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:40.391330   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:40.445011   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:40.445048   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:40.458982   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:40.459010   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:40.533102   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:40.533127   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:40.533140   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:40.618189   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:40.618228   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:43.162352   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:43.177336   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:43.177419   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:43.221099   62670 cri.go:89] found id: ""
	I0704 00:13:43.221127   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.221137   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:43.221144   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:43.221211   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:43.268528   62670 cri.go:89] found id: ""
	I0704 00:13:43.268557   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.268568   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:43.268575   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:43.268638   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:43.304343   62670 cri.go:89] found id: ""
	I0704 00:13:43.304373   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.304384   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:43.304391   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:43.304459   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:43.346128   62670 cri.go:89] found id: ""
	I0704 00:13:43.346163   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.346179   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:43.346187   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:43.346251   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:43.392622   62670 cri.go:89] found id: ""
	I0704 00:13:43.392652   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.392662   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:43.392673   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:43.392764   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:43.438725   62670 cri.go:89] found id: ""
	I0704 00:13:43.438751   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.438760   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:43.438766   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:43.438812   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:43.480356   62670 cri.go:89] found id: ""
	I0704 00:13:43.480378   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.480386   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:43.480391   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:43.480441   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:43.516551   62670 cri.go:89] found id: ""
	I0704 00:13:43.516576   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.516583   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:43.516591   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:43.516606   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:43.567568   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:43.567604   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:43.583140   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:43.583173   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:43.658841   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:43.658870   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:43.658885   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:43.737379   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:43.737419   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:46.281048   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:46.295088   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:46.295158   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:46.333107   62670 cri.go:89] found id: ""
	I0704 00:13:46.333135   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.333168   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:46.333177   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:46.333263   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:46.376375   62670 cri.go:89] found id: ""
	I0704 00:13:46.376405   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.376415   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:46.376423   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:46.376486   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:46.410809   62670 cri.go:89] found id: ""
	I0704 00:13:46.410838   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.410848   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:46.410855   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:46.410911   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:46.453114   62670 cri.go:89] found id: ""
	I0704 00:13:46.453143   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.453156   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:46.453164   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:46.453229   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:46.491218   62670 cri.go:89] found id: ""
	I0704 00:13:46.491246   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.491255   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:46.491261   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:46.491320   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:46.528669   62670 cri.go:89] found id: ""
	I0704 00:13:46.528695   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.528706   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:46.528713   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:46.528777   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:46.564289   62670 cri.go:89] found id: ""
	I0704 00:13:46.564317   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.564327   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:46.564333   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:46.564384   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:46.600821   62670 cri.go:89] found id: ""
	I0704 00:13:46.600854   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.600864   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:46.600875   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:46.600888   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:46.653816   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:46.653850   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:46.668899   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:46.668927   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:46.751414   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:46.751434   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:46.751455   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:46.831455   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:46.831489   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:49.378856   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:49.393930   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:49.393988   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:49.435332   62670 cri.go:89] found id: ""
	I0704 00:13:49.435355   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.435362   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:49.435368   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:49.435415   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:49.476780   62670 cri.go:89] found id: ""
	I0704 00:13:49.476807   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.476815   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:49.476820   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:49.476868   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:49.519347   62670 cri.go:89] found id: ""
	I0704 00:13:49.519379   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.519389   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:49.519396   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:49.519522   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:49.557125   62670 cri.go:89] found id: ""
	I0704 00:13:49.557150   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.557159   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:49.557166   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:49.557225   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:49.592843   62670 cri.go:89] found id: ""
	I0704 00:13:49.592883   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.592894   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:49.592901   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:49.592966   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:49.629542   62670 cri.go:89] found id: ""
	I0704 00:13:49.629565   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.629572   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:49.629578   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:49.629630   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:49.667805   62670 cri.go:89] found id: ""
	I0704 00:13:49.667833   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.667844   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:49.667851   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:49.667928   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:49.704446   62670 cri.go:89] found id: ""
	I0704 00:13:49.704472   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.704480   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:49.704494   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:49.704506   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:49.718379   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:49.718403   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:49.791293   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:49.791314   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:49.791329   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:49.870370   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:49.870408   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:49.910508   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:49.910545   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:52.463614   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:52.478642   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:52.478714   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:52.519490   62670 cri.go:89] found id: ""
	I0704 00:13:52.519519   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.519529   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:52.519535   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:52.519686   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:52.561591   62670 cri.go:89] found id: ""
	I0704 00:13:52.561622   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.561632   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:52.561639   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:52.561713   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:52.599169   62670 cri.go:89] found id: ""
	I0704 00:13:52.599196   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.599206   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:52.599212   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:52.599270   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:52.636778   62670 cri.go:89] found id: ""
	I0704 00:13:52.636811   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.636821   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:52.636828   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:52.636893   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:52.675929   62670 cri.go:89] found id: ""
	I0704 00:13:52.675965   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.675977   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:52.675985   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:52.676081   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:52.713425   62670 cri.go:89] found id: ""
	I0704 00:13:52.713455   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.713466   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:52.713474   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:52.713541   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:52.750242   62670 cri.go:89] found id: ""
	I0704 00:13:52.750267   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.750278   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:52.750286   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:52.750342   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:52.793247   62670 cri.go:89] found id: ""
	I0704 00:13:52.793277   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.793288   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:52.793298   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:52.793315   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:52.807818   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:52.807970   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:52.886856   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:52.886883   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:52.886903   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:52.973510   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:52.973551   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:53.021185   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:53.021213   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:55.576364   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:55.590796   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:55.590858   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:55.628753   62670 cri.go:89] found id: ""
	I0704 00:13:55.628783   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.628793   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:55.628809   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:55.628870   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:55.667344   62670 cri.go:89] found id: ""
	I0704 00:13:55.667398   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.667411   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:55.667426   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:55.667496   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:55.705826   62670 cri.go:89] found id: ""
	I0704 00:13:55.705859   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.705870   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:55.705878   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:55.705942   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:55.743204   62670 cri.go:89] found id: ""
	I0704 00:13:55.743231   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.743238   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:55.743244   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:55.743304   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:55.784945   62670 cri.go:89] found id: ""
	I0704 00:13:55.784978   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.784987   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:55.784993   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:55.785044   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:55.825266   62670 cri.go:89] found id: ""
	I0704 00:13:55.825293   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.825304   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:55.825322   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:55.825385   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:55.862235   62670 cri.go:89] found id: ""
	I0704 00:13:55.862269   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.862276   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:55.862282   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:55.862337   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:55.901698   62670 cri.go:89] found id: ""
	I0704 00:13:55.901726   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.901736   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:55.901747   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:55.901762   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:55.955322   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:55.955361   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:55.973650   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:55.973689   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:56.049600   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:56.049624   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:56.049640   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:56.133690   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:56.133731   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:58.678014   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:58.692780   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:58.692846   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:58.730628   62670 cri.go:89] found id: ""
	I0704 00:13:58.730654   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.730664   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:58.730671   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:58.730732   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:58.772761   62670 cri.go:89] found id: ""
	I0704 00:13:58.772789   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.772800   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:58.772806   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:58.772871   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:58.809591   62670 cri.go:89] found id: ""
	I0704 00:13:58.809623   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.809637   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:58.809644   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:58.809708   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:58.848596   62670 cri.go:89] found id: ""
	I0704 00:13:58.848627   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.848638   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:58.848646   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:58.848705   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:58.888285   62670 cri.go:89] found id: ""
	I0704 00:13:58.888311   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.888318   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:58.888323   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:58.888373   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:58.924042   62670 cri.go:89] found id: ""
	I0704 00:13:58.924065   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.924073   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:58.924079   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:58.924132   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:58.963473   62670 cri.go:89] found id: ""
	I0704 00:13:58.963500   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.963510   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:58.963516   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:58.963581   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:58.998757   62670 cri.go:89] found id: ""
	I0704 00:13:58.998788   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.998798   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:58.998808   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:58.998822   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:59.013844   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:59.013871   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:59.085847   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:59.085869   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:59.085882   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:59.174056   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:59.174087   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:59.219984   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:59.220011   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:01.774436   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:01.790044   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:01.790103   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:01.830337   62670 cri.go:89] found id: ""
	I0704 00:14:01.830366   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.830376   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:01.830383   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:01.830452   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:01.866704   62670 cri.go:89] found id: ""
	I0704 00:14:01.866731   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.866740   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:01.866746   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:01.866796   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:01.906702   62670 cri.go:89] found id: ""
	I0704 00:14:01.906737   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.906748   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:01.906756   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:01.906812   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:01.943348   62670 cri.go:89] found id: ""
	I0704 00:14:01.943381   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.943392   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:01.943400   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:01.943461   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:01.984096   62670 cri.go:89] found id: ""
	I0704 00:14:01.984123   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.984131   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:01.984136   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:01.984182   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:02.021618   62670 cri.go:89] found id: ""
	I0704 00:14:02.021649   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.021659   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:02.021666   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:02.021726   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:02.058976   62670 cri.go:89] found id: ""
	I0704 00:14:02.059000   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.059008   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:02.059013   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:02.059064   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:02.097222   62670 cri.go:89] found id: ""
	I0704 00:14:02.097251   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.097258   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:02.097278   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:02.097302   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:02.183349   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:02.183391   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:02.226898   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:02.226928   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:02.286978   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:02.287016   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:02.301361   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:02.301393   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:02.375663   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:04.876515   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:04.891254   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:04.891324   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:04.931465   62670 cri.go:89] found id: ""
	I0704 00:14:04.931488   62670 logs.go:276] 0 containers: []
	W0704 00:14:04.931496   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:04.931501   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:04.931549   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:04.969027   62670 cri.go:89] found id: ""
	I0704 00:14:04.969055   62670 logs.go:276] 0 containers: []
	W0704 00:14:04.969063   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:04.969068   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:04.969122   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:05.006380   62670 cri.go:89] found id: ""
	I0704 00:14:05.006407   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.006423   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:05.006430   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:05.006494   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:05.043050   62670 cri.go:89] found id: ""
	I0704 00:14:05.043090   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.043105   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:05.043113   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:05.043195   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:05.081549   62670 cri.go:89] found id: ""
	I0704 00:14:05.081575   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.081583   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:05.081588   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:05.081664   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:05.126673   62670 cri.go:89] found id: ""
	I0704 00:14:05.126693   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.126700   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:05.126706   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:05.126751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:05.166832   62670 cri.go:89] found id: ""
	I0704 00:14:05.166856   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.166864   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:05.166872   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:05.166920   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:05.205906   62670 cri.go:89] found id: ""
	I0704 00:14:05.205934   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.205946   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:05.205957   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:05.205973   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:05.260955   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:05.260998   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:05.295937   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:05.295965   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:05.383161   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:05.383188   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:05.383202   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:05.465055   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:05.465100   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:08.007745   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:08.021065   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:08.021134   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:08.061808   62670 cri.go:89] found id: ""
	I0704 00:14:08.061838   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.061848   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:08.061854   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:08.061914   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:08.100542   62670 cri.go:89] found id: ""
	I0704 00:14:08.100573   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.100584   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:08.100592   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:08.100657   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:08.137335   62670 cri.go:89] found id: ""
	I0704 00:14:08.137369   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.137379   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:08.137385   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:08.137455   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:08.177087   62670 cri.go:89] found id: ""
	I0704 00:14:08.177116   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.177124   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:08.177129   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:08.177191   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:08.212652   62670 cri.go:89] found id: ""
	I0704 00:14:08.212686   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.212695   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:08.212701   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:08.212751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:08.247717   62670 cri.go:89] found id: ""
	I0704 00:14:08.247737   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.247745   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:08.247750   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:08.247805   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:08.285525   62670 cri.go:89] found id: ""
	I0704 00:14:08.285556   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.285568   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:08.285576   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:08.285637   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:08.325978   62670 cri.go:89] found id: ""
	I0704 00:14:08.326007   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.326017   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:08.326027   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:08.326042   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:08.382407   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:08.382440   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:08.397945   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:08.397979   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:08.468650   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:08.468676   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:08.468691   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:08.543581   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:08.543615   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:11.085683   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:11.102003   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:11.102093   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:11.142561   62670 cri.go:89] found id: ""
	I0704 00:14:11.142589   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.142597   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:11.142602   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:11.142671   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:11.180087   62670 cri.go:89] found id: ""
	I0704 00:14:11.180110   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.180118   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:11.180123   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:11.180202   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:11.220123   62670 cri.go:89] found id: ""
	I0704 00:14:11.220147   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.220173   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:11.220182   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:11.220239   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:11.260418   62670 cri.go:89] found id: ""
	I0704 00:14:11.260445   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.260455   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:11.260462   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:11.260521   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:11.297923   62670 cri.go:89] found id: ""
	I0704 00:14:11.297976   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.297989   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:11.297999   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:11.298083   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:11.335903   62670 cri.go:89] found id: ""
	I0704 00:14:11.335934   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.335945   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:11.335954   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:11.336020   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:11.371965   62670 cri.go:89] found id: ""
	I0704 00:14:11.371997   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.372007   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:11.372013   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:11.372075   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:11.409129   62670 cri.go:89] found id: ""
	I0704 00:14:11.409159   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.409170   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:11.409181   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:11.409194   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:11.464994   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:11.465032   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:11.480084   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:11.480112   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:11.564533   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:11.564560   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:11.564574   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:11.645033   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:11.645068   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:14.195211   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:14.209606   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:14.209660   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:14.252041   62670 cri.go:89] found id: ""
	I0704 00:14:14.252066   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.252081   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:14.252089   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:14.252149   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:14.290619   62670 cri.go:89] found id: ""
	I0704 00:14:14.290647   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.290655   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:14.290660   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:14.290717   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:14.328731   62670 cri.go:89] found id: ""
	I0704 00:14:14.328762   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.328773   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:14.328780   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:14.328842   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:14.370794   62670 cri.go:89] found id: ""
	I0704 00:14:14.370825   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.370835   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:14.370842   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:14.370904   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:14.406474   62670 cri.go:89] found id: ""
	I0704 00:14:14.406505   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.406516   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:14.406523   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:14.406582   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:14.442515   62670 cri.go:89] found id: ""
	I0704 00:14:14.442547   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.442558   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:14.442566   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:14.442624   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:14.480798   62670 cri.go:89] found id: ""
	I0704 00:14:14.480827   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.480838   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:14.480844   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:14.480904   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:14.518187   62670 cri.go:89] found id: ""
	I0704 00:14:14.518210   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.518217   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:14.518225   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:14.518236   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:14.572028   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:14.572060   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:14.586614   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:14.586641   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:14.659339   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:14.659362   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:14.659375   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:14.743802   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:14.743839   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:17.288666   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:17.304531   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:17.304600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:17.348705   62670 cri.go:89] found id: ""
	I0704 00:14:17.348730   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.348738   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:17.348749   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:17.348798   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:17.387821   62670 cri.go:89] found id: ""
	I0704 00:14:17.387844   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.387852   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:17.387858   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:17.387934   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:17.425442   62670 cri.go:89] found id: ""
	I0704 00:14:17.425470   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.425480   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:17.425487   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:17.425545   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:17.471216   62670 cri.go:89] found id: ""
	I0704 00:14:17.471243   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.471255   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:17.471262   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:17.471321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:17.520905   62670 cri.go:89] found id: ""
	I0704 00:14:17.520935   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.520942   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:17.520947   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:17.520997   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:17.577627   62670 cri.go:89] found id: ""
	I0704 00:14:17.577648   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.577655   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:17.577661   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:17.577715   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:17.619018   62670 cri.go:89] found id: ""
	I0704 00:14:17.619046   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.619054   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:17.619061   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:17.619124   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:17.664993   62670 cri.go:89] found id: ""
	I0704 00:14:17.665020   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.665029   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:17.665037   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:17.665049   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:17.743823   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:17.743845   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:17.743857   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:17.821339   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:17.821371   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:17.866189   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:17.866226   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:17.919854   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:17.919903   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:20.435448   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:20.450556   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:20.450617   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:20.491980   62670 cri.go:89] found id: ""
	I0704 00:14:20.492010   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.492018   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:20.492023   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:20.492071   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:20.532791   62670 cri.go:89] found id: ""
	I0704 00:14:20.532820   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.532829   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:20.532836   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:20.532892   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:20.569604   62670 cri.go:89] found id: ""
	I0704 00:14:20.569628   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.569635   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:20.569641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:20.569688   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:20.610852   62670 cri.go:89] found id: ""
	I0704 00:14:20.610879   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.610887   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:20.610893   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:20.610950   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:20.648891   62670 cri.go:89] found id: ""
	I0704 00:14:20.648912   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.648920   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:20.648925   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:20.648984   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:20.690273   62670 cri.go:89] found id: ""
	I0704 00:14:20.690304   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.690315   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:20.690323   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:20.690381   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:20.725365   62670 cri.go:89] found id: ""
	I0704 00:14:20.725390   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.725398   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:20.725403   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:20.725478   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:20.768530   62670 cri.go:89] found id: ""
	I0704 00:14:20.768559   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.768569   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:20.768579   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:20.768593   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:20.822896   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:20.822932   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:20.838881   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:20.838912   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:20.921516   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:20.921546   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:20.921560   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:20.999517   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:20.999553   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:23.545947   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:23.560315   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:23.560397   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:23.602540   62670 cri.go:89] found id: ""
	I0704 00:14:23.602583   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.602596   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:23.602604   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:23.602664   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:23.639529   62670 cri.go:89] found id: ""
	I0704 00:14:23.639560   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.639571   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:23.639579   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:23.639644   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:23.687334   62670 cri.go:89] found id: ""
	I0704 00:14:23.687363   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.687374   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:23.687381   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:23.687450   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:23.728388   62670 cri.go:89] found id: ""
	I0704 00:14:23.728419   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.728427   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:23.728434   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:23.728484   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:23.769903   62670 cri.go:89] found id: ""
	I0704 00:14:23.769933   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.769944   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:23.769956   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:23.770016   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:23.810485   62670 cri.go:89] found id: ""
	I0704 00:14:23.810518   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.810529   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:23.810536   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:23.810621   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:23.854534   62670 cri.go:89] found id: ""
	I0704 00:14:23.854571   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.854582   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:23.854589   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:23.854647   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:23.892229   62670 cri.go:89] found id: ""
	I0704 00:14:23.892257   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.892266   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:23.892278   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:23.892291   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:23.944758   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:23.944793   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:23.959115   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:23.959152   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:24.035480   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:24.035501   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:24.035513   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:24.113401   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:24.113447   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:26.655506   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:26.669883   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:26.669964   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:26.705899   62670 cri.go:89] found id: ""
	I0704 00:14:26.705926   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.705934   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:26.705940   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:26.705997   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:26.742991   62670 cri.go:89] found id: ""
	I0704 00:14:26.743016   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.743025   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:26.743031   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:26.743090   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:26.781650   62670 cri.go:89] found id: ""
	I0704 00:14:26.781678   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.781693   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:26.781700   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:26.781760   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:26.816879   62670 cri.go:89] found id: ""
	I0704 00:14:26.816902   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.816909   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:26.816914   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:26.816957   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:26.854271   62670 cri.go:89] found id: ""
	I0704 00:14:26.854301   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.854316   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:26.854324   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:26.854384   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:26.892771   62670 cri.go:89] found id: ""
	I0704 00:14:26.892802   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.892813   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:26.892821   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:26.892880   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:26.931820   62670 cri.go:89] found id: ""
	I0704 00:14:26.931849   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.931859   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:26.931865   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:26.931947   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:26.967633   62670 cri.go:89] found id: ""
	I0704 00:14:26.967659   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.967669   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:26.967679   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:26.967700   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:26.983916   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:26.983951   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:27.063412   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:27.063436   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:27.063451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:27.147005   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:27.147044   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:27.189732   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:27.189759   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:29.747294   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:29.762194   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:29.762272   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:29.799103   62670 cri.go:89] found id: ""
	I0704 00:14:29.799132   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.799142   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:29.799149   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:29.799215   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:29.843373   62670 cri.go:89] found id: ""
	I0704 00:14:29.843399   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.843407   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:29.843412   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:29.843474   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:29.880622   62670 cri.go:89] found id: ""
	I0704 00:14:29.880650   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.880660   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:29.880667   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:29.880724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:29.917560   62670 cri.go:89] found id: ""
	I0704 00:14:29.917590   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.917599   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:29.917605   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:29.917656   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:29.954983   62670 cri.go:89] found id: ""
	I0704 00:14:29.955006   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.955013   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:29.955018   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:29.955068   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:29.991784   62670 cri.go:89] found id: ""
	I0704 00:14:29.991811   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.991819   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:29.991824   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:29.991870   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:30.031174   62670 cri.go:89] found id: ""
	I0704 00:14:30.031203   62670 logs.go:276] 0 containers: []
	W0704 00:14:30.031210   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:30.031218   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:30.031268   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:30.069502   62670 cri.go:89] found id: ""
	I0704 00:14:30.069533   62670 logs.go:276] 0 containers: []
	W0704 00:14:30.069542   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:30.069552   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:30.069567   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:30.111185   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:30.111213   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:30.167419   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:30.167456   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:30.181876   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:30.181908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:30.255378   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:30.255407   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:30.255426   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:32.837655   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:32.853085   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:32.853150   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:32.898490   62670 cri.go:89] found id: ""
	I0704 00:14:32.898520   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.898531   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:32.898540   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:32.898626   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:32.946293   62670 cri.go:89] found id: ""
	I0704 00:14:32.946326   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.946336   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:32.946343   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:32.946402   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:32.983499   62670 cri.go:89] found id: ""
	I0704 00:14:32.983529   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.983540   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:32.983548   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:32.983610   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:33.022340   62670 cri.go:89] found id: ""
	I0704 00:14:33.022362   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.022370   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:33.022375   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:33.022420   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:33.066921   62670 cri.go:89] found id: ""
	I0704 00:14:33.066946   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.066956   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:33.066963   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:33.067024   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:33.116317   62670 cri.go:89] found id: ""
	I0704 00:14:33.116340   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.116348   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:33.116354   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:33.116416   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:33.153301   62670 cri.go:89] found id: ""
	I0704 00:14:33.153332   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.153343   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:33.153350   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:33.153411   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:33.190851   62670 cri.go:89] found id: ""
	I0704 00:14:33.190884   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.190896   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:33.190905   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:33.190917   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:33.248253   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:33.248288   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:33.263593   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:33.263620   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:33.339975   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:33.340000   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:33.340018   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:33.423768   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:33.423814   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:35.969547   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:35.984139   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:35.984219   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:36.028221   62670 cri.go:89] found id: ""
	I0704 00:14:36.028251   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.028263   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:36.028270   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:36.028330   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:36.067331   62670 cri.go:89] found id: ""
	I0704 00:14:36.067362   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.067370   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:36.067375   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:36.067437   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:36.105498   62670 cri.go:89] found id: ""
	I0704 00:14:36.105531   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.105543   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:36.105552   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:36.105618   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:36.144536   62670 cri.go:89] found id: ""
	I0704 00:14:36.144565   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.144576   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:36.144584   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:36.144652   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:36.184010   62670 cri.go:89] found id: ""
	I0704 00:14:36.184035   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.184048   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:36.184053   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:36.184099   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:36.221730   62670 cri.go:89] found id: ""
	I0704 00:14:36.221781   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.221790   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:36.221795   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:36.221843   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:36.261907   62670 cri.go:89] found id: ""
	I0704 00:14:36.261940   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.261952   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:36.261959   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:36.262009   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:36.296878   62670 cri.go:89] found id: ""
	I0704 00:14:36.296899   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.296906   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:36.296915   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:36.296926   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:36.350226   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:36.350265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:36.364632   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:36.364663   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:36.446351   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:36.446382   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:36.446400   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:36.535752   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:36.535802   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:39.079686   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:39.094225   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:39.094291   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:39.139521   62670 cri.go:89] found id: ""
	I0704 00:14:39.139551   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.139563   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:39.139572   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:39.139637   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:39.182411   62670 cri.go:89] found id: ""
	I0704 00:14:39.182439   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.182447   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:39.182453   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:39.182505   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:39.224135   62670 cri.go:89] found id: ""
	I0704 00:14:39.224158   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.224170   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:39.224175   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:39.224237   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:39.264800   62670 cri.go:89] found id: ""
	I0704 00:14:39.264829   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.264839   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:39.264847   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:39.264910   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:39.309072   62670 cri.go:89] found id: ""
	I0704 00:14:39.309102   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.309113   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:39.309121   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:39.309181   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:39.349790   62670 cri.go:89] found id: ""
	I0704 00:14:39.349818   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.349828   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:39.349835   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:39.349895   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:39.387062   62670 cri.go:89] found id: ""
	I0704 00:14:39.387093   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.387105   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:39.387112   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:39.387164   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:39.427503   62670 cri.go:89] found id: ""
	I0704 00:14:39.427530   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.427538   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:39.427546   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:39.427558   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:39.442049   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:39.442076   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:39.525799   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:39.525824   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:39.525840   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:39.602646   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:39.602679   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:39.645739   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:39.645772   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:42.201986   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:42.216166   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:42.216236   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:42.253124   62670 cri.go:89] found id: ""
	I0704 00:14:42.253152   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.253167   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:42.253174   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:42.253231   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:42.293398   62670 cri.go:89] found id: ""
	I0704 00:14:42.293422   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.293430   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:42.293436   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:42.293488   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:42.334382   62670 cri.go:89] found id: ""
	I0704 00:14:42.334412   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.334423   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:42.334430   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:42.334488   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:42.374792   62670 cri.go:89] found id: ""
	I0704 00:14:42.374820   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.374832   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:42.374838   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:42.374889   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:42.416220   62670 cri.go:89] found id: ""
	I0704 00:14:42.416251   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.416263   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:42.416271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:42.416331   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:42.462923   62670 cri.go:89] found id: ""
	I0704 00:14:42.462955   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.462966   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:42.462974   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:42.463043   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:42.503410   62670 cri.go:89] found id: ""
	I0704 00:14:42.503442   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.503452   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:42.503460   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:42.503528   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:42.542599   62670 cri.go:89] found id: ""
	I0704 00:14:42.542623   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.542632   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:42.542639   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:42.542652   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:42.622303   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:42.622328   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:42.622347   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:42.703629   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:42.703666   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:42.747762   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:42.747793   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:42.803506   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:42.803549   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:45.320238   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:45.334630   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:45.334692   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:45.376760   62670 cri.go:89] found id: ""
	I0704 00:14:45.376785   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.376793   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:45.376797   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:45.376882   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:45.414165   62670 cri.go:89] found id: ""
	I0704 00:14:45.414197   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.414208   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:45.414216   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:45.414278   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:45.451469   62670 cri.go:89] found id: ""
	I0704 00:14:45.451496   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.451504   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:45.451509   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:45.451558   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:45.487994   62670 cri.go:89] found id: ""
	I0704 00:14:45.488025   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.488037   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:45.488051   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:45.488110   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:45.529430   62670 cri.go:89] found id: ""
	I0704 00:14:45.529455   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.529463   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:45.529469   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:45.529520   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:45.571848   62670 cri.go:89] found id: ""
	I0704 00:14:45.571897   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.571909   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:45.571921   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:45.571994   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:45.607804   62670 cri.go:89] found id: ""
	I0704 00:14:45.607828   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.607835   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:45.607840   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:45.607908   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:45.644183   62670 cri.go:89] found id: ""
	I0704 00:14:45.644211   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.644219   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:45.644227   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:45.644240   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:45.727677   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:45.727717   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:45.767528   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:45.767554   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:45.835243   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:45.835285   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:45.849921   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:45.849957   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:45.928404   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:48.428750   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:48.442617   62670 kubeadm.go:591] duration metric: took 4m1.823242959s to restartPrimaryControlPlane
	W0704 00:14:48.442701   62670 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0704 00:14:48.442735   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:14:51.574916   62670 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.132142314s)
	I0704 00:14:51.575001   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:14:51.593744   62670 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:14:51.607429   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:14:51.620071   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:14:51.620097   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:14:51.620151   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:14:51.633472   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:14:51.633547   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:14:51.647551   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:14:51.658795   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:14:51.658871   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:14:51.671580   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:14:51.682217   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:14:51.682291   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:14:51.693874   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:14:51.705614   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:14:51.705697   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:14:51.720386   62670 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:14:51.810530   62670 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0704 00:14:51.810597   62670 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:14:51.968629   62670 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:14:51.968735   62670 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:14:51.968851   62670 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:14:52.188159   62670 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:14:52.190231   62670 out.go:204]   - Generating certificates and keys ...
	I0704 00:14:52.192011   62670 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:14:52.192101   62670 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:14:52.192206   62670 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:14:52.192311   62670 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:14:52.192412   62670 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:14:52.192488   62670 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:14:52.192573   62670 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:14:52.192648   62670 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:14:52.192747   62670 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:14:52.193086   62670 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:14:52.193249   62670 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:14:52.193335   62670 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:14:52.325727   62670 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:14:52.485153   62670 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:14:52.676389   62670 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:14:52.990595   62670 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:14:53.007051   62670 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:14:53.008346   62670 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:14:53.008434   62670 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:14:53.160272   62670 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:14:53.162449   62670 out.go:204]   - Booting up control plane ...
	I0704 00:14:53.162586   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:14:53.177983   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:14:53.179996   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:14:53.180911   62670 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:14:53.183085   62670 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0704 00:15:33.184160   62670 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0704 00:15:33.184894   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:33.185105   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:15:38.185821   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:38.186070   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:15:48.186610   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:48.186866   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:16:08.187652   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:16:08.187954   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:16:48.189618   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:16:48.189879   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:16:48.189893   62670 kubeadm.go:309] 
	I0704 00:16:48.189956   62670 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0704 00:16:48.190000   62670 kubeadm.go:309] 		timed out waiting for the condition
	I0704 00:16:48.190006   62670 kubeadm.go:309] 
	I0704 00:16:48.190074   62670 kubeadm.go:309] 	This error is likely caused by:
	I0704 00:16:48.190142   62670 kubeadm.go:309] 		- The kubelet is not running
	I0704 00:16:48.190322   62670 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0704 00:16:48.190356   62670 kubeadm.go:309] 
	I0704 00:16:48.190487   62670 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0704 00:16:48.190540   62670 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0704 00:16:48.190594   62670 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0704 00:16:48.190603   62670 kubeadm.go:309] 
	I0704 00:16:48.190729   62670 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0704 00:16:48.190826   62670 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0704 00:16:48.190837   62670 kubeadm.go:309] 
	I0704 00:16:48.190930   62670 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0704 00:16:48.191004   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0704 00:16:48.191088   62670 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0704 00:16:48.191183   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0704 00:16:48.191195   62670 kubeadm.go:309] 
	I0704 00:16:48.192106   62670 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:16:48.192225   62670 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0704 00:16:48.192330   62670 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0704 00:16:48.192450   62670 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0704 00:16:48.192496   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:16:48.668935   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:16:48.685425   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:16:48.697089   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:16:48.697111   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:16:48.697182   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:16:48.708605   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:16:48.708681   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:16:48.720739   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:16:48.733032   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:16:48.733106   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:16:48.745632   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:16:48.756211   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:16:48.756285   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:16:48.768006   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:16:48.779384   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:16:48.779455   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:16:48.791913   62670 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:16:48.873701   62670 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0704 00:16:48.873789   62670 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:16:49.029961   62670 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:16:49.030077   62670 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:16:49.030191   62670 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:16:49.228954   62670 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:16:49.231477   62670 out.go:204]   - Generating certificates and keys ...
	I0704 00:16:49.231594   62670 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:16:49.231678   62670 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:16:49.231783   62670 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:16:49.231855   62670 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:16:49.231990   62670 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:16:49.232082   62670 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:16:49.232167   62670 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:16:49.232930   62670 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:16:49.234476   62670 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:16:49.235558   62670 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:16:49.235951   62670 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:16:49.236048   62670 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:16:49.418256   62670 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:16:49.476591   62670 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:16:49.586596   62670 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:16:49.856731   62670 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:16:49.878852   62670 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:16:49.885877   62670 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:16:49.885948   62670 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:16:50.048252   62670 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:16:50.050273   62670 out.go:204]   - Booting up control plane ...
	I0704 00:16:50.050428   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:16:50.055514   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:16:50.056609   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:16:50.057448   62670 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:16:50.060021   62670 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0704 00:17:30.062515   62670 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0704 00:17:30.062908   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:30.063105   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:17:35.063408   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:35.063668   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:17:45.064118   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:45.064391   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:05.065047   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:18:05.065263   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:45.064458   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:18:45.064676   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:45.064703   62670 kubeadm.go:309] 
	I0704 00:18:45.064756   62670 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0704 00:18:45.064825   62670 kubeadm.go:309] 		timed out waiting for the condition
	I0704 00:18:45.064842   62670 kubeadm.go:309] 
	I0704 00:18:45.064918   62670 kubeadm.go:309] 	This error is likely caused by:
	I0704 00:18:45.064954   62670 kubeadm.go:309] 		- The kubelet is not running
	I0704 00:18:45.065086   62670 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0704 00:18:45.065110   62670 kubeadm.go:309] 
	I0704 00:18:45.065271   62670 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0704 00:18:45.065326   62670 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0704 00:18:45.065392   62670 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0704 00:18:45.065401   62670 kubeadm.go:309] 
	I0704 00:18:45.065550   62670 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0704 00:18:45.065631   62670 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0704 00:18:45.065638   62670 kubeadm.go:309] 
	I0704 00:18:45.065734   62670 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0704 00:18:45.065807   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0704 00:18:45.065871   62670 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0704 00:18:45.065939   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0704 00:18:45.065947   62670 kubeadm.go:309] 
	I0704 00:18:45.066520   62670 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:18:45.066601   62670 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0704 00:18:45.066689   62670 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0704 00:18:45.066780   62670 kubeadm.go:393] duration metric: took 7m58.506286251s to StartCluster
	I0704 00:18:45.066839   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:18:45.066927   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:18:45.120297   62670 cri.go:89] found id: ""
	I0704 00:18:45.120326   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.120334   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:18:45.120339   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:18:45.120402   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:18:45.158038   62670 cri.go:89] found id: ""
	I0704 00:18:45.158064   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.158074   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:18:45.158082   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:18:45.158138   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:18:45.195937   62670 cri.go:89] found id: ""
	I0704 00:18:45.195967   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.195978   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:18:45.195985   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:18:45.196043   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:18:45.236822   62670 cri.go:89] found id: ""
	I0704 00:18:45.236842   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.236850   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:18:45.236856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:18:45.236901   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:18:45.277811   62670 cri.go:89] found id: ""
	I0704 00:18:45.277840   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.277848   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:18:45.277854   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:18:45.277915   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:18:45.318942   62670 cri.go:89] found id: ""
	I0704 00:18:45.318972   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.318984   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:18:45.318994   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:18:45.319058   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:18:45.360745   62670 cri.go:89] found id: ""
	I0704 00:18:45.360772   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.360780   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:18:45.360785   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:18:45.360843   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:18:45.405336   62670 cri.go:89] found id: ""
	I0704 00:18:45.405359   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.405370   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:18:45.405381   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:18:45.405400   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:18:45.514196   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:18:45.514237   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:18:45.560207   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:18:45.560235   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:18:45.615066   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:18:45.615113   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:18:45.630701   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:18:45.630731   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:18:45.725249   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0704 00:18:45.725281   62670 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0704 00:18:45.725315   62670 out.go:239] * 
	W0704 00:18:45.725360   62670 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0704 00:18:45.725383   62670 out.go:239] * 
	W0704 00:18:45.726603   62670 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0704 00:18:45.729981   62670 out.go:177] 
	W0704 00:18:45.731124   62670 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0704 00:18:45.731169   62670 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0704 00:18:45.731186   62670 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0704 00:18:45.732514   62670 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-979033 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-979033 -n old-k8s-version-979033
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-979033 -n old-k8s-version-979033: exit status 2 (230.419288ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-979033 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-979033 logs -n 25: (1.725929416s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-768841 -- sudo                         | cert-options-768841          | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:00 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-768841                                 | cert-options-768841          | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:00 UTC |
	| start   | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-652205                           | kubernetes-upgrade-652205    | jenkins | v1.33.1 | 04 Jul 24 00:01 UTC | 04 Jul 24 00:01 UTC |
	| start   | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:01 UTC | 04 Jul 24 00:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-979438                              | cert-expiration-979438       | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-979438                              | cert-expiration-979438       | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-029653 | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | disable-driver-mounts-029653                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:04 UTC |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-317739             | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-687975            | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:03 UTC | 04 Jul 24 00:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-995404  | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC | 04 Jul 24 00:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC |                     |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-979033        | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-317739                  | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC | 04 Jul 24 00:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-687975                 | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC | 04 Jul 24 00:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-979033                              | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC | 04 Jul 24 00:06 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-979033             | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC | 04 Jul 24 00:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-979033                              | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-995404       | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:07 UTC | 04 Jul 24 00:15 UTC |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/04 00:07:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
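
The header above documents the klog-style prefix used by every entry that follows: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. As a rough illustration only (not part of minikube or the test harness), a small Go sketch that splits such a line into its fields; the regular expression is an assumption derived from that format string:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the prefix documented in the log header:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	sample := "I0704 00:07:02.474140   62905 out.go:291] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("not a klog-formatted line")
		return
	}
	fmt.Printf("severity=%s month=%s day=%s time=%s pid=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
}
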
	I0704 00:07:02.474140   62905 out.go:291] Setting OutFile to fd 1 ...
	I0704 00:07:02.474416   62905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:07:02.474427   62905 out.go:304] Setting ErrFile to fd 2...
	I0704 00:07:02.474431   62905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:07:02.474642   62905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0704 00:07:02.475219   62905 out.go:298] Setting JSON to false
	I0704 00:07:02.476307   62905 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6562,"bootTime":1720045060,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0704 00:07:02.476381   62905 start.go:139] virtualization: kvm guest
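
The hostinfo entry above is a single JSON object describing the build agent. A minimal, hypothetical sketch of decoding it in Go (the struct below covers only a subset of the fields shown and is not minikube's own type):

package main

import (
	"encoding/json"
	"fmt"
)

// HostInfo mirrors a few of the JSON fields printed in the hostinfo log line above.
type HostInfo struct {
	Hostname      string `json:"hostname"`
	Uptime        uint64 `json:"uptime"`
	Procs         uint64 `json:"procs"`
	OS            string `json:"os"`
	Platform      string `json:"platform"`
	KernelVersion string `json:"kernelVersion"`
	KernelArch    string `json:"kernelArch"`
}

func main() {
	raw := `{"hostname":"ubuntu-20-agent-10","uptime":6562,"procs":201,"os":"linux","platform":"ubuntu","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64"}`
	var hi HostInfo
	if err := json.Unmarshal([]byte(raw), &hi); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s/%s kernel %s, uptime %ds\n",
		hi.Hostname, hi.OS, hi.KernelArch, hi.KernelVersion, hi.Uptime)
}
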
	I0704 00:07:02.478637   62905 out.go:177] * [default-k8s-diff-port-995404] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0704 00:07:02.480018   62905 notify.go:220] Checking for updates...
	I0704 00:07:02.480039   62905 out.go:177]   - MINIKUBE_LOCATION=18998
	I0704 00:07:02.481260   62905 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0704 00:07:02.482587   62905 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:07:02.483820   62905 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0704 00:07:02.484969   62905 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0704 00:07:02.486122   62905 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0704 00:07:02.487811   62905 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:07:02.488453   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:07:02.488538   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:07:02.503924   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0704 00:07:02.504316   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:07:02.504904   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:07:02.504924   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:07:02.505253   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:07:02.505457   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:07:02.505724   62905 driver.go:392] Setting default libvirt URI to qemu:///system
	I0704 00:07:02.506039   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:07:02.506081   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:07:02.521645   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0704 00:07:02.522115   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:07:02.522596   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:07:02.522618   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:07:02.522945   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:07:02.523144   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:07:02.557351   62905 out.go:177] * Using the kvm2 driver based on existing profile
	I0704 00:07:02.558600   62905 start.go:297] selected driver: kvm2
	I0704 00:07:02.558620   62905 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:07:02.558762   62905 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0704 00:07:02.559468   62905 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:07:02.559562   62905 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0704 00:07:02.575228   62905 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0704 00:07:02.575603   62905 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:07:02.575680   62905 cni.go:84] Creating CNI manager for ""
	I0704 00:07:02.575697   62905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:07:02.575749   62905 start.go:340] cluster config:
	{Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:07:02.575887   62905 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:07:02.577884   62905 out.go:177] * Starting "default-k8s-diff-port-995404" primary control-plane node in "default-k8s-diff-port-995404" cluster
	I0704 00:07:01.500168   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:02.579179   62905 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:07:02.579227   62905 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0704 00:07:02.579238   62905 cache.go:56] Caching tarball of preloaded images
	I0704 00:07:02.579331   62905 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0704 00:07:02.579344   62905 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
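
The preload lines above show the pattern: build the expected tarball path, check whether it is already in the local cache, and skip the download if so. A hedged sketch of that check (the path layout and helper name are illustrative, not a guaranteed minikube contract):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds a cache location like the one logged above.
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.30.2", "cri-o")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else if os.IsNotExist(err) {
		fmt.Println("preload missing, would download:", p)
	} else {
		fmt.Println("stat error:", err)
	}
}
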
	I0704 00:07:02.579446   62905 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/config.json ...
	I0704 00:07:02.579752   62905 start.go:360] acquireMachinesLock for default-k8s-diff-port-995404: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0704 00:07:07.580107   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:10.652249   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:16.732106   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:19.804162   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:25.884146   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:28.956241   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:35.036158   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:38.108118   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:44.188129   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:47.260270   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:53.340147   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:56.412123   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:02.492156   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:05.564174   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:11.644195   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:14.716226   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:20.796193   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:23.868215   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:29.948219   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:33.020164   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:39.100138   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:42.172138   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:48.252157   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:51.324205   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:57.404167   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:00.476183   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:06.556184   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:09.628167   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:15.708158   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:18.780202   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:24.860209   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:27.932273   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:34.012145   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:37.084155   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:43.164171   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:46.236155   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:52.316187   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:55.388138   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:58.392192   62327 start.go:364] duration metric: took 4m4.42362175s to acquireMachinesLock for "embed-certs-687975"
	I0704 00:09:58.392250   62327 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:09:58.392266   62327 fix.go:54] fixHost starting: 
	I0704 00:09:58.392607   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:09:58.392633   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:09:58.408783   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45683
	I0704 00:09:58.409328   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:09:58.409898   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:09:58.409918   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:09:58.410234   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:09:58.410438   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:09:58.410602   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:09:58.412175   62327 fix.go:112] recreateIfNeeded on embed-certs-687975: state=Stopped err=<nil>
	I0704 00:09:58.412200   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	W0704 00:09:58.412361   62327 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:09:58.414467   62327 out.go:177] * Restarting existing kvm2 VM for "embed-certs-687975" ...
	I0704 00:09:58.415958   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Start
	I0704 00:09:58.416159   62327 main.go:141] libmachine: (embed-certs-687975) Ensuring networks are active...
	I0704 00:09:58.417105   62327 main.go:141] libmachine: (embed-certs-687975) Ensuring network default is active
	I0704 00:09:58.417440   62327 main.go:141] libmachine: (embed-certs-687975) Ensuring network mk-embed-certs-687975 is active
	I0704 00:09:58.417879   62327 main.go:141] libmachine: (embed-certs-687975) Getting domain xml...
	I0704 00:09:58.418765   62327 main.go:141] libmachine: (embed-certs-687975) Creating domain...
	I0704 00:09:58.389743   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:09:58.389787   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:09:58.390105   62043 buildroot.go:166] provisioning hostname "no-preload-317739"
	I0704 00:09:58.390132   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:09:58.390388   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:09:58.392051   62043 machine.go:97] duration metric: took 4m37.421604249s to provisionDockerMachine
	I0704 00:09:58.392103   62043 fix.go:56] duration metric: took 4m37.444018711s for fixHost
	I0704 00:09:58.392111   62043 start.go:83] releasing machines lock for "no-preload-317739", held for 4m37.444044667s
	W0704 00:09:58.392131   62043 start.go:713] error starting host: provision: host is not running
	W0704 00:09:58.392245   62043 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0704 00:09:58.392263   62043 start.go:728] Will try again in 5 seconds ...
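
Process 62043 above gives up on provisioning ("provision: host is not running"), emits a warning, and schedules another attempt five seconds later. A minimal sketch of that retry-after-failure pattern, assuming a hypothetical startHost function (this is not minikube's actual start path):

package main

import (
	"errors"
	"fmt"
	"time"
)

var attempts int

// startHost stands in for the real host-start routine; it fails twice, then succeeds.
func startHost(name string) error {
	attempts++
	if attempts < 3 {
		return errors.New("provision: host is not running")
	}
	return nil
}

// startWithRetry retries a failed host start after a fixed pause, mirroring
// the "Will try again in 5 seconds" behaviour in the log above.
func startWithRetry(name string, tries int, pause time.Duration) error {
	var err error
	for i := 0; i < tries; i++ {
		if err = startHost(name); err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(pause)
	}
	return fmt.Errorf("start host %q: %w", name, err)
}

func main() {
	if err := startWithRetry("no-preload-317739", 5, 5*time.Second); err != nil {
		fmt.Println(err)
	}
}
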
	I0704 00:09:59.657066   62327 main.go:141] libmachine: (embed-certs-687975) Waiting to get IP...
	I0704 00:09:59.657930   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:09:59.658398   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:09:59.658456   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:09:59.658368   63531 retry.go:31] will retry after 267.829987ms: waiting for machine to come up
	I0704 00:09:59.928142   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:09:59.928694   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:09:59.928720   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:09:59.928646   63531 retry.go:31] will retry after 240.308314ms: waiting for machine to come up
	I0704 00:10:00.170098   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:00.170541   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:00.170571   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:00.170481   63531 retry.go:31] will retry after 424.462623ms: waiting for machine to come up
	I0704 00:10:00.596288   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:00.596726   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:00.596755   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:00.596671   63531 retry.go:31] will retry after 450.228437ms: waiting for machine to come up
	I0704 00:10:01.048174   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:01.048731   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:01.048758   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:01.048689   63531 retry.go:31] will retry after 583.591642ms: waiting for machine to come up
	I0704 00:10:01.633432   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:01.633773   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:01.633806   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:01.633721   63531 retry.go:31] will retry after 789.480552ms: waiting for machine to come up
	I0704 00:10:02.424987   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:02.425388   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:02.425424   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:02.425329   63531 retry.go:31] will retry after 764.760669ms: waiting for machine to come up
	I0704 00:10:03.191570   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:03.191924   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:03.191953   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:03.191859   63531 retry.go:31] will retry after 1.415422425s: waiting for machine to come up
	I0704 00:10:03.392486   62043 start.go:360] acquireMachinesLock for no-preload-317739: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0704 00:10:04.608804   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:04.609306   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:04.609336   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:04.609244   63531 retry.go:31] will retry after 1.426962337s: waiting for machine to come up
	I0704 00:10:06.038152   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:06.038630   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:06.038685   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:06.038604   63531 retry.go:31] will retry after 1.511071665s: waiting for machine to come up
	I0704 00:10:07.551435   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:07.551977   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:07.552000   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:07.551934   63531 retry.go:31] will retry after 2.275490025s: waiting for machine to come up
	I0704 00:10:09.829070   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:09.829545   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:09.829577   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:09.829480   63531 retry.go:31] will retry after 3.272884116s: waiting for machine to come up
	I0704 00:10:13.103857   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:13.104320   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:13.104356   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:13.104267   63531 retry.go:31] will retry after 4.532823906s: waiting for machine to come up
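
The retry.go lines above poll libvirt for the restarted VM's DHCP lease, with each attempt scheduled after a progressively longer, jittered delay. A sketch of that wait loop under stated assumptions: lookupIP is a stand-in for the lease query, and the growth factor and jitter are illustrative rather than minikube's exact values.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var calls int

// lookupIP stands in for querying libvirt for the domain's DHCP lease.
func lookupIP(domain string) (string, error) {
	calls++
	if calls < 6 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.213", nil
}

// waitForIP polls lookupIP, growing the delay between attempts and adding
// jitter, similar to the "will retry after ..." lines in the log above.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay each round
	}
	return "", fmt.Errorf("timed out waiting for IP of %q", domain)
}

func main() {
	ip, err := waitForIP("embed-certs-687975", 2*time.Minute)
	fmt.Println(ip, err)
}
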
	I0704 00:10:17.642356   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.642900   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has current primary IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.642923   62327 main.go:141] libmachine: (embed-certs-687975) Found IP for machine: 192.168.39.213
	I0704 00:10:17.642935   62327 main.go:141] libmachine: (embed-certs-687975) Reserving static IP address...
	I0704 00:10:17.643368   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "embed-certs-687975", mac: "52:54:00:ee:64:73", ip: "192.168.39.213"} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.643397   62327 main.go:141] libmachine: (embed-certs-687975) DBG | skip adding static IP to network mk-embed-certs-687975 - found existing host DHCP lease matching {name: "embed-certs-687975", mac: "52:54:00:ee:64:73", ip: "192.168.39.213"}
	I0704 00:10:17.643408   62327 main.go:141] libmachine: (embed-certs-687975) Reserved static IP address: 192.168.39.213
	I0704 00:10:17.643421   62327 main.go:141] libmachine: (embed-certs-687975) Waiting for SSH to be available...
	I0704 00:10:17.643433   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Getting to WaitForSSH function...
	I0704 00:10:17.645723   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.646019   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.646047   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.646176   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Using SSH client type: external
	I0704 00:10:17.646199   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa (-rw-------)
	I0704 00:10:17.646264   62327 main.go:141] libmachine: (embed-certs-687975) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:10:17.646288   62327 main.go:141] libmachine: (embed-certs-687975) DBG | About to run SSH command:
	I0704 00:10:17.646306   62327 main.go:141] libmachine: (embed-certs-687975) DBG | exit 0
	I0704 00:10:17.772683   62327 main.go:141] libmachine: (embed-certs-687975) DBG | SSH cmd err, output: <nil>: 
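
The DBG lines above show the external ssh invocation used to confirm the guest accepts connections: it simply runs "exit 0" with host-key checking disabled and key-only authentication. A hedged sketch of the same probe via os/exec (the option list is copied from the log; the helper name and the key path placeholder are hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReachable runs `ssh ... user@ip exit 0` with the hardening options shown
// in the log above and reports whether the command succeeded.
func sshReachable(user, ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, ip),
		"exit", "0",
	}
	cmd := exec.Command("/usr/bin/ssh", args...)
	return cmd.Run()
}

func main() {
	for i := 0; i < 5; i++ {
		if err := sshReachable("docker", "192.168.39.213", "/path/to/id_rsa"); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
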
	I0704 00:10:17.773080   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetConfigRaw
	I0704 00:10:17.773695   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:17.776766   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.777155   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.777197   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.777469   62327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/config.json ...
	I0704 00:10:17.777698   62327 machine.go:94] provisionDockerMachine start ...
	I0704 00:10:17.777721   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:17.777970   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:17.780304   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.780636   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.780667   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.780800   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:17.780985   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.781136   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.781354   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:17.781533   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:17.781729   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:17.781740   62327 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:10:17.884677   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:10:17.884711   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetMachineName
	I0704 00:10:17.884940   62327 buildroot.go:166] provisioning hostname "embed-certs-687975"
	I0704 00:10:17.884967   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetMachineName
	I0704 00:10:17.885180   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:17.887980   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.888394   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.888417   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.888502   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:17.888758   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.888960   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.889102   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:17.889335   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:17.889538   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:17.889557   62327 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-687975 && echo "embed-certs-687975" | sudo tee /etc/hostname
	I0704 00:10:18.006597   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-687975
	
	I0704 00:10:18.006624   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.009477   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.009772   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.009805   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.009942   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.010148   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.010315   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.010485   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.010664   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:18.010821   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:18.010836   62327 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-687975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-687975/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-687975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:10:18.121310   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:10:18.121350   62327 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:10:18.121374   62327 buildroot.go:174] setting up certificates
	I0704 00:10:18.121395   62327 provision.go:84] configureAuth start
	I0704 00:10:18.121411   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetMachineName
	I0704 00:10:18.121701   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:18.124118   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.124499   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.124528   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.124646   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.126489   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.126778   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.126802   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.126913   62327 provision.go:143] copyHostCerts
	I0704 00:10:18.126987   62327 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:10:18.127002   62327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:10:18.127090   62327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:10:18.127222   62327 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:10:18.127232   62327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:10:18.127272   62327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:10:18.127348   62327 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:10:18.127357   62327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:10:18.127388   62327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:10:18.127461   62327 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.embed-certs-687975 san=[127.0.0.1 192.168.39.213 embed-certs-687975 localhost minikube]
	I0704 00:10:18.451857   62327 provision.go:177] copyRemoteCerts
	I0704 00:10:18.451947   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:10:18.451980   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.454696   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.455051   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.455076   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.455301   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.455512   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.455675   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.455798   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:18.540053   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:10:18.566392   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0704 00:10:18.593268   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0704 00:10:18.619051   62327 provision.go:87] duration metric: took 497.642815ms to configureAuth
	I0704 00:10:18.619081   62327 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:10:18.619299   62327 config.go:182] Loaded profile config "embed-certs-687975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:10:18.619386   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.621773   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.622057   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.622087   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.622249   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.622475   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.622632   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.622760   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.622971   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:18.623143   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:18.623160   62327 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:10:19.141009   62670 start.go:364] duration metric: took 3m45.774576164s to acquireMachinesLock for "old-k8s-version-979033"
	I0704 00:10:19.141068   62670 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:10:19.141115   62670 fix.go:54] fixHost starting: 
	I0704 00:10:19.141561   62670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:19.141591   62670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:19.159844   62670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43373
	I0704 00:10:19.160353   62670 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:19.160945   62670 main.go:141] libmachine: Using API Version  1
	I0704 00:10:19.160971   62670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:19.161347   62670 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:19.161640   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:19.161799   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetState
	I0704 00:10:19.163575   62670 fix.go:112] recreateIfNeeded on old-k8s-version-979033: state=Stopped err=<nil>
	I0704 00:10:19.163597   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	W0704 00:10:19.163753   62670 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:10:19.165906   62670 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-979033" ...
	I0704 00:10:18.904225   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:10:18.904256   62327 machine.go:97] duration metric: took 1.126543823s to provisionDockerMachine
	I0704 00:10:18.904269   62327 start.go:293] postStartSetup for "embed-certs-687975" (driver="kvm2")
	I0704 00:10:18.904283   62327 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:10:18.904304   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:18.904626   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:10:18.904652   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.907391   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.907864   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.907915   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.908206   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.908453   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.908623   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.908768   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:18.991583   62327 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:10:18.996145   62327 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:10:18.996187   62327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:10:18.996255   62327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:10:18.996341   62327 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:10:18.996443   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:10:19.006978   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:19.033605   62327 start.go:296] duration metric: took 129.322677ms for postStartSetup
	I0704 00:10:19.033643   62327 fix.go:56] duration metric: took 20.641387402s for fixHost
	I0704 00:10:19.033663   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:19.036302   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.036813   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.036877   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.036919   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:19.037115   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.037307   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.037488   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:19.037687   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:19.037888   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:19.037905   62327 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:10:19.140855   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051819.116387913
	
	I0704 00:10:19.140878   62327 fix.go:216] guest clock: 1720051819.116387913
	I0704 00:10:19.140885   62327 fix.go:229] Guest: 2024-07-04 00:10:19.116387913 +0000 UTC Remote: 2024-07-04 00:10:19.033646932 +0000 UTC m=+265.206951926 (delta=82.740981ms)
	I0704 00:10:19.140914   62327 fix.go:200] guest clock delta is within tolerance: 82.740981ms
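
The fix.go lines above read the guest's clock over SSH (via `date +%s.%N`), compare it with the host's clock, and accept the start because the delta is within tolerance. A minimal sketch of that comparison; the tolerance value below is an assumption, not minikube's configured limit:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest clock is within tolerance of the
// host clock, echoing the "guest clock delta is within tolerance" log line.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Guest time as reported inside the VM (seconds.nanoseconds, from the log above).
	guest := time.Unix(1720051819, 116387913)
	host := time.Now()
	delta, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
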
	I0704 00:10:19.140920   62327 start.go:83] releasing machines lock for "embed-certs-687975", held for 20.748686488s
	I0704 00:10:19.140951   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.141280   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:19.144343   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.144774   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.144802   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.144975   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.145590   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.145810   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.145896   62327 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:10:19.145941   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:19.146048   62327 ssh_runner.go:195] Run: cat /version.json
	I0704 00:10:19.146074   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:19.148955   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.148977   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.149312   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.149339   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.149470   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.149493   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.149555   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:19.149755   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.149831   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:19.149921   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:19.150094   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.150096   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:19.150293   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:19.150459   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
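sshutil.go opens two SSH clients to 192.168.39.213:22 as user docker with the machine's id_rsa key, one for the registry.k8s.io probe and one for the /version.json check. A generic sketch of building such a client with golang.org/x/crypto/ssh (this is not minikube's sshutil implementation; host, user, and key path are copied from the log):

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // newSSHClient dials addr with public-key auth using the key at keyPath.
    func newSSHClient(addr, user, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; do not use against untrusted hosts
        }
        return ssh.Dial("tcp", addr, cfg)
    }

    func main() {
        client, err := newSSHClient("192.168.39.213:22", "docker",
            "/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
    }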
	I0704 00:10:19.250910   62327 ssh_runner.go:195] Run: systemctl --version
	I0704 00:10:19.257541   62327 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:10:19.413446   62327 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:10:19.419871   62327 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:10:19.419985   62327 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:10:19.439141   62327 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
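The find/-exec step above neutralizes any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI that minikube manages remains active. A rough Go equivalent of that rename pass (the helper name and return value are assumptions):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableConflictingCNI renames bridge/podman CNI configs in dir so the
    // container runtime ignores them, mirroring the logged find/-exec mv command.
    func disableConflictingCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableConflictingCNI("/etc/cni/net.d")
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Println("disabled:", disabled)
    }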
	I0704 00:10:19.439171   62327 start.go:494] detecting cgroup driver to use...
	I0704 00:10:19.439253   62327 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:10:19.457474   62327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:10:19.479279   62327 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:10:19.479353   62327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:10:19.498771   62327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:10:19.513968   62327 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:10:19.640950   62327 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:10:19.817181   62327 docker.go:233] disabling docker service ...
	I0704 00:10:19.817248   62327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:10:19.838524   62327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:10:19.855479   62327 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:10:19.976564   62327 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:10:20.106140   62327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:10:20.121152   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:10:20.143893   62327 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:10:20.143965   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.156806   62327 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:10:20.156892   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.168660   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.180592   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.192151   62327 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:10:20.204202   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.215502   62327 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.235355   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
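The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, force cgroup_manager to "cgroupfs", move conmon into the pod cgroup, and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A sketch of one such in-place edit done with a regexp instead of sed (a generic helper, not minikube's crio.go):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setCgroupManager replaces any existing cgroup_manager line in the cri-o
    // drop-in with the requested driver, like the logged sed command.
    func setCgroupManager(path, driver string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf(`cgroup_manager = %q`, driver)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := setCgroupManager("/etc/crio/crio.conf.d/02-crio.conf", "cgroupfs"); err != nil {
            fmt.Println(err)
        }
    }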
	I0704 00:10:20.246834   62327 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:10:20.264718   62327 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:10:20.264786   62327 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:10:20.280133   62327 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:10:20.291521   62327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:20.416530   62327 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:10:20.567852   62327 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:10:20.567952   62327 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:10:20.572992   62327 start.go:562] Will wait 60s for crictl version
	I0704 00:10:20.573052   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:10:20.577295   62327 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:10:20.617746   62327 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:10:20.617840   62327 ssh_runner.go:195] Run: crio --version
	I0704 00:10:20.648158   62327 ssh_runner.go:195] Run: crio --version
	I0704 00:10:20.682039   62327 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
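Once cri-o is restarted, start.go waits up to 60s for /var/run/crio/crio.sock to exist and for crictl to answer before moving on. A small sketch of that socket wait (the 500ms poll interval and the function name are assumptions):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the timeout elapses, the same
    // pattern used above to wait for /var/run/crio/crio.sock.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("CRI socket is ready")
    }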
	I0704 00:10:19.167360   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .Start
	I0704 00:10:19.167575   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring networks are active...
	I0704 00:10:19.168591   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring network default is active
	I0704 00:10:19.169064   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring network mk-old-k8s-version-979033 is active
	I0704 00:10:19.169488   62670 main.go:141] libmachine: (old-k8s-version-979033) Getting domain xml...
	I0704 00:10:19.170309   62670 main.go:141] libmachine: (old-k8s-version-979033) Creating domain...
	I0704 00:10:20.487278   62670 main.go:141] libmachine: (old-k8s-version-979033) Waiting to get IP...
	I0704 00:10:20.488195   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.488679   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.488751   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.488643   63677 retry.go:31] will retry after 227.362639ms: waiting for machine to come up
	I0704 00:10:20.718322   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.718794   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.718820   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.718766   63677 retry.go:31] will retry after 266.291784ms: waiting for machine to come up
	I0704 00:10:20.986238   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.986779   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.986805   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.986726   63677 retry.go:31] will retry after 308.137887ms: waiting for machine to come up
	I0704 00:10:21.296450   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:21.297052   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:21.297085   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:21.297001   63677 retry.go:31] will retry after 400.976495ms: waiting for machine to come up
	I0704 00:10:21.699758   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:21.700266   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:21.700299   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:21.700227   63677 retry.go:31] will retry after 464.329709ms: waiting for machine to come up
	I0704 00:10:22.165905   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:22.166452   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:22.166482   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:22.166393   63677 retry.go:31] will retry after 652.357119ms: waiting for machine to come up
	I0704 00:10:22.820302   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:22.820777   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:22.820800   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:22.820725   63677 retry.go:31] will retry after 835.974316ms: waiting for machine to come up
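While the embed-certs node is being provisioned, the old-k8s-version VM is still booting, and retry.go keeps re-querying libvirt for its DHCP lease with a slightly longer sleep after each miss (227ms, 266ms, 308ms, ...). A minimal sketch of that grow-with-jitter retry loop (the lookup callback, attempt count, and jitter factor are assumptions):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("machine has no IP yet")

    // retryWithBackoff keeps calling lookup until it succeeds or attempts run out,
    // growing the sleep a little each round like the retry.go lines above.
    func retryWithBackoff(lookup func() (string, error), attempts int, base time.Duration) (string, error) {
        wait := base
        for i := 0; i < attempts; i++ {
            ip, err := lookup()
            if err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            // grow the delay with some jitter, roughly matching the logged pattern
            wait += time.Duration(rand.Int63n(int64(wait)))
        }
        return "", fmt.Errorf("gave up after %d attempts", attempts)
    }

    func main() {
        calls := 0
        ip, err := retryWithBackoff(func() (string, error) {
            calls++
            if calls < 4 {
                return "", errNoIP
            }
            return "192.168.39.200", nil // placeholder address for the example
        }, 10, 200*time.Millisecond)
        fmt.Println(ip, err)
    }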
	I0704 00:10:20.683820   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:20.686663   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:20.687040   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:20.687070   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:20.687312   62327 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0704 00:10:20.691953   62327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
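The grep/echo pipeline above drops any stale host.minikube.internal entry from /etc/hosts and appends the current mapping to 192.168.39.1. A file-rewrite sketch of the same idea (a generic helper, not minikube code; it assumes tab-separated hosts entries as written by that pipeline):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostEntry rewrites an /etc/hosts-style file so exactly one line maps
    // host to ip, the same effect as the logged grep/echo pipeline.
    func ensureHostEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop any stale mapping for this host
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }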
	I0704 00:10:20.705149   62327 kubeadm.go:877] updating cluster {Name:embed-certs-687975 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-687975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:10:20.705368   62327 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:10:20.705433   62327 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:20.748549   62327 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:10:20.748613   62327 ssh_runner.go:195] Run: which lz4
	I0704 00:10:20.752991   62327 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0704 00:10:20.757764   62327 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:10:20.757810   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0704 00:10:22.395918   62327 crio.go:462] duration metric: took 1.642974021s to copy over tarball
	I0704 00:10:22.396029   62327 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:10:23.658976   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:23.659482   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:23.659509   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:23.659432   63677 retry.go:31] will retry after 1.244693887s: waiting for machine to come up
	I0704 00:10:24.906359   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:24.906769   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:24.906801   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:24.906733   63677 retry.go:31] will retry after 1.212336933s: waiting for machine to come up
	I0704 00:10:26.121130   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:26.121655   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:26.121684   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:26.121599   63677 retry.go:31] will retry after 1.622791006s: waiting for machine to come up
	I0704 00:10:27.745848   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:27.746399   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:27.746427   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:27.746349   63677 retry.go:31] will retry after 2.596558781s: waiting for machine to come up
	I0704 00:10:24.757599   62327 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.3615352s)
	I0704 00:10:24.757639   62327 crio.go:469] duration metric: took 2.361688123s to extract the tarball
	I0704 00:10:24.757650   62327 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:10:24.796023   62327 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:24.842665   62327 crio.go:514] all images are preloaded for cri-o runtime.
	I0704 00:10:24.842691   62327 cache_images.go:84] Images are preloaded, skipping loading
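The preload flow above: crictl reports kube-apiserver:v1.30.2 missing, so the ~395MB preloaded-images tarball is copied into the VM and unpacked into /var with tar -I lz4, after which the image check passes and cache loading is skipped. A sketch of the extraction step driven from Go via os/exec (assumes the lz4 binary is present on the target, as it is on the minikube guest):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload unpacks a preloaded-images tarball into destDir, preserving
    // security xattrs the way the logged ssh_runner command does.
    func extractPreload(tarball, destDir string) error {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", destDir, "-xf", tarball)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("tar failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
            fmt.Println(err)
        }
    }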
	I0704 00:10:24.842699   62327 kubeadm.go:928] updating node { 192.168.39.213 8443 v1.30.2 crio true true} ...
	I0704 00:10:24.842805   62327 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-687975 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-687975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:10:24.842891   62327 ssh_runner.go:195] Run: crio config
	I0704 00:10:24.892918   62327 cni.go:84] Creating CNI manager for ""
	I0704 00:10:24.892952   62327 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:10:24.892979   62327 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:10:24.893021   62327 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.213 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-687975 NodeName:embed-certs-687975 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:10:24.893288   62327 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-687975"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:10:24.893372   62327 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:10:24.905019   62327 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:10:24.905092   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:10:24.919465   62327 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0704 00:10:24.942754   62327 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:10:24.965089   62327 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
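The rendered kubeadm config (InitConfiguration, ClusterConfiguration, and KubeletConfiguration, plus the KubeProxyConfiguration) is written to /var/tmp/minikube/kubeadm.yaml.new on the node and later diffed against the existing file. A quick sanity-check sketch that splits such a multi-document file and prints each apiVersion/kind (uses gopkg.in/yaml.v3; this is illustration only, not part of minikube):

    package main

    import (
        "fmt"
        "os"
        "strings"

        "gopkg.in/yaml.v3"
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Println(err)
            return
        }
        // Split on YAML document separators and report apiVersion/kind of each part.
        for _, doc := range strings.Split(string(data), "\n---\n") {
            var meta struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
                fmt.Println("unparseable document:", err)
                continue
            }
            fmt.Printf("%s %s\n", meta.APIVersion, meta.Kind)
        }
    }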
	I0704 00:10:24.988121   62327 ssh_runner.go:195] Run: grep 192.168.39.213	control-plane.minikube.internal$ /etc/hosts
	I0704 00:10:24.993425   62327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:25.006830   62327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:25.145124   62327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:10:25.164000   62327 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975 for IP: 192.168.39.213
	I0704 00:10:25.164021   62327 certs.go:194] generating shared ca certs ...
	I0704 00:10:25.164036   62327 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:25.164285   62327 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:10:25.164361   62327 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:10:25.164375   62327 certs.go:256] generating profile certs ...
	I0704 00:10:25.164522   62327 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/client.key
	I0704 00:10:25.164598   62327 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/apiserver.key.c5f2d6ca
	I0704 00:10:25.164657   62327 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/proxy-client.key
	I0704 00:10:25.164816   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:10:25.164875   62327 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:10:25.164889   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:10:25.164918   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:10:25.164949   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:10:25.164983   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:10:25.165049   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:25.165801   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:10:25.203822   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:10:25.240795   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:10:25.273743   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:10:25.312678   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0704 00:10:25.339172   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:10:25.365805   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:10:25.392155   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0704 00:10:25.417662   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:10:25.445025   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:10:25.472697   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:10:25.505204   62327 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:10:25.536867   62327 ssh_runner.go:195] Run: openssl version
	I0704 00:10:25.543487   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:10:25.555550   62327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:25.560599   62327 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:25.560678   62327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:25.566757   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:10:25.578244   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:10:25.590271   62327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:10:25.595409   62327 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:10:25.595475   62327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:10:25.601755   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:10:25.614572   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:10:25.627445   62327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:10:25.632631   62327 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:10:25.632688   62327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:10:25.639047   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:10:25.651199   62327 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:10:25.656829   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:10:25.663869   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:10:25.670993   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:10:25.678309   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:10:25.685282   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:10:25.692383   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
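Each "openssl x509 -checkend 86400" run above asks whether a control-plane certificate will still be valid 24 hours from now before the restart path reuses it. The same check expressed with Go's crypto/x509 (a generic sketch, not minikube's certs.go):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validForAnotherDay mirrors `openssl x509 -checkend 86400`: it reports whether
    // the certificate at path is still valid 24 hours from now.
    func validForAnotherDay(path string) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validForAnotherDay("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        fmt.Println(ok, err)
    }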
	I0704 00:10:25.699625   62327 kubeadm.go:391] StartCluster: {Name:embed-certs-687975 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:embed-certs-687975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:10:25.700176   62327 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:10:25.700240   62327 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:25.744248   62327 cri.go:89] found id: ""
	I0704 00:10:25.744323   62327 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:10:25.755623   62327 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:10:25.755643   62327 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:10:25.755648   62327 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:10:25.755697   62327 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:10:25.766631   62327 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:10:25.767627   62327 kubeconfig.go:125] found "embed-certs-687975" server: "https://192.168.39.213:8443"
	I0704 00:10:25.769625   62327 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:10:25.781667   62327 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.213
	I0704 00:10:25.781710   62327 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:10:25.781723   62327 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:10:25.781774   62327 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:25.829584   62327 cri.go:89] found id: ""
	I0704 00:10:25.829669   62327 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:10:25.847738   62327 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:10:25.859825   62327 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:10:25.859864   62327 kubeadm.go:156] found existing configuration files:
	
	I0704 00:10:25.859931   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:10:25.869666   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:10:25.869722   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:10:25.879997   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:10:25.889905   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:10:25.889982   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:10:25.900023   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:10:25.909669   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:10:25.909733   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:10:25.919933   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:10:25.929422   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:10:25.929499   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:10:25.939577   62327 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:10:25.949669   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:26.088494   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.367443   62327 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.278903285s)
	I0704 00:10:27.367492   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.626929   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.739721   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.860860   62327 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:10:27.860938   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:28.361670   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:30.344595   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:30.345134   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:30.345157   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:30.345089   63677 retry.go:31] will retry after 2.372913839s: waiting for machine to come up
	I0704 00:10:32.719441   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:32.719866   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:32.719910   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:32.719827   63677 retry.go:31] will retry after 3.651406896s: waiting for machine to come up
	I0704 00:10:28.861698   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:28.883024   62327 api_server.go:72] duration metric: took 1.02216952s to wait for apiserver process to appear ...
	I0704 00:10:28.883057   62327 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:10:28.883083   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:28.883625   62327 api_server.go:269] stopped: https://192.168.39.213:8443/healthz: Get "https://192.168.39.213:8443/healthz": dial tcp 192.168.39.213:8443: connect: connection refused
	I0704 00:10:29.383561   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:31.679543   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:10:31.679578   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:10:31.679594   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:31.754659   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:31.754696   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:31.883935   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:31.927087   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:31.927130   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:32.383560   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:32.389095   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:32.389129   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:32.883827   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:32.890357   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:32.890385   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:33.383944   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:33.388951   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0704 00:10:33.396092   62327 api_server.go:141] control plane version: v1.30.2
	I0704 00:10:33.396119   62327 api_server.go:131] duration metric: took 4.513054882s to wait for apiserver health ...
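The healthz polling above is the readiness gate minikube applies before touching the restarted control plane: api_server.go keeps re-requesting /healthz roughly every 500ms, treats anything but HTTP 200 as not ready, and prints the [-] post-start hooks that are still pending. A minimal standalone sketch of that loop follows; the endpoint URL is taken from the log, while the poll interval, timeout and the decision to skip TLS verification are assumptions of the sketch, not minikube's exact settings.

```go
// healthz_poll.go — a sketch (not minikube's api_server.go) of polling the
// kube-apiserver /healthz endpoint until it returns HTTP 200, the same check
// the log above performs. Interval, timeout and TLS handling are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The apiserver serves a self-signed cert in this setup, so skip
	// verification for the purpose of this sketch only.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is up
			}
			// A 500 lists the [-] hooks that have not finished, as in the log above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between checks
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.213:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```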
	I0704 00:10:33.396130   62327 cni.go:84] Creating CNI manager for ""
	I0704 00:10:33.396136   62327 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:10:33.398181   62327 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:10:33.399682   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:10:33.411938   62327 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
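The scp above drops a 496-byte bridge CNI config into /etc/cni/net.d/1-k8s.conflist; the log does not show the file's contents. The sketch below writes a representative bridge + host-local conflist so the shape of such a file is visible; every field value (network name, subnet, the extra portmap plugin) is an illustrative assumption, not the file minikube actually generated.

```go
// write_conflist.go — a sketch of producing a bridge CNI config like the
// /etc/cni/net.d/1-k8s.conflist copied above. The actual 496-byte file is not
// shown in the log; all field values here are illustrative assumptions.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				// host-local IPAM hands out pod IPs from an assumed subnet.
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	data, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	// Writing to a local path here; minikube scp's the file into the guest VM.
	if err := os.WriteFile("1-k8s.conflist", data, 0o644); err != nil {
		panic(err)
	}
	fmt.Printf("wrote %d bytes\n", len(data))
}
```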
	I0704 00:10:33.436710   62327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:10:33.447604   62327 system_pods.go:59] 8 kube-system pods found
	I0704 00:10:33.447639   62327 system_pods.go:61] "coredns-7db6d8ff4d-2bn7d" [e6d756a8-df4e-414b-b44c-32fb728c6feb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0704 00:10:33.447649   62327 system_pods.go:61] "etcd-embed-certs-687975" [72f47af0-57ca-4529-b6e4-92f543f8ada9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0704 00:10:33.447658   62327 system_pods.go:61] "kube-apiserver-embed-certs-687975" [e58beb1b-1984-4a9f-beb1-597cca55a0c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0704 00:10:33.447663   62327 system_pods.go:61] "kube-controller-manager-embed-certs-687975" [bdae6f32-b454-4a66-a3d1-a22c2a64073d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0704 00:10:33.447668   62327 system_pods.go:61] "kube-proxy-9phtm" [6b5a4c0e-632d-4c1c-bfa7-f53448618efb] Running
	I0704 00:10:33.447673   62327 system_pods.go:61] "kube-scheduler-embed-certs-687975" [72640f73-25f5-47ec-8da0-396fc31fa653] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0704 00:10:33.447678   62327 system_pods.go:61] "metrics-server-569cc877fc-jpmsg" [e2561edc-d580-461c-acae-218e6b7a2f67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:10:33.447682   62327 system_pods.go:61] "storage-provisioner" [1ac6edec-3e4e-42bd-8848-1388594611e1] Running
	I0704 00:10:33.447688   62327 system_pods.go:74] duration metric: took 10.954745ms to wait for pod list to return data ...
	I0704 00:10:33.447696   62327 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:10:33.452408   62327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:10:33.452448   62327 node_conditions.go:123] node cpu capacity is 2
	I0704 00:10:33.452460   62327 node_conditions.go:105] duration metric: took 4.757567ms to run NodePressure ...
	I0704 00:10:33.452476   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:33.724052   62327 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0704 00:10:33.732188   62327 kubeadm.go:733] kubelet initialised
	I0704 00:10:33.732211   62327 kubeadm.go:734] duration metric: took 8.128083ms waiting for restarted kubelet to initialise ...
	I0704 00:10:33.732220   62327 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:10:33.739344   62327 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.746483   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.746509   62327 pod_ready.go:81] duration metric: took 7.141056ms for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.746519   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.746526   62327 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.755457   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "etcd-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.755489   62327 pod_ready.go:81] duration metric: took 8.954479ms for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.755502   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "etcd-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.755512   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.762439   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.762476   62327 pod_ready.go:81] duration metric: took 6.95216ms for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.762489   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.762501   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.842246   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.842281   62327 pod_ready.go:81] duration metric: took 79.767249ms for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.842294   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.842303   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:34.240034   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-proxy-9phtm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.240061   62327 pod_ready.go:81] duration metric: took 397.745361ms for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:34.240070   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-proxy-9phtm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.240076   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:34.640781   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.640808   62327 pod_ready.go:81] duration metric: took 400.726608ms for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:34.640818   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.640823   62327 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:35.040614   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:35.040646   62327 pod_ready.go:81] duration metric: took 399.813017ms for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:35.040656   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:35.040662   62327 pod_ready.go:38] duration metric: took 1.308435069s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
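Each pod_ready wait above is cut short with "skipping!" because the gate is the node, not the pod: until embed-certs-687975 reports Ready, no pod scheduled on it is counted. A small client-go sketch of that node-Ready check follows; the kubeconfig path and node name are copied from the log, but using them from a standalone program is an assumption of the sketch.

```go
// node_ready.go — a sketch of the gate applied in the log above: a pod is not
// considered until its node reports Ready. Uses k8s.io/client-go; kubeconfig
// path and node name are illustrative assumptions taken from the log.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeIsReady(clientset *kubernetes.Clientset, name string) (bool, error) {
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18998-9396/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := nodeIsReady(clientset, "embed-certs-687975")
	if err != nil {
		panic(err)
	}
	fmt.Println("node Ready:", ready) // the log above reports "Ready":"False" at this point
}
```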
	I0704 00:10:35.040678   62327 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:10:35.053971   62327 ops.go:34] apiserver oom_adj: -16
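The oom_adj probe above confirms the restarted kube-apiserver is running with oom_adj -16, which biases the kernel OOM killer away from it. The sketch below performs the same check without pgrep by scanning /proc/*/comm; that substitution is the sketch's own choice, not what ops.go does.

```go
// oom_adj.go — a sketch of the check run above: locate the kube-apiserver PID
// and read /proc/<pid>/oom_adj (-16 lowers its OOM-kill priority).
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, err := filepath.Glob("/proc/[0-9]*/comm")
	if err != nil {
		panic(err)
	}
	for _, comm := range matches {
		name, err := os.ReadFile(comm)
		if err != nil {
			continue // process may have exited between Glob and ReadFile
		}
		if strings.TrimSpace(string(name)) != "kube-apiserver" {
			continue
		}
		adj, err := os.ReadFile(filepath.Join(filepath.Dir(comm), "oom_adj"))
		if err != nil {
			panic(err)
		}
		fmt.Printf("kube-apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
		return
	}
	fmt.Println("kube-apiserver process not found")
}
```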
	I0704 00:10:35.053997   62327 kubeadm.go:591] duration metric: took 9.298343033s to restartPrimaryControlPlane
	I0704 00:10:35.054008   62327 kubeadm.go:393] duration metric: took 9.354393795s to StartCluster
	I0704 00:10:35.054028   62327 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:35.054114   62327 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:10:35.055656   62327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:35.056019   62327 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:10:35.056104   62327 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:10:35.056189   62327 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-687975"
	I0704 00:10:35.056217   62327 config.go:182] Loaded profile config "embed-certs-687975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:10:35.056226   62327 addons.go:69] Setting default-storageclass=true in profile "embed-certs-687975"
	I0704 00:10:35.056234   62327 addons.go:69] Setting metrics-server=true in profile "embed-certs-687975"
	I0704 00:10:35.056256   62327 addons.go:234] Setting addon metrics-server=true in "embed-certs-687975"
	I0704 00:10:35.056257   62327 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-687975"
	W0704 00:10:35.056268   62327 addons.go:243] addon metrics-server should already be in state true
	I0704 00:10:35.056302   62327 host.go:66] Checking if "embed-certs-687975" exists ...
	I0704 00:10:35.056229   62327 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-687975"
	W0704 00:10:35.056354   62327 addons.go:243] addon storage-provisioner should already be in state true
	I0704 00:10:35.056383   62327 host.go:66] Checking if "embed-certs-687975" exists ...
	I0704 00:10:35.056630   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.056653   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.056661   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.056689   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.056702   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.056729   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.058101   62327 out.go:177] * Verifying Kubernetes components...
	I0704 00:10:35.059927   62327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:35.072266   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43485
	I0704 00:10:35.072542   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0704 00:10:35.072699   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.072965   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.073191   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.073229   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.073455   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.073479   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.073608   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.073799   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.073838   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.074311   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.074344   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.076024   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44145
	I0704 00:10:35.076434   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.076866   62327 addons.go:234] Setting addon default-storageclass=true in "embed-certs-687975"
	W0704 00:10:35.076884   62327 addons.go:243] addon default-storageclass should already be in state true
	I0704 00:10:35.076905   62327 host.go:66] Checking if "embed-certs-687975" exists ...
	I0704 00:10:35.076965   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.076997   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.077241   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.077273   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.077376   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.077901   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.077951   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.091096   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I0704 00:10:35.091624   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.092231   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.092260   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.092643   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.092738   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0704 00:10:35.092820   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.093059   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.093555   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.093577   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.093913   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.094537   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:35.094743   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.094764   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.096976   62327 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:35.098487   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I0704 00:10:35.098597   62327 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:10:35.098614   62327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:10:35.098632   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:35.098888   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.099368   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.099386   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.099749   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.100200   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.102539   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:35.103028   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.103608   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:35.103637   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.103791   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:35.104008   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:35.104177   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:35.104316   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
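The "new ssh client" lines above carry only the connection parameters (guest IP, private-key path, user docker); the transport itself is handled inside sshutil. A minimal golang.org/x/crypto/ssh sketch of opening such a connection and running one command follows; ignoring the host key and the sample command are assumptions of the sketch, not minikube's exact behaviour.

```go
// ssh_run.go — a sketch of what the sshutil client above amounts to: key-based
// SSH to the guest VM as user "docker", then one remote command.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa"
	pemBytes, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		panic(err)
	}
	config := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.213:22", config)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("cat /etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```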
	I0704 00:10:35.104776   62327 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0704 00:10:35.106239   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0704 00:10:35.106260   62327 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0704 00:10:35.106313   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:35.109978   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.110458   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:35.110491   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.110684   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:35.110925   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:35.111025   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45051
	I0704 00:10:35.111091   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:35.111227   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:35.111488   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.111977   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.112005   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.112295   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.112482   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.113980   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:35.114185   62327 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:10:35.114203   62327 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:10:35.114222   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:35.117197   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.117777   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:35.117823   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.118056   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:35.118258   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:35.118426   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:35.118562   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:35.242007   62327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:10:35.267240   62327 node_ready.go:35] waiting up to 6m0s for node "embed-certs-687975" to be "Ready" ...
	I0704 00:10:35.326233   62327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:10:35.329804   62327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:10:35.431863   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0704 00:10:35.431908   62327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0704 00:10:35.490138   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0704 00:10:35.490165   62327 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0704 00:10:35.547996   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:10:35.548021   62327 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0704 00:10:35.578762   62327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:10:36.321372   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321411   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.321432   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321448   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.321794   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.321808   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.321812   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.321823   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.321825   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.321834   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321833   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.321841   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321854   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.321842   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.322111   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.322142   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.322153   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.322155   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.322182   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.322191   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.329094   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.329117   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.329531   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.329608   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.329625   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.424191   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.424216   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.424645   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.424676   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.424692   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.424707   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.424719   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.424987   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.425000   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.425012   62327 addons.go:475] Verifying addon metrics-server=true in "embed-certs-687975"
	I0704 00:10:36.427165   62327 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
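With the manifests staged under /etc/kubernetes/addons, the addon enable above boils down to a single kubectl apply run inside the VM with KUBECONFIG pointed at /var/lib/minikube/kubeconfig. The sketch below reproduces that invocation as a local process instead of going through ssh_runner; the binary and manifest paths mirror the log lines, but executing them on the local host is an assumption.

```go
// apply_addons.go — a sketch of the kubectl invocation the log shows minikube
// running to apply the metrics-server manifests. Run locally here; paths are
// taken from the log, running them outside the VM is this sketch's assumption.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	// minikube sets KUBECONFIG explicitly so kubectl talks to the in-VM apiserver.
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.30.2/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		},
	)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```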
	I0704 00:10:37.761464   62905 start.go:364] duration metric: took 3m35.181652384s to acquireMachinesLock for "default-k8s-diff-port-995404"
	I0704 00:10:37.761548   62905 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:10:37.761575   62905 fix.go:54] fixHost starting: 
	I0704 00:10:37.761919   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:37.761952   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:37.779708   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0704 00:10:37.780347   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:37.780870   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:10:37.780895   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:37.781249   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:37.781513   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:37.781688   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:10:37.783447   62905 fix.go:112] recreateIfNeeded on default-k8s-diff-port-995404: state=Stopped err=<nil>
	I0704 00:10:37.783495   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	W0704 00:10:37.783674   62905 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:10:37.785628   62905 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-995404" ...
	I0704 00:10:36.373099   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.373583   62670 main.go:141] libmachine: (old-k8s-version-979033) Found IP for machine: 192.168.72.59
	I0704 00:10:36.373615   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has current primary IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.373628   62670 main.go:141] libmachine: (old-k8s-version-979033) Reserving static IP address...
	I0704 00:10:36.374030   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "old-k8s-version-979033", mac: "52:54:00:af:98:c8", ip: "192.168.72.59"} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.374068   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | skip adding static IP to network mk-old-k8s-version-979033 - found existing host DHCP lease matching {name: "old-k8s-version-979033", mac: "52:54:00:af:98:c8", ip: "192.168.72.59"}
	I0704 00:10:36.374082   62670 main.go:141] libmachine: (old-k8s-version-979033) Reserved static IP address: 192.168.72.59
	I0704 00:10:36.374113   62670 main.go:141] libmachine: (old-k8s-version-979033) Waiting for SSH to be available...
	I0704 00:10:36.374130   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Getting to WaitForSSH function...
	I0704 00:10:36.376363   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.376711   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.376747   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.376945   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using SSH client type: external
	I0704 00:10:36.376975   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa (-rw-------)
	I0704 00:10:36.377011   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:10:36.377024   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | About to run SSH command:
	I0704 00:10:36.377062   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | exit 0
	I0704 00:10:36.504300   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | SSH cmd err, output: <nil>: 
	I0704 00:10:36.504681   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetConfigRaw
	I0704 00:10:36.505301   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:36.507826   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.508270   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.508297   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.508605   62670 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/config.json ...
	I0704 00:10:36.508844   62670 machine.go:94] provisionDockerMachine start ...
	I0704 00:10:36.508865   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:36.509148   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.511475   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.511792   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.511815   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.512017   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.512205   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.512360   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.512502   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.512667   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.512836   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.512846   62670 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:10:36.616643   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:10:36.616673   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.616962   62670 buildroot.go:166] provisioning hostname "old-k8s-version-979033"
	I0704 00:10:36.616992   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.617185   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.620028   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.620368   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.620387   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.620727   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.620923   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.621106   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.621240   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.621435   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.621601   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.621613   62670 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-979033 && echo "old-k8s-version-979033" | sudo tee /etc/hostname
	I0704 00:10:36.739589   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-979033
	
	I0704 00:10:36.739611   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.742386   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.742840   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.742867   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.743119   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.743348   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.743578   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.743745   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.743925   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.744142   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.744169   62670 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-979033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-979033/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-979033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:10:36.861561   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:10:36.861592   62670 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:10:36.861621   62670 buildroot.go:174] setting up certificates
	I0704 00:10:36.861632   62670 provision.go:84] configureAuth start
	I0704 00:10:36.861644   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.861928   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:36.864490   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.864975   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.865039   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.865137   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.867752   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.868268   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.868302   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.868483   62670 provision.go:143] copyHostCerts
	I0704 00:10:36.868547   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:10:36.868560   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:10:36.868613   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:10:36.868747   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:10:36.868756   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:10:36.868783   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:10:36.868840   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:10:36.868846   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:10:36.868863   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:10:36.868913   62670 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-979033 san=[127.0.0.1 192.168.72.59 localhost minikube old-k8s-version-979033]
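The provision step above mints a server certificate whose SANs cover the loopback address, the VM IP and the hostname aliases, so any of those names can be used to reach the machine's TLS endpoints. The sketch below issues a certificate with the same SAN set; for brevity it is self-signed, whereas minikube signs it with its ca.pem/ca-key.pem, so treat it only as an illustration of the SAN handling.

```go
// server_cert.go — a sketch of issuing a server certificate with the SANs the
// log lists above. Self-signed for brevity; minikube CA-signs the real one.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-979033"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line above: IP and DNS entries are kept separate.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.59")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-979033"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
```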
	I0704 00:10:37.072741   62670 provision.go:177] copyRemoteCerts
	I0704 00:10:37.072795   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:10:37.072821   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.075592   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.075937   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.075968   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.076159   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.076362   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.076541   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.076671   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.162730   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:10:37.194232   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0704 00:10:37.220644   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0704 00:10:37.246298   62670 provision.go:87] duration metric: took 384.653259ms to configureAuth
	I0704 00:10:37.246327   62670 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:10:37.246529   62670 config.go:182] Loaded profile config "old-k8s-version-979033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0704 00:10:37.246594   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.249101   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.249491   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.249523   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.249774   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.249960   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.250140   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.250350   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.250591   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:37.250831   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:37.250856   62670 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:10:37.522551   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:10:37.522602   62670 machine.go:97] duration metric: took 1.013718943s to provisionDockerMachine
	I0704 00:10:37.522616   62670 start.go:293] postStartSetup for "old-k8s-version-979033" (driver="kvm2")
	I0704 00:10:37.522626   62670 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:10:37.522642   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.522965   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:10:37.522992   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.525421   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.525718   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.525745   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.525988   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.526250   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.526428   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.526668   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.607305   62670 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:10:37.612104   62670 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:10:37.612128   62670 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:10:37.612222   62670 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:10:37.612326   62670 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:10:37.612436   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:10:37.623597   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:37.650275   62670 start.go:296] duration metric: took 127.644599ms for postStartSetup
	I0704 00:10:37.650314   62670 fix.go:56] duration metric: took 18.50923577s for fixHost
	I0704 00:10:37.650333   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.652926   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.653270   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.653298   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.653433   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.653650   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.653836   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.653975   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.654124   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:37.654344   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:37.654356   62670 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:10:37.761309   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051837.729680185
	
	I0704 00:10:37.761333   62670 fix.go:216] guest clock: 1720051837.729680185
	I0704 00:10:37.761342   62670 fix.go:229] Guest: 2024-07-04 00:10:37.729680185 +0000 UTC Remote: 2024-07-04 00:10:37.650317632 +0000 UTC m=+244.428517044 (delta=79.362553ms)
	I0704 00:10:37.761363   62670 fix.go:200] guest clock delta is within tolerance: 79.362553ms
	I0704 00:10:37.761369   62670 start.go:83] releasing machines lock for "old-k8s-version-979033", held for 18.620323739s
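
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and skip a resync because the 79ms delta is inside the allowed drift. A small sketch of that check; the 2s tolerance is an assumed value, not necessarily minikube's:

// Sketch only: parse a "seconds.nanoseconds" guest clock reading and compare it to the
// local clock, as in the "guest clock delta is within tolerance" line above.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1720051837.729680185" into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold

	guest, err := parseGuestClock("1720051837.729680185") // value taken from the log above
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, clock would need a resync\n", delta)
	}
}
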
	I0704 00:10:37.761421   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.761677   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:37.764522   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.764994   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.765019   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.765178   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.765760   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.765951   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.766036   62670 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:10:37.766085   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.766218   62670 ssh_runner.go:195] Run: cat /version.json
	I0704 00:10:37.766244   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.769092   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769468   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769854   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.769900   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769927   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.769944   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.770066   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.770286   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.770329   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.770443   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.770531   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.770587   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.770720   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.770832   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.873138   62670 ssh_runner.go:195] Run: systemctl --version
	I0704 00:10:37.879804   62670 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:10:38.028009   62670 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:10:38.034962   62670 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:10:38.035030   62670 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:10:38.057475   62670 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
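
The find/mv step above disables any bridge or podman CNI configs by renaming them with a .mk_disabled suffix so they cannot conflict with the CNI minikube installs. The same idea in plain Go against a local directory (a sketch, not minikube's implementation):

// Sketch only: rename bridge/podman CNI configs in a directory to <name>.mk_disabled.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableConflictingCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Same patterns as the `find ... -name *bridge* -or -name *podman*` call in the log.
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNIConfigs("/etc/cni/net.d")
	fmt.Println("disabled CNI configs:", disabled, "err:", err)
}
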
	I0704 00:10:38.057511   62670 start.go:494] detecting cgroup driver to use...
	I0704 00:10:38.057579   62670 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:10:38.074199   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:10:38.092880   62670 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:10:38.092932   62670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:10:38.106896   62670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:10:38.120887   62670 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:10:38.250139   62670 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:10:36.428467   62327 addons.go:510] duration metric: took 1.372366453s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0704 00:10:37.270816   62327 node_ready.go:53] node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:38.405228   62670 docker.go:233] disabling docker service ...
	I0704 00:10:38.405288   62670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:10:38.421706   62670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:10:38.438033   62670 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:10:38.586777   62670 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:10:38.721090   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:10:38.736951   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:10:38.757708   62670 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0704 00:10:38.757782   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.769723   62670 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:10:38.769796   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.783408   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.796103   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
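
The sed invocations above pin the CRI-O pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf and re-add conmon_cgroup right after the cgroup_manager line. A hedged Go sketch of the same rewrite (assumed helper, not minikube code):

// Sketch only: rewrite the CRI-O drop-in so pause_image and cgroup_manager match the
// values the log sets, dropping any old conmon_cgroup line and re-adding `conmon_cgroup = "pod"`.
package main

import (
	"fmt"
	"os"
	"strings"
)

func rewriteCrioDropIn(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var out []string
	for _, line := range strings.Split(string(data), "\n") {
		trimmed := strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(trimmed, "pause_image"):
			line = fmt.Sprintf("pause_image = %q", pauseImage)
		case strings.HasPrefix(trimmed, "conmon_cgroup"):
			continue // dropped here, re-added right after cgroup_manager below
		case strings.HasPrefix(trimmed, "cgroup_manager"):
			out = append(out, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
			line = `conmon_cgroup = "pod"`
		}
		out = append(out, line)
	}
	return os.WriteFile(path, []byte(strings.Join(out, "\n")), 0o644)
}

func main() {
	err := rewriteCrioDropIn("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.2", "cgroupfs")
	fmt.Println("rewrite err:", err)
}
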
	I0704 00:10:38.809130   62670 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:10:38.822325   62670 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:10:38.837968   62670 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:10:38.838038   62670 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:10:38.854343   62670 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
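
The netfilter prep above first probes the bridge-nf sysctl, falls back to loading br_netfilter when the proc entry is missing, and then enables IPv4 forwarding. A rough Go equivalent (sketch only; needs root):

// Sketch only: load br_netfilter if the bridge sysctl is absent, then turn on ip_forward.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// Mirrors the fallback in the log: the sysctl could not be verified, so load the module.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}

func main() {
	fmt.Println("ensureBridgeNetfilter err:", ensureBridgeNetfilter())
}
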
	I0704 00:10:38.866475   62670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:39.012506   62670 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:10:39.177203   62670 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:10:39.177289   62670 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:10:39.182557   62670 start.go:562] Will wait 60s for crictl version
	I0704 00:10:39.182643   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:39.187153   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:10:39.228774   62670 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:10:39.228851   62670 ssh_runner.go:195] Run: crio --version
	I0704 00:10:39.261929   62670 ssh_runner.go:195] Run: crio --version
	I0704 00:10:39.295133   62670 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0704 00:10:37.787100   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Start
	I0704 00:10:37.787281   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Ensuring networks are active...
	I0704 00:10:37.788053   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Ensuring network default is active
	I0704 00:10:37.788456   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Ensuring network mk-default-k8s-diff-port-995404 is active
	I0704 00:10:37.788965   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Getting domain xml...
	I0704 00:10:37.789842   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Creating domain...
	I0704 00:10:39.119468   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting to get IP...
	I0704 00:10:39.120490   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.121038   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.121123   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:39.121028   63853 retry.go:31] will retry after 205.838778ms: waiting for machine to come up
	I0704 00:10:39.328771   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.329372   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.329402   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:39.329310   63853 retry.go:31] will retry after 383.540497ms: waiting for machine to come up
	I0704 00:10:39.714729   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.715299   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.715333   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:39.715239   63853 retry.go:31] will retry after 349.888862ms: waiting for machine to come up
	I0704 00:10:40.067018   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.067629   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.067658   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:40.067518   63853 retry.go:31] will retry after 560.174181ms: waiting for machine to come up
	I0704 00:10:40.629108   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.629664   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.629700   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:40.629568   63853 retry.go:31] will retry after 655.876993ms: waiting for machine to come up
	I0704 00:10:41.287664   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:41.288213   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:41.288241   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:41.288163   63853 retry.go:31] will retry after 935.211949ms: waiting for machine to come up
	I0704 00:10:42.225062   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:42.225501   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:42.225530   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:42.225448   63853 retry.go:31] will retry after 1.176205334s: waiting for machine to come up
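
The DBG retry lines above come from a helper that polls the libvirt DHCP leases for the new VM's IP with short, randomized, roughly growing delays. A minimal stand-in for that loop; lookupIP is a hypothetical placeholder for the real lease lookup and the sample IP is illustrative, not from the log:

// Sketch only: retry with jittered, growing delays until an IP appears or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

// lookupIP is a placeholder; the real code inspects the libvirt network's DHCP leases.
func lookupIP(attempt int) (string, error) {
	if attempt < 7 {
		return "", errNoLease
	}
	return "192.168.39.12", nil // example value
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(attempt); err == nil {
			return ip, nil
		}
		// Grow the delay each attempt and add jitter, roughly matching the progression in the log.
		delay := time.Duration(attempt) * 200 * time.Millisecond
		delay += time.Duration(rand.Int63n(int64(200 * time.Millisecond)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return "", fmt.Errorf("timed out waiting for machine IP")
}

func main() {
	ip, err := waitForIP(3 * time.Minute)
	fmt.Println(ip, err)
}
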
	I0704 00:10:39.296618   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:39.299265   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:39.299620   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:39.299648   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:39.299857   62670 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0704 00:10:39.304490   62670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:39.318619   62670 kubeadm.go:877] updating cluster {Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:10:39.318749   62670 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0704 00:10:39.318796   62670 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:39.372343   62670 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0704 00:10:39.372406   62670 ssh_runner.go:195] Run: which lz4
	I0704 00:10:39.376979   62670 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0704 00:10:39.382096   62670 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:10:39.382153   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0704 00:10:41.321459   62670 crio.go:462] duration metric: took 1.944522271s to copy over tarball
	I0704 00:10:41.321541   62670 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:10:39.272051   62327 node_ready.go:53] node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:41.776436   62327 node_ready.go:53] node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:42.272096   62327 node_ready.go:49] node "embed-certs-687975" has status "Ready":"True"
	I0704 00:10:42.272126   62327 node_ready.go:38] duration metric: took 7.004853642s for node "embed-certs-687975" to be "Ready" ...
	I0704 00:10:42.272139   62327 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:10:42.278133   62327 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.284704   62327 pod_ready.go:92] pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:42.284730   62327 pod_ready.go:81] duration metric: took 6.568077ms for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.284740   62327 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.292234   62327 pod_ready.go:92] pod "etcd-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:42.292263   62327 pod_ready.go:81] duration metric: took 7.515519ms for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.292276   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:43.403633   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:43.404251   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:43.404302   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:43.404180   63853 retry.go:31] will retry after 1.24046978s: waiting for machine to come up
	I0704 00:10:44.646709   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:44.647208   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:44.647234   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:44.647165   63853 retry.go:31] will retry after 1.631352494s: waiting for machine to come up
	I0704 00:10:46.280048   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:46.280543   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:46.280574   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:46.280492   63853 retry.go:31] will retry after 1.855805317s: waiting for machine to come up
	I0704 00:10:44.545333   62670 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.223758075s)
	I0704 00:10:44.545366   62670 crio.go:469] duration metric: took 3.223876515s to extract the tarball
	I0704 00:10:44.545404   62670 ssh_runner.go:146] rm: /preloaded.tar.lz4
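
The preload path above copies the image tarball to the guest, extracts it under /var with lz4 while preserving xattrs, and removes the tarball afterwards. A hedged sketch of the extract-and-clean-up step; here it shells out locally rather than over SSH:

// Sketch only: extract /preloaded.tar.lz4 into /var with the same tar flags as the log, then delete it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball missing: %w", err)
	}
	start := time.Now()
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return err
	}
	fmt.Printf("took %s to extract the tarball\n", time.Since(start))
	return os.Remove(tarball)
}

func main() {
	fmt.Println("extract err:", extractPreload("/preloaded.tar.lz4"))
}
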
	I0704 00:10:44.589369   62670 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:44.625017   62670 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0704 00:10:44.625055   62670 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0704 00:10:44.625143   62670 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:44.625161   62670 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.625191   62670 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:44.625372   62670 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.625393   62670 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.625146   62670 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.625223   62670 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.625700   62670 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0704 00:10:44.627479   62670 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.627544   62670 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.627580   62670 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:44.627586   62670 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0704 00:10:44.627589   62670 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.627580   62670 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.627641   62670 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:44.627665   62670 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.773014   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.821672   62670 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0704 00:10:44.821726   62670 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.821788   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.826460   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.841857   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.870213   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0704 00:10:44.895356   62670 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0704 00:10:44.895414   62670 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.895466   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.897160   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0704 00:10:44.901356   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.964305   62670 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0704 00:10:44.964356   62670 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0704 00:10:44.964404   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.964395   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0704 00:10:44.969048   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0704 00:10:44.982913   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.985558   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.990064   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.993167   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.015558   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0704 00:10:45.092189   62670 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0704 00:10:45.092237   62670 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:45.092309   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.104690   62670 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0704 00:10:45.104733   62670 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:45.104795   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.130208   62670 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0704 00:10:45.130254   62670 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.130271   62670 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0704 00:10:45.130295   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.130337   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:45.130297   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:45.130298   62670 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:45.130442   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.181491   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0704 00:10:45.181583   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.181598   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0704 00:10:45.181666   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:45.234459   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0704 00:10:45.234563   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0704 00:10:45.533133   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:45.680954   62670 cache_images.go:92] duration metric: took 1.055880702s to LoadCachedImages
	W0704 00:10:45.681039   62670 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
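
The cache_images flow above asks the runtime whether each required image is already present (podman image inspect), clears any stale tag with crictl rmi when it is not, and falls back to loading the image from minikube's local cache directory. A rough sketch of that decision loop; the cache directory layout shown here is an assumption, not taken verbatim from the log:

// Sketch only: probe for images, mark missing ones for transfer, and compute a cache path to load from.
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

func imagePresent(image string) bool {
	// Same probe as the log: `podman image inspect --format {{.Id}} <image>`.
	return exec.Command("podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil
}

func cachePathFor(cacheDir, image string) string {
	// e.g. registry.k8s.io/kube-apiserver:v1.20.0 -> <cacheDir>/registry.k8s.io/kube-apiserver_v1.20.0
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

func main() {
	cacheDir := "/home/jenkins/.minikube/cache/images/amd64" // assumed layout
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/etcd:3.4.13-0",
		"registry.k8s.io/pause:3.2",
	}
	for _, img := range images {
		if imagePresent(img) {
			fmt.Println("already present:", img)
			continue
		}
		fmt.Println("needs transfer:", img)
		_ = exec.Command("/usr/bin/crictl", "rmi", img).Run() // ignore "not found" errors
		fmt.Println("would load from:", cachePathFor(cacheDir, img))
	}
}
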
	I0704 00:10:45.681053   62670 kubeadm.go:928] updating node { 192.168.72.59 8443 v1.20.0 crio true true} ...
	I0704 00:10:45.681176   62670 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-979033 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:10:45.681268   62670 ssh_runner.go:195] Run: crio config
	I0704 00:10:45.734964   62670 cni.go:84] Creating CNI manager for ""
	I0704 00:10:45.734992   62670 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:10:45.735009   62670 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:10:45.735034   62670 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.59 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-979033 NodeName:old-k8s-version-979033 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0704 00:10:45.735206   62670 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-979033"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.59"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:10:45.735287   62670 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0704 00:10:45.747614   62670 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:10:45.747700   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:10:45.759063   62670 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0704 00:10:45.778439   62670 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:10:45.798877   62670 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0704 00:10:45.820513   62670 ssh_runner.go:195] Run: grep 192.168.72.59	control-plane.minikube.internal$ /etc/hosts
	I0704 00:10:45.825346   62670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.59	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
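
The /etc/hosts update above drops any existing line for control-plane.minikube.internal, appends the desired "IP<TAB>name" entry, and writes the result back through a temporary file. The same idea in Go (sketch only; the temp file name here is an assumption, the log uses /tmp/h.$$):

// Sketch only: idempotently ensure a single "ip<TAB>name" mapping in a hosts file.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Remove any previous mapping for this name (matches the `grep -v` in the log).
		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".minikube.tmp" // assumed temp name
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	err := ensureHostsEntry("/etc/hosts", "192.168.72.59", "control-plane.minikube.internal")
	fmt.Println("hosts update err:", err)
}
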
	I0704 00:10:45.839720   62670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:45.957373   62670 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:10:45.975621   62670 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033 for IP: 192.168.72.59
	I0704 00:10:45.975645   62670 certs.go:194] generating shared ca certs ...
	I0704 00:10:45.975671   62670 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:45.975845   62670 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:10:45.975940   62670 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:10:45.975956   62670 certs.go:256] generating profile certs ...
	I0704 00:10:45.976086   62670 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.key
	I0704 00:10:45.976184   62670 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key.03500654
	I0704 00:10:45.976236   62670 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key
	I0704 00:10:45.976376   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:10:45.976416   62670 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:10:45.976430   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:10:45.976468   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:10:45.976506   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:10:45.976541   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:10:45.976601   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:45.977480   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:10:46.016391   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:10:46.062987   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:10:46.103769   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:10:46.143109   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0704 00:10:46.193832   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:10:46.223781   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:10:46.263822   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0704 00:10:46.298657   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:10:46.325454   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:10:46.351804   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:10:46.379279   62670 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:10:46.397706   62670 ssh_runner.go:195] Run: openssl version
	I0704 00:10:46.404638   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:10:46.416778   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.422402   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.422475   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.428803   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:10:46.441082   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:10:46.453211   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.458313   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.458383   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.464706   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:10:46.476888   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:10:46.489083   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.494780   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.494856   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.501321   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:10:46.513595   62670 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:10:46.518722   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:10:46.525758   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:10:46.532590   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:10:46.540129   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:10:46.547113   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:10:46.553840   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
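
Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check in pure Go with crypto/x509 (a sketch over the cert paths from the log):

// Sketch only: report whether each PEM certificate expires within a given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		soon, err := expiresWithin(c, 24*time.Hour)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", c, soon)
	}
}
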
	I0704 00:10:46.560502   62670 kubeadm.go:391] StartCluster: {Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:10:46.560590   62670 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:10:46.560656   62670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:46.605334   62670 cri.go:89] found id: ""
	I0704 00:10:46.605411   62670 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:10:46.619333   62670 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:10:46.619356   62670 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:10:46.619362   62670 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:10:46.619407   62670 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:10:46.631203   62670 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:10:46.632519   62670 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-979033" does not appear in /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:10:46.633417   62670 kubeconfig.go:62] /home/jenkins/minikube-integration/18998-9396/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-979033" cluster setting kubeconfig missing "old-k8s-version-979033" context setting]
	I0704 00:10:46.634783   62670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:46.637143   62670 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:10:46.649250   62670 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.59
	I0704 00:10:46.649285   62670 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:10:46.649297   62670 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:10:46.649351   62670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:46.691240   62670 cri.go:89] found id: ""
	I0704 00:10:46.691317   62670 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:10:46.710687   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:10:46.721650   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:10:46.721675   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:10:46.721728   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:10:46.731444   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:10:46.731517   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:10:46.741556   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:10:46.751544   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:10:46.751600   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:10:46.764187   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:10:46.775160   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:10:46.775224   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:10:46.785686   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:10:46.795475   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:10:46.795545   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:10:46.806960   62670 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:10:46.818355   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:46.984379   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:47.639953   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:47.883263   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:48.001200   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:48.116034   62670 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:10:48.116121   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
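	(Editor's illustration) The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs that follow are the apiserver-process wait loop announced by api_server.go above. As a minimal, hypothetical Go sketch of that polling pattern, assuming a ~500ms interval as suggested by the log timestamps (this is not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls pgrep until a kube-apiserver process
	// appears or the timeout expires. The pgrep arguments mirror the log lines.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 when at least one matching process is found.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}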
	I0704 00:10:45.284973   62327 pod_ready.go:102] pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:46.800145   62327 pod_ready.go:92] pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.800170   62327 pod_ready.go:81] duration metric: took 4.507886037s for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.800179   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.805577   62327 pod_ready.go:92] pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.805599   62327 pod_ready.go:81] duration metric: took 5.413826ms for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.805611   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.811066   62327 pod_ready.go:92] pod "kube-proxy-9phtm" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.811085   62327 pod_ready.go:81] duration metric: took 5.469666ms for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.811094   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.815670   62327 pod_ready.go:92] pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.815690   62327 pod_ready.go:81] duration metric: took 4.589606ms for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.815700   62327 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:48.822325   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:48.137949   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:48.138359   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:48.138387   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:48.138307   63853 retry.go:31] will retry after 2.765241886s: waiting for machine to come up
	I0704 00:10:50.905039   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:50.905688   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:50.905724   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:50.905624   63853 retry.go:31] will retry after 3.145956682s: waiting for machine to come up
	I0704 00:10:48.617078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:49.116898   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:49.617127   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.116442   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.617078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:51.117096   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:51.617176   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:52.116333   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:52.616675   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:53.116408   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.822990   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:52.823438   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:54.053147   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:54.053593   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:54.053630   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:54.053544   63853 retry.go:31] will retry after 4.352124904s: waiting for machine to come up
	I0704 00:10:53.616873   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:54.116661   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:54.616248   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:55.116316   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:55.616460   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:56.116311   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:56.616502   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:57.116856   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:57.616948   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:58.117055   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:54.829173   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:57.322196   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:59.628966   62043 start.go:364] duration metric: took 56.236390336s to acquireMachinesLock for "no-preload-317739"
	I0704 00:10:59.629020   62043 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:10:59.629029   62043 fix.go:54] fixHost starting: 
	I0704 00:10:59.629441   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:59.629483   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:59.649272   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0704 00:10:59.649745   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:59.650216   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:10:59.650245   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:59.650615   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:59.650807   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:10:59.650944   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:10:59.652724   62043 fix.go:112] recreateIfNeeded on no-preload-317739: state=Stopped err=<nil>
	I0704 00:10:59.652750   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	W0704 00:10:59.652901   62043 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:10:59.655010   62043 out.go:177] * Restarting existing kvm2 VM for "no-preload-317739" ...
	I0704 00:10:59.656335   62043 main.go:141] libmachine: (no-preload-317739) Calling .Start
	I0704 00:10:59.656519   62043 main.go:141] libmachine: (no-preload-317739) Ensuring networks are active...
	I0704 00:10:59.657343   62043 main.go:141] libmachine: (no-preload-317739) Ensuring network default is active
	I0704 00:10:59.657714   62043 main.go:141] libmachine: (no-preload-317739) Ensuring network mk-no-preload-317739 is active
	I0704 00:10:59.658209   62043 main.go:141] libmachine: (no-preload-317739) Getting domain xml...
	I0704 00:10:59.658812   62043 main.go:141] libmachine: (no-preload-317739) Creating domain...
	I0704 00:10:58.407312   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.407865   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Found IP for machine: 192.168.50.164
	I0704 00:10:58.407924   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has current primary IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.407935   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Reserving static IP address...
	I0704 00:10:58.408356   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Reserved static IP address: 192.168.50.164
	I0704 00:10:58.408378   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for SSH to be available...
	I0704 00:10:58.408396   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-995404", mac: "52:54:00:ea:f6:7c", ip: "192.168.50.164"} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.408414   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | skip adding static IP to network mk-default-k8s-diff-port-995404 - found existing host DHCP lease matching {name: "default-k8s-diff-port-995404", mac: "52:54:00:ea:f6:7c", ip: "192.168.50.164"}
	I0704 00:10:58.408423   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Getting to WaitForSSH function...
	I0704 00:10:58.410737   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.411074   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.411103   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.411308   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Using SSH client type: external
	I0704 00:10:58.411344   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa (-rw-------)
	I0704 00:10:58.411384   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:10:58.411425   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | About to run SSH command:
	I0704 00:10:58.411445   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | exit 0
	I0704 00:10:58.532351   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | SSH cmd err, output: <nil>: 
	I0704 00:10:58.532719   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetConfigRaw
	I0704 00:10:58.533366   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:10:58.536176   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.536613   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.536640   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.536886   62905 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/config.json ...
	I0704 00:10:58.537129   62905 machine.go:94] provisionDockerMachine start ...
	I0704 00:10:58.537149   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:58.537389   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.539581   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.539946   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.539976   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.540099   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.540327   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.540549   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.540785   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.540976   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:58.541155   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:58.541166   62905 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:10:58.644667   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:10:58.644716   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetMachineName
	I0704 00:10:58.644986   62905 buildroot.go:166] provisioning hostname "default-k8s-diff-port-995404"
	I0704 00:10:58.645012   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetMachineName
	I0704 00:10:58.645256   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.648091   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.648519   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.648549   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.648691   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.648975   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.649174   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.649393   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.649608   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:58.649831   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:58.649857   62905 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-995404 && echo "default-k8s-diff-port-995404" | sudo tee /etc/hostname
	I0704 00:10:58.765130   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-995404
	
	I0704 00:10:58.765164   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.768571   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.768933   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.768961   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.769127   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.769343   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.769516   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.769675   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.769843   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:58.770014   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:58.770030   62905 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-995404' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-995404/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-995404' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:10:58.877852   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:10:58.877885   62905 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:10:58.877942   62905 buildroot.go:174] setting up certificates
	I0704 00:10:58.877955   62905 provision.go:84] configureAuth start
	I0704 00:10:58.877968   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetMachineName
	I0704 00:10:58.878318   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:10:58.880988   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.881321   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.881349   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.881516   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.883893   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.884213   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.884237   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.884398   62905 provision.go:143] copyHostCerts
	I0704 00:10:58.884459   62905 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:10:58.884468   62905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:10:58.884523   62905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:10:58.884628   62905 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:10:58.884639   62905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:10:58.884672   62905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:10:58.884747   62905 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:10:58.884757   62905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:10:58.884782   62905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:10:58.884838   62905 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-995404 san=[127.0.0.1 192.168.50.164 default-k8s-diff-port-995404 localhost minikube]
	I0704 00:10:58.960337   62905 provision.go:177] copyRemoteCerts
	I0704 00:10:58.960408   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:10:58.960442   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.962980   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.963389   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.963416   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.963585   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.963754   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.963905   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.964040   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.042670   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0704 00:10:59.073047   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:10:59.100579   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0704 00:10:59.127978   62905 provision.go:87] duration metric: took 250.007645ms to configureAuth
	I0704 00:10:59.128006   62905 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:10:59.128261   62905 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:10:59.128363   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.131470   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.131852   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.131906   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.132130   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.132405   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.132598   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.132771   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.132969   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:59.133176   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:59.133197   62905 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:10:59.393756   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:10:59.393791   62905 machine.go:97] duration metric: took 856.647704ms to provisionDockerMachine
	I0704 00:10:59.393808   62905 start.go:293] postStartSetup for "default-k8s-diff-port-995404" (driver="kvm2")
	I0704 00:10:59.393822   62905 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:10:59.393845   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.394143   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:10:59.394170   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.396996   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.397335   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.397366   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.397556   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.397768   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.397950   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.398094   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.479476   62905 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:10:59.484191   62905 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:10:59.484220   62905 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:10:59.484291   62905 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:10:59.484395   62905 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:10:59.484540   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:10:59.495504   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:59.520952   62905 start.go:296] duration metric: took 127.128284ms for postStartSetup
	I0704 00:10:59.521006   62905 fix.go:56] duration metric: took 21.75944045s for fixHost
	I0704 00:10:59.521029   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.523896   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.524210   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.524243   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.524360   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.524586   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.524777   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.524975   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.525166   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:59.525322   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:59.525339   62905 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:10:59.628816   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051859.612598562
	
	I0704 00:10:59.628848   62905 fix.go:216] guest clock: 1720051859.612598562
	I0704 00:10:59.628857   62905 fix.go:229] Guest: 2024-07-04 00:10:59.612598562 +0000 UTC Remote: 2024-07-04 00:10:59.52101038 +0000 UTC m=+237.085876440 (delta=91.588182ms)
	I0704 00:10:59.628881   62905 fix.go:200] guest clock delta is within tolerance: 91.588182ms
	I0704 00:10:59.628887   62905 start.go:83] releasing machines lock for "default-k8s-diff-port-995404", held for 21.867375782s
	I0704 00:10:59.628917   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.629243   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:10:59.632256   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.632628   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.632656   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.632816   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.633343   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.633561   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.633655   62905 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:10:59.633693   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.633774   62905 ssh_runner.go:195] Run: cat /version.json
	I0704 00:10:59.633792   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.636540   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.636660   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.636943   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.636972   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.637079   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.637097   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.637107   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.637292   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.637295   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.637491   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.637498   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.637650   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.637654   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.637779   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.713988   62905 ssh_runner.go:195] Run: systemctl --version
	I0704 00:10:59.743264   62905 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:10:59.895553   62905 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:10:59.902538   62905 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:10:59.902604   62905 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:10:59.919858   62905 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:10:59.919899   62905 start.go:494] detecting cgroup driver to use...
	I0704 00:10:59.919964   62905 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:10:59.940739   62905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:10:59.961053   62905 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:10:59.961114   62905 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:10:59.980549   62905 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:11:00.002843   62905 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:11:00.133319   62905 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:11:00.307416   62905 docker.go:233] disabling docker service ...
	I0704 00:11:00.307484   62905 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:11:00.325714   62905 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:11:00.342008   62905 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:11:00.469418   62905 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:11:00.594775   62905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:11:00.612900   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:11:00.636854   62905 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:11:00.636912   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.650940   62905 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:11:00.651007   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.664849   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.678200   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.691929   62905 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:11:00.708729   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.721874   62905 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.747189   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.766255   62905 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:11:00.778139   62905 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:11:00.778208   62905 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:11:00.794170   62905 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:11:00.805772   62905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:00.945526   62905 ssh_runner.go:195] Run: sudo systemctl restart crio
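	(Editor's illustration) The lines just above, before the crio restart, record the bridge-netfilter and IP-forwarding preparation: the sysctl probe fails until br_netfilter is loaded, then IPv4 forwarding is enabled. A small, hypothetical Go sketch of that sequence, run locally for illustration (the helper name and error handling are assumptions, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and returns its error, loosely mirroring the
	// ssh_runner "Run:" entries in the log (executed locally here).
	func run(name string, args ...string) error {
		return exec.Command(name, args...).Run()
	}

	func main() {
		// The sysctl key is absent until the br_netfilter module is loaded.
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				fmt.Println("modprobe br_netfilter:", err)
				return
			}
		}
		// Then enable IPv4 forwarding, as in the log above.
		if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
			fmt.Println("enable ip_forward:", err)
		}
	}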
	I0704 00:11:01.095767   62905 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:11:01.095849   62905 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:11:01.101337   62905 start.go:562] Will wait 60s for crictl version
	I0704 00:11:01.101410   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:11:01.105792   62905 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:11:01.149911   62905 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:11:01.149983   62905 ssh_runner.go:195] Run: crio --version
	I0704 00:11:01.183494   62905 ssh_runner.go:195] Run: crio --version
	I0704 00:11:01.221773   62905 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:11:01.223142   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:11:01.226142   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:01.226595   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:01.226626   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:01.227009   62905 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0704 00:11:01.231704   62905 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:01.246258   62905 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:11:01.246373   62905 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:11:01.246414   62905 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:11:01.288814   62905 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:11:01.288885   62905 ssh_runner.go:195] Run: which lz4
	I0704 00:11:01.293591   62905 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0704 00:11:01.298567   62905 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:11:01.298606   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0704 00:10:58.617090   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.116577   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.617087   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:00.117110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:00.617014   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:01.117093   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:01.616271   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:02.116809   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:02.617098   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:03.117166   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.323461   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:01.324078   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:03.824174   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:00.942384   62043 main.go:141] libmachine: (no-preload-317739) Waiting to get IP...
	I0704 00:11:00.943186   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:00.943675   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:00.943756   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:00.943653   64017 retry.go:31] will retry after 249.292607ms: waiting for machine to come up
	I0704 00:11:01.194377   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:01.194895   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:01.194954   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:01.194870   64017 retry.go:31] will retry after 262.613081ms: waiting for machine to come up
	I0704 00:11:01.459428   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:01.460003   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:01.460038   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:01.459944   64017 retry.go:31] will retry after 478.141622ms: waiting for machine to come up
	I0704 00:11:01.939357   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:01.939939   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:01.939974   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:01.939898   64017 retry.go:31] will retry after 536.153389ms: waiting for machine to come up
	I0704 00:11:02.477947   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:02.478481   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:02.478506   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:02.478420   64017 retry.go:31] will retry after 673.23866ms: waiting for machine to come up
	I0704 00:11:03.153142   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:03.153668   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:03.153700   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:03.153615   64017 retry.go:31] will retry after 826.785177ms: waiting for machine to come up
	I0704 00:11:03.981781   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:03.982279   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:03.982313   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:03.982215   64017 retry.go:31] will retry after 834.05017ms: waiting for machine to come up
	I0704 00:11:04.817689   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:04.818294   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:04.818323   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:04.818249   64017 retry.go:31] will retry after 1.153846982s: waiting for machine to come up
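The growing "will retry after" delays above come from minikube backing off while it polls libvirt for the new domain's DHCP lease. A rough Go sketch of that pattern follows; it is illustrative only, getIP is a hypothetical stand-in, and this is not minikube's actual retry.go helper.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// getIP is a hypothetical stand-in for asking libvirt for the domain's
// current DHCP lease; it fails until the machine has an address.
func getIP() (string, error) { return "", errors.New("no lease yet") }

// waitForIP retries getIP with a jittered, roughly exponential backoff,
// which is the shape of the growing "will retry after" delays above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := getIP(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	if _, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}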
	I0704 00:11:02.979209   62905 crio.go:462] duration metric: took 1.685660087s to copy over tarball
	I0704 00:11:02.979307   62905 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:11:05.406788   62905 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.427439702s)
	I0704 00:11:05.406816   62905 crio.go:469] duration metric: took 2.427578287s to extract the tarball
	I0704 00:11:05.406823   62905 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:11:05.448710   62905 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:11:05.498336   62905 crio.go:514] all images are preloaded for cri-o runtime.
	I0704 00:11:05.498367   62905 cache_images.go:84] Images are preloaded, skipping loading
	I0704 00:11:05.498375   62905 kubeadm.go:928] updating node { 192.168.50.164 8444 v1.30.2 crio true true} ...
	I0704 00:11:05.498487   62905 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-995404 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:11:05.498549   62905 ssh_runner.go:195] Run: crio config
	I0704 00:11:05.552676   62905 cni.go:84] Creating CNI manager for ""
	I0704 00:11:05.552706   62905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:05.552717   62905 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:11:05.552738   62905 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.164 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-995404 NodeName:default-k8s-diff-port-995404 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:11:05.552895   62905 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.164
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-995404"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
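The "0%!"(MISSING) values in the KubeletConfiguration above are not corruption in this report: the intended thresholds are the literal string "0%", and the artifact most likely appears because that text is later passed through Go's fmt package as a format string, so %" is read as a verb with a missing argument. A minimal reproduction of that fmt behavior (the mechanism inside minikube's config dump is an assumption here):

package main

import "fmt"

func main() {
	// The kubelet template really contains the literal value "0%".
	line := `nodefs.available: "0%"`

	// Passing that text through Sprintf as a format string makes fmt
	// treat %" as a verb with no argument, which prints as
	// %!"(MISSING) -- the artifact visible in the config dump above.
	// (go vet would warn about the non-constant format string.)
	fmt.Println(fmt.Sprintf(line))
}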
	I0704 00:11:05.552966   62905 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:11:05.564067   62905 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:11:05.564149   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:11:05.574991   62905 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0704 00:11:05.597644   62905 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:11:05.619456   62905 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0704 00:11:05.640655   62905 ssh_runner.go:195] Run: grep 192.168.50.164	control-plane.minikube.internal$ /etc/hosts
	I0704 00:11:05.644975   62905 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:05.659570   62905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:05.800862   62905 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:11:05.821044   62905 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404 for IP: 192.168.50.164
	I0704 00:11:05.821068   62905 certs.go:194] generating shared ca certs ...
	I0704 00:11:05.821087   62905 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:05.821258   62905 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:11:05.821312   62905 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:11:05.821324   62905 certs.go:256] generating profile certs ...
	I0704 00:11:05.821424   62905 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/client.key
	I0704 00:11:05.821496   62905 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/apiserver.key.4c35c707
	I0704 00:11:05.821547   62905 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/proxy-client.key
	I0704 00:11:05.821689   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:11:05.821729   62905 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:11:05.821741   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:11:05.821773   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:11:05.821800   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:11:05.821831   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:11:05.821893   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:11:05.822753   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:11:05.867477   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:11:05.914405   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:11:05.952321   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:11:05.989578   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0704 00:11:06.031270   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:11:06.067171   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:11:06.096850   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0704 00:11:06.127959   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:11:06.156780   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:11:06.187472   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:11:06.216078   62905 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:11:06.239490   62905 ssh_runner.go:195] Run: openssl version
	I0704 00:11:06.246358   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:11:06.259420   62905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:11:06.266320   62905 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:11:06.266394   62905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:11:06.273098   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:11:06.285864   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:11:06.298505   62905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:06.303642   62905 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:06.303734   62905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:06.310459   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:11:06.325238   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:11:06.342534   62905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:11:06.349585   62905 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:11:06.349659   62905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:11:06.358043   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:11:06.374741   62905 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:11:06.380246   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:11:06.387593   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:11:06.394954   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:11:06.402600   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:11:06.409731   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:11:06.416688   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
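Each "openssl x509 -checkend 86400" run above simply asks whether the certificate will still be valid 24 hours from now (exit status 0 if it will, non-zero if it would expire within that window). A small Go equivalent of that check, assuming a PEM-encoded certificate on disk; the path in main is illustrative and this is not minikube's certs.go code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path is illustrative; on the test VM the certs live under
	// /var/lib/minikube/certs/.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	fmt.Println("expires within 24h:", soon)
}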
	I0704 00:11:06.423435   62905 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:11:06.423559   62905 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:11:06.423620   62905 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:06.470763   62905 cri.go:89] found id: ""
	I0704 00:11:06.470846   62905 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:11:06.482587   62905 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:11:06.482611   62905 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:11:06.482617   62905 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:11:06.482667   62905 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:11:06.497553   62905 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:11:06.498625   62905 kubeconfig.go:125] found "default-k8s-diff-port-995404" server: "https://192.168.50.164:8444"
	I0704 00:11:06.500884   62905 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:11:06.514955   62905 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.164
	I0704 00:11:06.514990   62905 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:11:06.515004   62905 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:11:06.515063   62905 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:06.560079   62905 cri.go:89] found id: ""
	I0704 00:11:06.560153   62905 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:11:06.579839   62905 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:11:06.591817   62905 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:11:06.591845   62905 kubeadm.go:156] found existing configuration files:
	
	I0704 00:11:06.591939   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0704 00:11:06.602820   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:11:06.602891   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:11:06.615114   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0704 00:11:06.626812   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:11:06.626906   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:11:06.638990   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0704 00:11:06.650344   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:11:06.650412   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:11:06.662736   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0704 00:11:06.673392   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:11:06.673468   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:11:06.684908   62905 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:11:06.696008   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:06.827071   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:03.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:04.117137   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:04.616945   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:05.117085   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:05.616894   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.116767   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.616746   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:07.116615   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:07.616302   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:08.116699   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.324083   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:08.832523   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:05.974211   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:05.974953   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:05.974981   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:05.974853   64017 retry.go:31] will retry after 1.513213206s: waiting for machine to come up
	I0704 00:11:07.489878   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:07.490415   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:07.490447   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:07.490366   64017 retry.go:31] will retry after 1.861027199s: waiting for machine to come up
	I0704 00:11:09.353265   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:09.353877   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:09.353909   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:09.353788   64017 retry.go:31] will retry after 2.788986438s: waiting for machine to come up
	I0704 00:11:07.860520   62905 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.033413742s)
	I0704 00:11:07.860555   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:08.112931   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:08.199561   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:08.297827   62905 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:11:08.297919   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:08.798666   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.299001   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.326939   62905 api_server.go:72] duration metric: took 1.029121669s to wait for apiserver process to appear ...
	I0704 00:11:09.326980   62905 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:11:09.327006   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:09.327687   62905 api_server.go:269] stopped: https://192.168.50.164:8444/healthz: Get "https://192.168.50.164:8444/healthz": dial tcp 192.168.50.164:8444: connect: connection refused
	I0704 00:11:09.827140   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:12.356043   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:11:12.356074   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:11:12.356090   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:12.431816   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:12.431868   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:08.617011   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.116544   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.617105   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:10.117154   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:10.616678   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:11.117137   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:11.617077   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.116897   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.617090   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:13.116877   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.827129   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:12.833217   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:12.833244   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:13.327458   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:13.335182   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:13.335216   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:13.827833   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:13.833899   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 200:
	ok
	I0704 00:11:13.845708   62905 api_server.go:141] control plane version: v1.30.2
	I0704 00:11:13.845742   62905 api_server.go:131] duration metric: took 4.518754781s to wait for apiserver health ...
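The healthz progression above (connection refused, then 403 for the anonymous user, then 500 while post-start hooks finish, then 200 ok) is the normal sequence as a restarted apiserver comes up. A rough sketch of that kind of poll follows; it assumes a local test cluster where skipping TLS verification is acceptable and is not minikube's actual api_server.go code:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthy polls the apiserver /healthz endpoint until it returns
// 200 or the timeout elapses, logging intermediate statuses.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Local test cluster: skip verification of the apiserver cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused while the static pod restarts
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitHealthy("https://192.168.50.164:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}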
	I0704 00:11:13.845754   62905 cni.go:84] Creating CNI manager for ""
	I0704 00:11:13.845763   62905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:13.847527   62905 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:11:11.322070   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:13.325898   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:13.848990   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:11:13.866061   62905 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0704 00:11:13.895651   62905 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:11:13.907155   62905 system_pods.go:59] 8 kube-system pods found
	I0704 00:11:13.907202   62905 system_pods.go:61] "coredns-7db6d8ff4d-jmq4s" [f9725f92-7635-4111-bf63-66dbef0155b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0704 00:11:13.907214   62905 system_pods.go:61] "etcd-default-k8s-diff-port-995404" [d5065c53-cda8-4c79-9d88-de10341356a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0704 00:11:13.907225   62905 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-995404" [81395c0d-d19c-4d3d-8935-c35a9507abdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0704 00:11:13.907236   62905 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-995404" [36828d21-843b-409c-a0b3-60293cb50c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0704 00:11:13.907245   62905 system_pods.go:61] "kube-proxy-pplqq" [3b74a8c2-1e91-449d-9be9-8891459dccbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0704 00:11:13.907255   62905 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-995404" [e87cc576-7a8d-43dd-9778-b13d751976be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0704 00:11:13.907267   62905 system_pods.go:61] "metrics-server-569cc877fc-v8qw2" [d6a67fb7-5004-4c93-9023-fc470f786ae9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:11:13.907278   62905 system_pods.go:61] "storage-provisioner" [3adc3ff6-282f-4f53-879f-c73d71c76b74] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0704 00:11:13.907290   62905 system_pods.go:74] duration metric: took 11.616438ms to wait for pod list to return data ...
	I0704 00:11:13.907304   62905 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:11:13.911071   62905 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:11:13.911108   62905 node_conditions.go:123] node cpu capacity is 2
	I0704 00:11:13.911121   62905 node_conditions.go:105] duration metric: took 3.808665ms to run NodePressure ...
	I0704 00:11:13.911142   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:14.227778   62905 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0704 00:11:14.232972   62905 kubeadm.go:733] kubelet initialised
	I0704 00:11:14.232999   62905 kubeadm.go:734] duration metric: took 5.196343ms waiting for restarted kubelet to initialise ...
	I0704 00:11:14.233008   62905 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:11:14.239587   62905 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.248503   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.248527   62905 pod_ready.go:81] duration metric: took 8.915991ms for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.248536   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.248546   62905 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.252808   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.252833   62905 pod_ready.go:81] duration metric: took 4.278735ms for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.252844   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.252850   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.257839   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.257865   62905 pod_ready.go:81] duration metric: took 5.008527ms for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.257874   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.257881   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.300453   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.300496   62905 pod_ready.go:81] duration metric: took 42.606835ms for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.300514   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.300532   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.699049   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-proxy-pplqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.699081   62905 pod_ready.go:81] duration metric: took 398.532074ms for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.699091   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-proxy-pplqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.699098   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:15.099751   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.099781   62905 pod_ready.go:81] duration metric: took 400.673785ms for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:15.099794   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.099802   62905 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:15.499381   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.499415   62905 pod_ready.go:81] duration metric: took 399.604282ms for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:15.499430   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.499440   62905 pod_ready.go:38] duration metric: took 1.266419771s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
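The pod_ready waits above skip each system pod only because the node itself is not yet Ready after the restart; the per-pod test ultimately comes down to the pod's Ready condition. A minimal illustration using the Kubernetes API types (k8s.io/api/core/v1 is assumed to be available as a module dependency; this is not minikube's pod_ready.go):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is True,
// which is the same condition the pod_ready waits in the log check.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Println("ready:", isPodReady(pod)) // ready: false
}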
	I0704 00:11:15.499472   62905 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:11:15.512486   62905 ops.go:34] apiserver oom_adj: -16
	I0704 00:11:15.512519   62905 kubeadm.go:591] duration metric: took 9.029896614s to restartPrimaryControlPlane
	I0704 00:11:15.512530   62905 kubeadm.go:393] duration metric: took 9.089103352s to StartCluster
	I0704 00:11:15.512545   62905 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:15.512620   62905 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:11:15.514491   62905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:15.514770   62905 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:11:15.514886   62905 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:11:15.514995   62905 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-995404"
	I0704 00:11:15.515051   62905 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-995404"
	I0704 00:11:15.515054   62905 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	W0704 00:11:15.515058   62905 addons.go:243] addon storage-provisioner should already be in state true
	I0704 00:11:15.515045   62905 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-995404"
	I0704 00:11:15.515098   62905 host.go:66] Checking if "default-k8s-diff-port-995404" exists ...
	I0704 00:11:15.515108   62905 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-995404"
	I0704 00:11:15.515100   62905 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-995404"
	I0704 00:11:15.515176   62905 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-995404"
	W0704 00:11:15.515196   62905 addons.go:243] addon metrics-server should already be in state true
	I0704 00:11:15.515258   62905 host.go:66] Checking if "default-k8s-diff-port-995404" exists ...
	I0704 00:11:15.515473   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.515517   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.515554   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.515521   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.515731   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.515773   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.517021   62905 out.go:177] * Verifying Kubernetes components...
	I0704 00:11:15.518682   62905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:15.532184   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0704 00:11:15.532716   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.533287   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.533318   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.533688   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.533710   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36907
	I0704 00:11:15.533894   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.534143   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.534747   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.534774   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.535162   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.535835   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.535895   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.536774   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37887
	I0704 00:11:15.537162   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.537690   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.537702   62905 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-995404"
	W0704 00:11:15.537715   62905 addons.go:243] addon default-storageclass should already be in state true
	I0704 00:11:15.537719   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.537743   62905 host.go:66] Checking if "default-k8s-diff-port-995404" exists ...
	I0704 00:11:15.538134   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.538147   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.538211   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.538756   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.538789   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.554800   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44851
	I0704 00:11:15.554820   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33429
	I0704 00:11:15.555279   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.555417   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.555988   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.556006   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.556255   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.556276   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.556445   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.556628   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.556637   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.556819   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.558057   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38911
	I0704 00:11:15.558381   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.558768   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:11:15.558842   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:11:15.558932   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.558950   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.559179   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.559587   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.559610   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.561573   62905 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:15.561578   62905 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0704 00:11:12.146246   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:12.146817   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:12.146844   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:12.146774   64017 retry.go:31] will retry after 2.705005802s: waiting for machine to come up
	I0704 00:11:14.853545   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:14.854045   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:14.854070   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:14.854001   64017 retry.go:31] will retry after 3.923203683s: waiting for machine to come up
	I0704 00:11:15.563208   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0704 00:11:15.563233   62905 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0704 00:11:15.563259   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:11:15.563282   62905 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:11:15.563297   62905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:11:15.563312   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:11:15.567358   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.567365   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.567758   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:15.567789   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.567823   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:15.567841   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.568374   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:11:15.568472   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:11:15.568596   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:11:15.568652   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:11:15.568744   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:11:15.568833   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:11:15.568853   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:11:15.568955   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:11:15.578317   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40115
	I0704 00:11:15.578737   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.579322   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.579343   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.579673   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.579864   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.582114   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:11:15.582330   62905 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:11:15.582346   62905 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:11:15.582363   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:11:15.585542   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.585917   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:15.585964   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.586130   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:11:15.586317   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:11:15.586503   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:11:15.586677   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:11:15.713704   62905 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:11:15.734147   62905 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-995404" to be "Ready" ...
	I0704 00:11:15.837690   62905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:11:15.858615   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0704 00:11:15.858645   62905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0704 00:11:15.883792   62905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:11:15.904371   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0704 00:11:15.904394   62905 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0704 00:11:15.947164   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:11:15.947205   62905 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0704 00:11:15.976721   62905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:11:16.926851   62905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.089126041s)
	I0704 00:11:16.926885   62905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.043064078s)
	I0704 00:11:16.926909   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.926920   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.926909   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.926989   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.927261   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.927280   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.927290   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.927299   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.927338   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.927382   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.927406   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.927415   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.927423   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.927989   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.928013   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.928022   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.928040   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.928118   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.928187   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.935023   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.935043   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.935367   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.935387   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.963483   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.963508   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.963834   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.963857   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.963866   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.963898   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.964130   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.964181   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.964198   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.964220   62905 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-995404"
	I0704 00:11:16.966338   62905 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0704 00:11:16.967695   62905 addons.go:510] duration metric: took 1.45282727s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0704 00:11:13.616762   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:14.116987   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:14.616559   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.117027   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.617171   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:16.117120   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:16.616978   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:17.116571   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:17.616537   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:18.117113   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.822595   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:18.323016   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:18.782030   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.782543   62043 main.go:141] libmachine: (no-preload-317739) Found IP for machine: 192.168.61.109
	I0704 00:11:18.782568   62043 main.go:141] libmachine: (no-preload-317739) Reserving static IP address...
	I0704 00:11:18.782585   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has current primary IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.782953   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "no-preload-317739", mac: "52:54:00:2a:87:12", ip: "192.168.61.109"} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.782982   62043 main.go:141] libmachine: (no-preload-317739) DBG | skip adding static IP to network mk-no-preload-317739 - found existing host DHCP lease matching {name: "no-preload-317739", mac: "52:54:00:2a:87:12", ip: "192.168.61.109"}
	I0704 00:11:18.782996   62043 main.go:141] libmachine: (no-preload-317739) Reserved static IP address: 192.168.61.109
	I0704 00:11:18.783014   62043 main.go:141] libmachine: (no-preload-317739) Waiting for SSH to be available...
	I0704 00:11:18.783031   62043 main.go:141] libmachine: (no-preload-317739) DBG | Getting to WaitForSSH function...
	I0704 00:11:18.785230   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.785559   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.785593   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.785687   62043 main.go:141] libmachine: (no-preload-317739) DBG | Using SSH client type: external
	I0704 00:11:18.785742   62043 main.go:141] libmachine: (no-preload-317739) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa (-rw-------)
	I0704 00:11:18.785770   62043 main.go:141] libmachine: (no-preload-317739) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:11:18.785801   62043 main.go:141] libmachine: (no-preload-317739) DBG | About to run SSH command:
	I0704 00:11:18.785811   62043 main.go:141] libmachine: (no-preload-317739) DBG | exit 0
	I0704 00:11:18.908065   62043 main.go:141] libmachine: (no-preload-317739) DBG | SSH cmd err, output: <nil>: 
	I0704 00:11:18.908449   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetConfigRaw
	I0704 00:11:18.909142   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:18.911622   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.912075   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.912125   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.912371   62043 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/config.json ...
	I0704 00:11:18.912581   62043 machine.go:94] provisionDockerMachine start ...
	I0704 00:11:18.912599   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:18.912796   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:18.915233   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.915675   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.915709   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.915971   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:18.916175   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:18.916345   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:18.916488   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:18.916689   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:18.916853   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:18.916864   62043 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:11:19.024629   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:11:19.024661   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:11:19.024913   62043 buildroot.go:166] provisioning hostname "no-preload-317739"
	I0704 00:11:19.024929   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:11:19.025143   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.028262   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.028629   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.028653   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.028838   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.029042   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.029233   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.029381   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.029528   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:19.029696   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:19.029708   62043 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-317739 && echo "no-preload-317739" | sudo tee /etc/hostname
	I0704 00:11:19.148642   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-317739
	
	I0704 00:11:19.148679   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.151295   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.151766   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.151788   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.152030   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.152247   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.152438   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.152556   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.152733   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:19.152937   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:19.152953   62043 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-317739' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-317739/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-317739' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:11:19.267475   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:11:19.267510   62043 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:11:19.267541   62043 buildroot.go:174] setting up certificates
	I0704 00:11:19.267553   62043 provision.go:84] configureAuth start
	I0704 00:11:19.267566   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:11:19.267936   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:19.270884   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.271381   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.271409   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.271619   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.274267   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.274641   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.274665   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.274887   62043 provision.go:143] copyHostCerts
	I0704 00:11:19.274950   62043 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:11:19.274962   62043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:11:19.275030   62043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:11:19.275236   62043 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:11:19.275250   62043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:11:19.275284   62043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:11:19.275360   62043 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:11:19.275367   62043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:11:19.275387   62043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:11:19.275440   62043 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.no-preload-317739 san=[127.0.0.1 192.168.61.109 localhost minikube no-preload-317739]
	I0704 00:11:19.642077   62043 provision.go:177] copyRemoteCerts
	I0704 00:11:19.642133   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:11:19.642154   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.645168   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.645553   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.645582   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.645803   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.646005   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.646189   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.646338   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:19.731637   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:11:19.758538   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0704 00:11:19.783554   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0704 00:11:19.809538   62043 provision.go:87] duration metric: took 541.971127ms to configureAuth
	I0704 00:11:19.809571   62043 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:11:19.809800   62043 config.go:182] Loaded profile config "no-preload-317739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:11:19.809877   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.813528   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.814000   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.814042   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.814213   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.814451   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.814641   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.814831   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.815078   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:19.815287   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:19.815328   62043 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:11:20.098956   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:11:20.098984   62043 machine.go:97] duration metric: took 1.186389847s to provisionDockerMachine
	I0704 00:11:20.098999   62043 start.go:293] postStartSetup for "no-preload-317739" (driver="kvm2")
	I0704 00:11:20.099011   62043 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:11:20.099037   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.099367   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:11:20.099397   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.102274   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.102624   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.102650   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.102870   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.103084   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.103254   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.103394   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:20.187063   62043 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:11:20.192127   62043 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:11:20.192159   62043 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:11:20.192253   62043 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:11:20.192344   62043 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:11:20.192451   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:11:20.202990   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:11:20.231649   62043 start.go:296] duration metric: took 132.636585ms for postStartSetup
	I0704 00:11:20.231689   62043 fix.go:56] duration metric: took 20.60266165s for fixHost
	I0704 00:11:20.231708   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.234708   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.235099   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.235129   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.235376   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.235606   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.235813   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.236042   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.236254   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:20.236447   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:20.236460   62043 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:11:20.340846   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051880.311820466
	
	I0704 00:11:20.340874   62043 fix.go:216] guest clock: 1720051880.311820466
	I0704 00:11:20.340883   62043 fix.go:229] Guest: 2024-07-04 00:11:20.311820466 +0000 UTC Remote: 2024-07-04 00:11:20.23169294 +0000 UTC m=+359.429189168 (delta=80.127526ms)
	I0704 00:11:20.340914   62043 fix.go:200] guest clock delta is within tolerance: 80.127526ms
	I0704 00:11:20.340938   62043 start.go:83] releasing machines lock for "no-preload-317739", held for 20.711925187s
	I0704 00:11:20.340963   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.341225   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:20.343787   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.344146   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.344188   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.344360   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.344810   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.344988   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.345061   62043 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:11:20.345094   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.345221   62043 ssh_runner.go:195] Run: cat /version.json
	I0704 00:11:20.345247   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.347703   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.347924   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.348121   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.348150   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.348307   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.348396   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.348423   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.348487   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.348562   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.348645   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.348706   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.348764   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:20.348864   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.348994   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:20.425023   62043 ssh_runner.go:195] Run: systemctl --version
	I0704 00:11:20.456031   62043 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:11:20.601693   62043 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:11:20.609524   62043 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:11:20.609617   62043 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:11:20.628076   62043 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:11:20.628105   62043 start.go:494] detecting cgroup driver to use...
	I0704 00:11:20.628180   62043 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:11:20.646749   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:11:20.663882   62043 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:11:20.663954   62043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:11:20.679371   62043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:11:20.697131   62043 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:11:20.820892   62043 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:11:20.978815   62043 docker.go:233] disabling docker service ...
	I0704 00:11:20.978893   62043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:11:21.003649   62043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:11:21.018708   62043 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:11:21.183699   62043 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:11:21.356015   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:11:21.371775   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:11:21.397901   62043 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:11:21.397977   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.410088   62043 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:11:21.410175   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.422267   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.433879   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.446464   62043 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:11:21.459090   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.474867   62043 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.497013   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.508678   62043 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:11:21.520003   62043 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:11:21.520074   62043 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:11:21.535778   62043 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:11:21.546698   62043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:21.707980   62043 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:11:21.855519   62043 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:11:21.855578   62043 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:11:21.861422   62043 start.go:562] Will wait 60s for crictl version
	I0704 00:11:21.861487   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:21.865898   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:11:21.909151   62043 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:11:21.909231   62043 ssh_runner.go:195] Run: crio --version
	I0704 00:11:21.940532   62043 ssh_runner.go:195] Run: crio --version
	I0704 00:11:21.971921   62043 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:11:17.738168   62905 node_ready.go:53] node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:19.738513   62905 node_ready.go:53] node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:22.238523   62905 node_ready.go:53] node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:18.617104   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:19.116325   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:19.616537   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.116518   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.616709   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:21.117177   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:21.617150   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:22.116980   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:22.616530   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:23.116838   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.824014   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:23.322845   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:21.973345   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:21.976425   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:21.976913   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:21.976941   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:21.977325   62043 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0704 00:11:21.982313   62043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:21.996098   62043 kubeadm.go:877] updating cluster {Name:no-preload-317739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.109 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:11:21.996252   62043 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:11:21.996296   62043 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:11:22.032178   62043 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:11:22.032210   62043 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.2 registry.k8s.io/kube-controller-manager:v1.30.2 registry.k8s.io/kube-scheduler:v1.30.2 registry.k8s.io/kube-proxy:v1.30.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0704 00:11:22.032271   62043 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:22.032305   62043 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.032319   62043 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.032373   62043 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0704 00:11:22.032399   62043 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.032400   62043 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.032375   62043 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.032429   62043 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.033814   62043 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0704 00:11:22.033826   62043 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.033847   62043 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.033812   62043 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.033815   62043 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.033912   62043 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:22.034052   62043 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.034138   62043 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.199984   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.209671   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.236796   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.240953   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.244893   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.260957   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.277666   62043 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.2" does not exist at hash "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe" in container runtime
	I0704 00:11:22.277712   62043 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.277764   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.311908   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0704 00:11:22.314095   62043 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0704 00:11:22.314137   62043 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.314190   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.400926   62043 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0704 00:11:22.400964   62043 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.401011   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.401043   62043 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.2" does not exist at hash "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772" in container runtime
	I0704 00:11:22.401080   62043 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.401121   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.401193   62043 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.2" does not exist at hash "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974" in container runtime
	I0704 00:11:22.401219   62043 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.401255   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.423931   62043 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.2" does not exist at hash "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940" in container runtime
	I0704 00:11:22.423977   62043 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.424024   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.424028   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.525952   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.525991   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.525961   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.526054   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.526136   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.526195   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2
	I0704 00:11:22.526285   62043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0704 00:11:22.649104   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0704 00:11:22.649109   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2
	I0704 00:11:22.649215   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2
	I0704 00:11:22.649248   62043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.2
	I0704 00:11:22.649268   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0704 00:11:22.649283   62043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0704 00:11:22.649217   62043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0704 00:11:22.649319   62043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0704 00:11:22.649349   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.2 (exists)
	I0704 00:11:22.649362   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0704 00:11:22.649386   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0704 00:11:22.649414   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2
	I0704 00:11:22.649486   62043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0704 00:11:22.654629   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.2 (exists)
	I0704 00:11:22.661840   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.2 (exists)
	I0704 00:11:22.919526   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:25.779714   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2: (3.130310457s)
	I0704 00:11:25.779744   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2 from cache
	I0704 00:11:25.779765   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.2
	I0704 00:11:25.779776   62043 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: (3.130431638s)
	I0704 00:11:25.779796   62043 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.2: (3.13049417s)
	I0704 00:11:25.779816   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0704 00:11:25.779817   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2
	I0704 00:11:25.779827   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.2 (exists)
	I0704 00:11:25.779856   62043 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (3.130541061s)
	I0704 00:11:25.779869   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0704 00:11:25.779908   62043 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.860354689s)
	I0704 00:11:25.779936   62043 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0704 00:11:25.779958   62043 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:25.779991   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:23.248630   62905 node_ready.go:49] node "default-k8s-diff-port-995404" has status "Ready":"True"
	I0704 00:11:23.248671   62905 node_ready.go:38] duration metric: took 7.514485634s for node "default-k8s-diff-port-995404" to be "Ready" ...
	I0704 00:11:23.248683   62905 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:11:23.257650   62905 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.272673   62905 pod_ready.go:92] pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:23.272706   62905 pod_ready.go:81] duration metric: took 15.025018ms for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.272730   62905 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.277707   62905 pod_ready.go:92] pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:23.277738   62905 pod_ready.go:81] duration metric: took 4.999575ms for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.277758   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.282447   62905 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:23.282471   62905 pod_ready.go:81] duration metric: took 4.705643ms for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.282481   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.790312   62905 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:24.790337   62905 pod_ready.go:81] duration metric: took 1.507850095s for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.790346   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.837961   62905 pod_ready.go:92] pod "kube-proxy-pplqq" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:24.837985   62905 pod_ready.go:81] duration metric: took 47.632749ms for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.837994   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:25.238771   62905 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:25.238800   62905 pod_ready.go:81] duration metric: took 400.798382ms for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:25.238814   62905 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:27.246820   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:23.616811   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:24.117212   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:24.616915   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.117183   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.616495   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:26.117078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:26.617000   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:27.117057   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:27.616823   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:28.116508   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.326734   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:27.823765   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:27.940196   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2: (2.160353743s)
	I0704 00:11:27.940226   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2 from cache
	I0704 00:11:27.940234   62043 ssh_runner.go:235] Completed: which crictl: (2.160222414s)
	I0704 00:11:27.940320   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:27.940253   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0704 00:11:27.940393   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0704 00:11:27.979809   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0704 00:11:27.979954   62043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0704 00:11:29.403572   62043 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.423593257s)
	I0704 00:11:29.403607   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0704 00:11:29.403699   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2: (1.46328757s)
	I0704 00:11:29.403725   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2 from cache
	I0704 00:11:29.403761   62043 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0704 00:11:29.403822   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0704 00:11:29.247499   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:31.750339   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:28.616737   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:29.117100   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:29.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.117145   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:31.116945   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:31.616330   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:32.117101   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:32.616616   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:33.116964   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.322707   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:32.323955   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:33.202513   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.798664869s)
	I0704 00:11:33.202547   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0704 00:11:33.202573   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0704 00:11:33.202627   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0704 00:11:35.468074   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2: (2.26542461s)
	I0704 00:11:35.468099   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2 from cache
	I0704 00:11:35.468118   62043 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0704 00:11:35.468165   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0704 00:11:34.246217   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:36.246836   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:33.617132   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.117094   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.616914   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:35.117109   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:35.617095   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:36.117232   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:36.617221   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:37.117109   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:37.617116   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:38.116462   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.324255   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:36.823008   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:38.823183   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:37.443636   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.975448204s)
	I0704 00:11:37.443672   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0704 00:11:37.443706   62043 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0704 00:11:37.443759   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0704 00:11:38.405813   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0704 00:11:38.405859   62043 cache_images.go:123] Successfully loaded all cached images
	I0704 00:11:38.405868   62043 cache_images.go:92] duration metric: took 16.373643393s to LoadCachedImages
	I0704 00:11:38.405886   62043 kubeadm.go:928] updating node { 192.168.61.109 8443 v1.30.2 crio true true} ...
	I0704 00:11:38.406011   62043 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-317739 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:11:38.406077   62043 ssh_runner.go:195] Run: crio config
	I0704 00:11:38.452523   62043 cni.go:84] Creating CNI manager for ""
	I0704 00:11:38.452552   62043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:38.452564   62043 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:11:38.452585   62043 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.109 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-317739 NodeName:no-preload-317739 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:11:38.452729   62043 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-317739"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:11:38.452788   62043 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:11:38.463737   62043 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:11:38.463815   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:11:38.473969   62043 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0704 00:11:38.492719   62043 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:11:38.510951   62043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0704 00:11:38.530396   62043 ssh_runner.go:195] Run: grep 192.168.61.109	control-plane.minikube.internal$ /etc/hosts
	I0704 00:11:38.534736   62043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:38.548662   62043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:38.668693   62043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:11:38.686552   62043 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739 for IP: 192.168.61.109
	I0704 00:11:38.686580   62043 certs.go:194] generating shared ca certs ...
	I0704 00:11:38.686601   62043 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:38.686762   62043 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:11:38.686815   62043 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:11:38.686830   62043 certs.go:256] generating profile certs ...
	I0704 00:11:38.686955   62043 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/client.key
	I0704 00:11:38.687015   62043 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/apiserver.key.fbaaa8e5
	I0704 00:11:38.687048   62043 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/proxy-client.key
	I0704 00:11:38.687185   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:11:38.687241   62043 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:11:38.687253   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:11:38.687283   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:11:38.687310   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:11:38.687336   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:11:38.687384   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:11:38.688258   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:11:38.731211   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:11:38.769339   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:11:38.803861   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:11:38.856375   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0704 00:11:38.903970   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0704 00:11:38.933988   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:11:38.962742   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0704 00:11:38.990067   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:11:39.017654   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:11:39.044418   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:11:39.073061   62043 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:11:39.091979   62043 ssh_runner.go:195] Run: openssl version
	I0704 00:11:39.098299   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:11:39.110043   62043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:39.115156   62043 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:39.115229   62043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:39.122107   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:11:39.134113   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:11:39.145947   62043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:11:39.151296   62043 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:11:39.151367   62043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:11:39.158116   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:11:39.170555   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:11:39.182771   62043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:11:39.187922   62043 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:11:39.187980   62043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:11:39.194397   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:11:39.206665   62043 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:11:39.212352   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:11:39.219422   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:11:39.226488   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:11:39.233503   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:11:39.241906   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:11:39.249915   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0704 00:11:39.256813   62043 kubeadm.go:391] StartCluster: {Name:no-preload-317739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.109 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:11:39.256922   62043 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:11:39.256977   62043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:39.303203   62043 cri.go:89] found id: ""
	I0704 00:11:39.303281   62043 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:11:39.315407   62043 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:11:39.315446   62043 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:11:39.315454   62043 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:11:39.315508   62043 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:11:39.327630   62043 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:11:39.328741   62043 kubeconfig.go:125] found "no-preload-317739" server: "https://192.168.61.109:8443"
	I0704 00:11:39.330937   62043 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:11:39.341998   62043 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.109
	I0704 00:11:39.342043   62043 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:11:39.342054   62043 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:11:39.342111   62043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:39.388325   62043 cri.go:89] found id: ""
	I0704 00:11:39.388388   62043 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:11:39.408800   62043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:11:39.419600   62043 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:11:39.419627   62043 kubeadm.go:156] found existing configuration files:
	
	I0704 00:11:39.419679   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:11:39.429630   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:11:39.429685   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:11:39.440630   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:11:39.451260   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:11:39.451331   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:11:39.462847   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:11:39.473571   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:11:39.473636   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:11:39.484558   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:11:39.494914   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:11:39.494983   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:11:39.505423   62043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:11:39.517115   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:39.634364   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:40.407653   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:40.607831   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:40.692358   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:38.746247   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:41.244978   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:38.616739   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:39.117077   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:39.616185   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:40.117134   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:40.616879   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.116543   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.616267   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:42.117061   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:42.617080   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:43.117099   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.323333   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:43.823117   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:40.848560   62043 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:11:40.848652   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.349180   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.849767   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.870137   62043 api_server.go:72] duration metric: took 1.021586191s to wait for apiserver process to appear ...
	I0704 00:11:41.870167   62043 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:11:41.870195   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:41.870657   62043 api_server.go:269] stopped: https://192.168.61.109:8443/healthz: Get "https://192.168.61.109:8443/healthz": dial tcp 192.168.61.109:8443: connect: connection refused
	I0704 00:11:42.371347   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:44.502396   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:11:44.502439   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:11:44.502477   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:44.536593   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:11:44.536636   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:11:44.870429   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:44.877522   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:44.877559   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:45.371097   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:45.375932   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:45.375970   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:45.870776   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:45.880030   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 200:
	ok
	I0704 00:11:45.895702   62043 api_server.go:141] control plane version: v1.30.2
	I0704 00:11:45.895729   62043 api_server.go:131] duration metric: took 4.025556366s to wait for apiserver health ...
	I0704 00:11:45.895737   62043 cni.go:84] Creating CNI manager for ""
	I0704 00:11:45.895743   62043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:45.897406   62043 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
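The block above is the apiserver health wait during a restart: repeated GETs against https://192.168.61.109:8443/healthz return 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks are still pending, then the endpoint flips to 200 and the run moves on to configuring the bridge CNI. A minimal way to reproduce the same poll by hand (assuming curl on the test host and the address shown in the log; a manual check, not minikube's own api_server.go loop):

	# Poll /healthz until it returns HTTP 200; -k skips TLS verification for a quick manual check.
	for i in $(seq 1 60); do
	  code=$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.61.109:8443/healthz)
	  [ "$code" = "200" ] && echo "apiserver healthy" && break
	  echo "healthz returned $code, retrying ..."; sleep 0.5
	done
	# The per-check [+]/[-] breakdown seen above can also be requested explicitly:
	curl -sk 'https://192.168.61.109:8443/healthz?verbose'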
	I0704 00:11:43.245949   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:45.747887   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:43.616868   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:44.117083   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:44.617057   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:45.116941   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:45.617066   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:46.117210   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:46.617116   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:47.116404   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:47.616609   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:48.116518   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:48.116611   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:48.159432   62670 cri.go:89] found id: ""
	I0704 00:11:48.159464   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.159477   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:48.159486   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:48.159553   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:48.199101   62670 cri.go:89] found id: ""
	I0704 00:11:48.199136   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.199144   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:48.199152   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:48.199208   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:48.238058   62670 cri.go:89] found id: ""
	I0704 00:11:48.238079   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.238087   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:48.238092   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:48.238145   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:46.322861   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:48.824946   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:45.898725   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:11:45.923585   62043 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
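Having settled on the bridge CNI, the run creates /etc/cni/net.d and copies in a 496-byte 1-k8s.conflist; the file's contents are not included in this log. If you need to see what was actually written, the file can be read back from inside the VM (profile name taken from the surrounding log lines; the "minikube ssh -- <cmd>" form is assumed to be available on the test host):

	# Read back the CNI config minikube wrote inside the no-preload-317739 VM.
	minikube ssh -p no-preload-317739 -- sudo ls -l /etc/cni/net.d
	minikube ssh -p no-preload-317739 -- sudo cat /etc/cni/net.d/1-k8s.conflist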
	I0704 00:11:45.943430   62043 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:11:45.958774   62043 system_pods.go:59] 8 kube-system pods found
	I0704 00:11:45.958804   62043 system_pods.go:61] "coredns-7db6d8ff4d-pvtv9" [f03f871e-3b09-4fbb-96e5-3e71712dd2fb] Running
	I0704 00:11:45.958811   62043 system_pods.go:61] "etcd-no-preload-317739" [ad364ac9-924e-4e56-90c4-12cbf42c3e83] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0704 00:11:45.958824   62043 system_pods.go:61] "kube-apiserver-no-preload-317739" [2d503950-29dc-47b3-905a-afa85655ca7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0704 00:11:45.958832   62043 system_pods.go:61] "kube-controller-manager-no-preload-317739" [a9cbe158-bf00-478c-8d70-7347e37d68a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0704 00:11:45.958837   62043 system_pods.go:61] "kube-proxy-ffmrg" [c710ce9d-c513-46b1-bcf8-1582d1974861] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0704 00:11:45.958841   62043 system_pods.go:61] "kube-scheduler-no-preload-317739" [07a488b3-7beb-4919-ad57-3f0b55a73bb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0704 00:11:45.958846   62043 system_pods.go:61] "metrics-server-569cc877fc-qn22n" [378b139e-97d6-4dfa-9b56-99dda111ab31] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:11:45.958857   62043 system_pods.go:61] "storage-provisioner" [66ecf6fc-5070-4374-a733-479b9b3cdc0d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0704 00:11:45.958866   62043 system_pods.go:74] duration metric: took 15.413948ms to wait for pod list to return data ...
	I0704 00:11:45.958881   62043 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:11:45.965318   62043 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:11:45.965346   62043 node_conditions.go:123] node cpu capacity is 2
	I0704 00:11:45.965355   62043 node_conditions.go:105] duration metric: took 6.466225ms to run NodePressure ...
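With the CNI in place, the run lists the eight kube-system pods (most still ContainersNotReady after the restart) and then verifies node conditions, reading back an ephemeral-storage capacity of 17734596Ki and a CPU capacity of 2 before invoking kubeadm init phase addon all. Roughly the same checks can be made by hand with kubectl (the kubeconfig context is assumed to carry the profile name):

	# Mirror the system-pods and node-capacity checks from the log.
	kubectl --context no-preload-317739 -n kube-system get pods -o wide
	kubectl --context no-preload-317739 get node no-preload-317739 -o jsonpath='{.status.capacity}'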
	I0704 00:11:45.965371   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:46.324716   62043 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0704 00:11:46.329924   62043 kubeadm.go:733] kubelet initialised
	I0704 00:11:46.329951   62043 kubeadm.go:734] duration metric: took 5.207276ms waiting for restarted kubelet to initialise ...
	I0704 00:11:46.329963   62043 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:11:46.336531   62043 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.341733   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.341758   62043 pod_ready.go:81] duration metric: took 5.197122ms for pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.341769   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.341778   62043 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.348317   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "etcd-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.348341   62043 pod_ready.go:81] duration metric: took 6.552656ms for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.348349   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "etcd-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.348355   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.353840   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "kube-apiserver-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.353864   62043 pod_ready.go:81] duration metric: took 5.503642ms for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.353873   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "kube-apiserver-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.353878   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.362159   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.362205   62043 pod_ready.go:81] duration metric: took 8.315884ms for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.362218   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.362226   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ffmrg" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:47.148496   62043 pod_ready.go:92] pod "kube-proxy-ffmrg" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:47.148533   62043 pod_ready.go:81] duration metric: took 786.291174ms for pod "kube-proxy-ffmrg" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:47.148544   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:49.154946   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
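After the addon phase and kubelet restart, the run waits up to 4m0s per system-critical pod for the Ready condition: coredns, etcd, kube-apiserver and kube-controller-manager are initially skipped because the node itself still reports Ready "False", kube-proxy turns Ready after ~786ms, and kube-scheduler is polled every couple of seconds while it still reports "False". The same condition can be watched directly with kubectl (pod and context names taken from the log; a manual equivalent, not the pod_ready.go code path):

	# Block until the scheduler pod reports Ready, with the same 4-minute budget.
	kubectl --context no-preload-317739 -n kube-system wait pod/kube-scheduler-no-preload-317739 \
	  --for=condition=Ready --timeout=4m
	# Or inspect the condition value that the log keeps printing as "False":
	kubectl --context no-preload-317739 -n kube-system get pod kube-scheduler-no-preload-317739 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'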
	I0704 00:11:48.246804   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:50.249339   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:48.279472   62670 cri.go:89] found id: ""
	I0704 00:11:48.279510   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.279521   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:48.279529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:48.279598   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:48.316814   62670 cri.go:89] found id: ""
	I0704 00:11:48.316833   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.316843   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:48.316851   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:48.316907   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:48.358196   62670 cri.go:89] found id: ""
	I0704 00:11:48.358230   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.358247   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:48.358252   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:48.358310   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:48.404992   62670 cri.go:89] found id: ""
	I0704 00:11:48.405012   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.405019   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:48.405024   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:48.405092   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:48.444358   62670 cri.go:89] found id: ""
	I0704 00:11:48.444385   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.444393   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:48.444401   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:48.444414   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:48.502426   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:48.502462   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:48.517885   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:48.517915   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:48.654987   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:48.655007   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:48.655022   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:48.719857   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:48.719908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
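Interleaved with the waits above is a different profile (pid 62670, running the v1.20.0 binaries) that never finds a running kube-apiserver: every pgrep and crictl ps pass comes back empty for each component, so the harness falls back to the same diagnostics sweep on every cycle: kubelet and CRI-O journals, dmesg, a describe nodes that fails with "connection refused" on localhost:8443, and a container-status listing. The sweep can be reproduced inside that VM with the commands the log shows (run via minikube ssh or a direct SSH session):

	# Diagnostics sweep, copied from the commands in the log above.
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo crictl ps -a || sudo docker ps -a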
	I0704 00:11:51.265451   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:51.279847   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:51.279951   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:51.317907   62670 cri.go:89] found id: ""
	I0704 00:11:51.317942   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.317954   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:51.317963   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:51.318036   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:51.358329   62670 cri.go:89] found id: ""
	I0704 00:11:51.358361   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.358370   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:51.358375   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:51.358440   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:51.396389   62670 cri.go:89] found id: ""
	I0704 00:11:51.396418   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.396426   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:51.396433   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:51.396479   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:51.433921   62670 cri.go:89] found id: ""
	I0704 00:11:51.433954   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.433964   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:51.433972   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:51.434030   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:51.472956   62670 cri.go:89] found id: ""
	I0704 00:11:51.472986   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.472997   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:51.473003   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:51.473064   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:51.511241   62670 cri.go:89] found id: ""
	I0704 00:11:51.511269   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.511277   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:51.511283   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:51.511330   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:51.550622   62670 cri.go:89] found id: ""
	I0704 00:11:51.550647   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.550658   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:51.550665   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:51.550717   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:51.595101   62670 cri.go:89] found id: ""
	I0704 00:11:51.595129   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.595141   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:51.595152   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:51.595167   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:51.662852   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:51.662893   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:51.712755   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:51.712800   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:51.774138   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:51.774181   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:51.789895   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:51.789925   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:51.866376   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:51.325312   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:53.821791   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:51.156502   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:53.158089   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:55.656131   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:52.747469   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:55.248313   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:54.367005   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:54.382875   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:54.382938   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:54.419672   62670 cri.go:89] found id: ""
	I0704 00:11:54.419702   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.419713   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:54.419720   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:54.419790   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:54.464134   62670 cri.go:89] found id: ""
	I0704 00:11:54.464161   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.464170   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:54.464175   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:54.464233   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:54.502825   62670 cri.go:89] found id: ""
	I0704 00:11:54.502848   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.502855   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:54.502861   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:54.502913   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:54.542172   62670 cri.go:89] found id: ""
	I0704 00:11:54.542199   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.542207   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:54.542212   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:54.542275   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:54.580488   62670 cri.go:89] found id: ""
	I0704 00:11:54.580517   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.580527   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:54.580534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:54.580600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:54.616925   62670 cri.go:89] found id: ""
	I0704 00:11:54.616950   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.616959   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:54.616965   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:54.617011   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:54.654388   62670 cri.go:89] found id: ""
	I0704 00:11:54.654416   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.654426   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:54.654434   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:54.654492   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:54.697867   62670 cri.go:89] found id: ""
	I0704 00:11:54.697895   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.697905   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:54.697916   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:54.697948   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:54.753899   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:54.753933   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:54.768684   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:54.768708   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:54.843026   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:54.843052   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:54.843069   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:54.920335   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:54.920388   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:57.463384   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:57.479721   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:57.479809   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:57.521845   62670 cri.go:89] found id: ""
	I0704 00:11:57.521931   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.521944   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:57.521952   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:57.522017   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:57.559595   62670 cri.go:89] found id: ""
	I0704 00:11:57.559626   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.559635   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:57.559642   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:57.559704   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:57.600881   62670 cri.go:89] found id: ""
	I0704 00:11:57.600906   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.600917   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:57.600923   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:57.600984   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:57.646031   62670 cri.go:89] found id: ""
	I0704 00:11:57.646059   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.646068   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:57.646073   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:57.646141   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:57.692031   62670 cri.go:89] found id: ""
	I0704 00:11:57.692057   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.692065   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:57.692071   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:57.692118   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:57.730220   62670 cri.go:89] found id: ""
	I0704 00:11:57.730252   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.730263   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:57.730271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:57.730335   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:57.771323   62670 cri.go:89] found id: ""
	I0704 00:11:57.771350   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.771361   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:57.771369   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:57.771441   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:57.808590   62670 cri.go:89] found id: ""
	I0704 00:11:57.808617   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.808625   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:57.808633   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:57.808644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:57.825034   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:57.825063   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:57.906713   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:57.906734   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:57.906746   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:57.988497   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:57.988533   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:58.056774   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:58.056805   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:55.825329   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:58.322936   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:57.657693   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:58.655007   62043 pod_ready.go:92] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:58.655031   62043 pod_ready.go:81] duration metric: took 11.506481518s for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:58.655040   62043 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace to be "Ready" ...
	I0704 00:12:00.662830   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:57.749330   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:00.244482   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:02.245230   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:00.609663   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:00.623785   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:00.623851   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:00.669164   62670 cri.go:89] found id: ""
	I0704 00:12:00.669187   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.669194   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:00.669200   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:00.669253   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:00.710018   62670 cri.go:89] found id: ""
	I0704 00:12:00.710044   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.710052   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:00.710057   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:00.710107   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:00.747778   62670 cri.go:89] found id: ""
	I0704 00:12:00.747803   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.747810   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:00.747815   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:00.747900   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:00.787312   62670 cri.go:89] found id: ""
	I0704 00:12:00.787339   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.787347   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:00.787352   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:00.787399   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:00.828018   62670 cri.go:89] found id: ""
	I0704 00:12:00.828049   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.828061   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:00.828070   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:00.828135   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:00.864695   62670 cri.go:89] found id: ""
	I0704 00:12:00.864723   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.864734   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:00.864742   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:00.864800   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:00.907804   62670 cri.go:89] found id: ""
	I0704 00:12:00.907833   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.907843   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:00.907850   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:00.907928   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:00.951505   62670 cri.go:89] found id: ""
	I0704 00:12:00.951536   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.951547   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:00.951557   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:00.951573   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:00.997067   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:00.997115   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:01.049321   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:01.049356   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:01.066878   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:01.066908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:01.152888   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:01.152919   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:01.152935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:00.823441   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:03.322789   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:03.161704   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:05.662715   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:04.247328   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:06.746227   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:03.737731   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:03.753151   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:03.753244   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:03.816045   62670 cri.go:89] found id: ""
	I0704 00:12:03.816076   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.816087   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:03.816095   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:03.816154   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:03.857041   62670 cri.go:89] found id: ""
	I0704 00:12:03.857070   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.857081   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:03.857088   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:03.857152   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:03.896734   62670 cri.go:89] found id: ""
	I0704 00:12:03.896763   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.896774   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:03.896781   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:03.896836   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:03.936142   62670 cri.go:89] found id: ""
	I0704 00:12:03.936168   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.936178   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:03.936183   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:03.936258   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:03.974599   62670 cri.go:89] found id: ""
	I0704 00:12:03.974623   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.974631   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:03.974636   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:03.974686   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:04.012822   62670 cri.go:89] found id: ""
	I0704 00:12:04.012851   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.012859   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:04.012865   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:04.012999   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:04.051360   62670 cri.go:89] found id: ""
	I0704 00:12:04.051393   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.051411   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:04.051420   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:04.051485   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:04.090587   62670 cri.go:89] found id: ""
	I0704 00:12:04.090616   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.090627   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:04.090638   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:04.090654   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:04.167427   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:04.167450   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:04.167465   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:04.250550   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:04.250594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:04.299970   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:04.300003   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:04.352960   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:04.352994   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:06.871729   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:06.884948   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:06.885027   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:06.920910   62670 cri.go:89] found id: ""
	I0704 00:12:06.920939   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.920950   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:06.920957   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:06.921024   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:06.958701   62670 cri.go:89] found id: ""
	I0704 00:12:06.958731   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.958742   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:06.958750   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:06.958808   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:06.997468   62670 cri.go:89] found id: ""
	I0704 00:12:06.997499   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.997509   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:06.997515   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:06.997564   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:07.033767   62670 cri.go:89] found id: ""
	I0704 00:12:07.033795   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.033806   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:07.033814   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:07.033896   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:07.074189   62670 cri.go:89] found id: ""
	I0704 00:12:07.074218   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.074229   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:07.074241   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:07.074307   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:07.110517   62670 cri.go:89] found id: ""
	I0704 00:12:07.110544   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.110554   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:07.110562   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:07.110615   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:07.146600   62670 cri.go:89] found id: ""
	I0704 00:12:07.146627   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.146635   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:07.146641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:07.146690   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:07.180799   62670 cri.go:89] found id: ""
	I0704 00:12:07.180826   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.180834   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:07.180843   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:07.180859   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:07.222473   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:07.222503   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:07.281453   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:07.281498   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:07.296335   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:07.296364   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:07.375751   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:07.375782   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:07.375805   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:05.323723   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:07.822320   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:07.663501   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:10.163774   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:09.247753   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:11.746082   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:09.954585   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:09.970379   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:09.970470   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:10.011987   62670 cri.go:89] found id: ""
	I0704 00:12:10.012017   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.012028   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:10.012035   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:10.012102   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:10.054940   62670 cri.go:89] found id: ""
	I0704 00:12:10.054971   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.054982   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:10.054989   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:10.055051   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:10.096048   62670 cri.go:89] found id: ""
	I0704 00:12:10.096079   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.096087   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:10.096093   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:10.096143   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:10.141795   62670 cri.go:89] found id: ""
	I0704 00:12:10.141818   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.141826   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:10.141831   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:10.141892   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:10.188257   62670 cri.go:89] found id: ""
	I0704 00:12:10.188283   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.188295   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:10.188302   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:10.188369   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:10.249134   62670 cri.go:89] found id: ""
	I0704 00:12:10.249157   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.249167   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:10.249174   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:10.249233   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:10.309586   62670 cri.go:89] found id: ""
	I0704 00:12:10.309611   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.309622   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:10.309632   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:10.309689   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:10.351027   62670 cri.go:89] found id: ""
	I0704 00:12:10.351054   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.351065   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:10.351074   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:10.351086   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:10.404371   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:10.404411   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:10.419379   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:10.419410   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:10.502977   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:10.503001   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:10.503017   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:10.582149   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:10.582185   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:13.122828   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:13.138522   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:13.138591   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:13.181603   62670 cri.go:89] found id: ""
	I0704 00:12:13.181634   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.181645   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:13.181653   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:13.181711   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:13.219066   62670 cri.go:89] found id: ""
	I0704 00:12:13.219090   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.219098   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:13.219103   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:13.219159   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:09.822778   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:12.322555   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:12.165249   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:14.663051   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:14.248889   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:16.746104   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:13.259570   62670 cri.go:89] found id: ""
	I0704 00:12:13.259591   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.259599   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:13.259604   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:13.259658   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:13.301577   62670 cri.go:89] found id: ""
	I0704 00:12:13.301605   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.301617   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:13.301625   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:13.301689   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:13.339546   62670 cri.go:89] found id: ""
	I0704 00:12:13.339570   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.339584   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:13.339592   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:13.339649   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:13.378631   62670 cri.go:89] found id: ""
	I0704 00:12:13.378654   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.378665   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:13.378672   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:13.378733   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:13.416818   62670 cri.go:89] found id: ""
	I0704 00:12:13.416843   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.416851   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:13.416856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:13.416908   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:13.452538   62670 cri.go:89] found id: ""
	I0704 00:12:13.452562   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.452570   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:13.452579   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:13.452590   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:13.505556   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:13.505594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:13.522506   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:13.522542   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:13.604513   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:13.604536   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:13.604553   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:13.681501   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:13.681536   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:16.222955   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:16.241979   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:16.242086   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:16.299662   62670 cri.go:89] found id: ""
	I0704 00:12:16.299690   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.299702   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:16.299710   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:16.299772   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:16.342898   62670 cri.go:89] found id: ""
	I0704 00:12:16.342934   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.342944   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:16.342952   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:16.343014   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:16.382387   62670 cri.go:89] found id: ""
	I0704 00:12:16.382408   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.382416   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:16.382422   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:16.382482   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:16.421830   62670 cri.go:89] found id: ""
	I0704 00:12:16.421852   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.421861   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:16.421874   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:16.421934   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:16.459248   62670 cri.go:89] found id: ""
	I0704 00:12:16.459272   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.459282   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:16.459289   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:16.459347   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:16.494675   62670 cri.go:89] found id: ""
	I0704 00:12:16.494704   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.494714   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:16.494725   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:16.494789   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:16.534319   62670 cri.go:89] found id: ""
	I0704 00:12:16.534344   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.534352   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:16.534358   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:16.534407   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:16.571422   62670 cri.go:89] found id: ""
	I0704 00:12:16.571455   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.571467   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:16.571478   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:16.571493   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:16.651019   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:16.651040   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:16.651058   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:16.726538   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:16.726574   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:16.771114   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:16.771145   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:16.824495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:16.824532   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:14.323436   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:16.822647   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:18.823509   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:16.666213   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:19.162586   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:18.746307   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:20.747743   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:19.340941   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:19.355501   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:19.355580   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:19.396845   62670 cri.go:89] found id: ""
	I0704 00:12:19.396872   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.396882   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:19.396902   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:19.396962   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:19.440805   62670 cri.go:89] found id: ""
	I0704 00:12:19.440835   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.440845   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:19.440852   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:19.440913   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:19.477781   62670 cri.go:89] found id: ""
	I0704 00:12:19.477809   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.477820   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:19.477827   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:19.477890   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:19.513042   62670 cri.go:89] found id: ""
	I0704 00:12:19.513067   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.513077   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:19.513084   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:19.513142   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:19.547775   62670 cri.go:89] found id: ""
	I0704 00:12:19.547804   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.547812   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:19.547818   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:19.547867   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:19.586103   62670 cri.go:89] found id: ""
	I0704 00:12:19.586131   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.586142   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:19.586149   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:19.586219   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:19.625529   62670 cri.go:89] found id: ""
	I0704 00:12:19.625556   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.625567   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:19.625574   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:19.625644   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:19.663835   62670 cri.go:89] found id: ""
	I0704 00:12:19.663860   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.663870   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:19.663903   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:19.663919   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:19.719204   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:19.719245   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:19.733871   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:19.733909   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:19.817212   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:19.817240   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:19.817260   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:19.894555   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:19.894595   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:22.438204   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:22.451438   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:22.451507   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:22.489196   62670 cri.go:89] found id: ""
	I0704 00:12:22.489219   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.489226   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:22.489232   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:22.489278   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:22.523870   62670 cri.go:89] found id: ""
	I0704 00:12:22.523917   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.523929   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:22.523936   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:22.523992   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:22.564799   62670 cri.go:89] found id: ""
	I0704 00:12:22.564827   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.564839   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:22.564846   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:22.564905   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:22.603993   62670 cri.go:89] found id: ""
	I0704 00:12:22.604019   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.604027   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:22.604033   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:22.604086   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:22.639749   62670 cri.go:89] found id: ""
	I0704 00:12:22.639780   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.639791   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:22.639799   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:22.639855   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:22.678173   62670 cri.go:89] found id: ""
	I0704 00:12:22.678206   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.678214   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:22.678227   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:22.678279   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:22.718934   62670 cri.go:89] found id: ""
	I0704 00:12:22.718962   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.718971   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:22.718977   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:22.719029   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:22.756334   62670 cri.go:89] found id: ""
	I0704 00:12:22.756362   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.756373   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:22.756383   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:22.756397   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:22.835079   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:22.835113   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:22.877138   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:22.877170   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:22.930427   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:22.930466   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:22.945810   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:22.945838   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:23.021251   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:21.323951   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:23.822002   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:21.165297   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:23.661688   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:23.245394   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:25.748364   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:25.522380   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:25.536705   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:25.536776   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:25.575126   62670 cri.go:89] found id: ""
	I0704 00:12:25.575154   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.575162   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:25.575168   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:25.575223   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:25.612447   62670 cri.go:89] found id: ""
	I0704 00:12:25.612480   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.612488   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:25.612494   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:25.612542   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:25.651652   62670 cri.go:89] found id: ""
	I0704 00:12:25.651677   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.651688   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:25.651696   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:25.651751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:25.690007   62670 cri.go:89] found id: ""
	I0704 00:12:25.690034   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.690042   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:25.690049   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:25.690105   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:25.725041   62670 cri.go:89] found id: ""
	I0704 00:12:25.725093   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.725106   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:25.725114   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:25.725196   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:25.766324   62670 cri.go:89] found id: ""
	I0704 00:12:25.766350   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.766361   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:25.766369   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:25.766430   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:25.803515   62670 cri.go:89] found id: ""
	I0704 00:12:25.803540   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.803548   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:25.803553   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:25.803613   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:25.845016   62670 cri.go:89] found id: ""
	I0704 00:12:25.845046   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.845057   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:25.845067   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:25.845089   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:25.898536   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:25.898570   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:25.913300   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:25.913330   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:25.987372   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:25.987390   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:25.987402   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:26.073931   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:26.073982   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:25.824395   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.324952   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:26.162199   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.162671   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:30.662302   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.246148   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:30.247149   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
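	Interleaved with those probes, three other profiles (PIDs 62327, 62043 and 62905) are each polling a metrics-server pod that never reports Ready. Outside the harness, an equivalent hand check would be a kubectl wait, roughly as below; <profile> is a placeholder for the affected minikube context, and k8s-app=metrics-server is the label the addon's manifest normally carries:

	# Illustrative; <profile> stands for the minikube context of the affected cluster.
	kubectl --context <profile> -n kube-system wait pod \
	  -l k8s-app=metrics-server --for=condition=Ready --timeout=120s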
	I0704 00:12:28.621179   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:28.634247   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:28.634321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:28.672433   62670 cri.go:89] found id: ""
	I0704 00:12:28.672458   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.672467   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:28.672473   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:28.672522   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:28.712000   62670 cri.go:89] found id: ""
	I0704 00:12:28.712036   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.712049   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:28.712059   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:28.712126   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:28.751170   62670 cri.go:89] found id: ""
	I0704 00:12:28.751202   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.751213   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:28.751222   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:28.751283   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:28.788015   62670 cri.go:89] found id: ""
	I0704 00:12:28.788050   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.788062   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:28.788071   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:28.788141   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:28.826467   62670 cri.go:89] found id: ""
	I0704 00:12:28.826501   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.826511   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:28.826518   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:28.826578   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:28.864375   62670 cri.go:89] found id: ""
	I0704 00:12:28.864397   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.864403   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:28.864408   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:28.864461   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:28.900137   62670 cri.go:89] found id: ""
	I0704 00:12:28.900160   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.900167   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:28.900173   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:28.900220   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:28.934865   62670 cri.go:89] found id: ""
	I0704 00:12:28.934886   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.934894   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:28.934902   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:28.934914   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:28.984100   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:28.984136   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:29.000311   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:29.000340   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:29.083272   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:29.083304   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:29.083318   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:29.164613   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:29.164644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:31.711402   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:31.725076   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:31.725134   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:31.763088   62670 cri.go:89] found id: ""
	I0704 00:12:31.763111   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.763120   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:31.763127   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:31.763197   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:31.800920   62670 cri.go:89] found id: ""
	I0704 00:12:31.800942   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.800952   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:31.800958   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:31.801001   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:31.840841   62670 cri.go:89] found id: ""
	I0704 00:12:31.840872   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.840889   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:31.840897   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:31.840956   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:31.883757   62670 cri.go:89] found id: ""
	I0704 00:12:31.883784   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.883792   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:31.883797   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:31.883855   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:31.922234   62670 cri.go:89] found id: ""
	I0704 00:12:31.922261   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.922270   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:31.922275   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:31.922323   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:31.959691   62670 cri.go:89] found id: ""
	I0704 00:12:31.959717   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.959725   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:31.959731   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:31.959789   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:31.997069   62670 cri.go:89] found id: ""
	I0704 00:12:31.997098   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.997106   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:31.997112   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:31.997182   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:32.032437   62670 cri.go:89] found id: ""
	I0704 00:12:32.032475   62670 logs.go:276] 0 containers: []
	W0704 00:12:32.032484   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:32.032495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:32.032510   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:32.046791   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:32.046823   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:32.118482   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:32.118506   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:32.118519   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:32.206600   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:32.206638   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:32.249940   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:32.249967   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:30.823529   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:33.322802   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:33.161603   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:35.162213   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:32.746670   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:34.746760   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:37.245283   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:34.808364   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:34.822973   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:34.823039   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:34.859617   62670 cri.go:89] found id: ""
	I0704 00:12:34.859640   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.859649   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:34.859654   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:34.859703   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:34.899724   62670 cri.go:89] found id: ""
	I0704 00:12:34.899752   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.899762   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:34.899768   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:34.899830   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:34.939063   62670 cri.go:89] found id: ""
	I0704 00:12:34.939090   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.939098   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:34.939104   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:34.939185   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:34.979062   62670 cri.go:89] found id: ""
	I0704 00:12:34.979090   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.979101   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:34.979108   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:34.979168   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:35.019580   62670 cri.go:89] found id: ""
	I0704 00:12:35.019613   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.019621   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:35.019626   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:35.019674   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:35.064364   62670 cri.go:89] found id: ""
	I0704 00:12:35.064391   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.064399   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:35.064404   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:35.064463   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:35.105004   62670 cri.go:89] found id: ""
	I0704 00:12:35.105032   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.105040   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:35.105046   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:35.105101   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:35.143656   62670 cri.go:89] found id: ""
	I0704 00:12:35.143681   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.143689   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:35.143698   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:35.143709   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:35.203016   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:35.203050   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:35.218808   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:35.218840   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:35.298247   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:35.298269   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:35.298284   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:35.376425   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:35.376463   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:37.918592   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:37.932291   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:37.932370   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:37.967657   62670 cri.go:89] found id: ""
	I0704 00:12:37.967680   62670 logs.go:276] 0 containers: []
	W0704 00:12:37.967688   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:37.967694   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:37.967740   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:38.005522   62670 cri.go:89] found id: ""
	I0704 00:12:38.005557   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.005569   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:38.005576   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:38.005634   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:38.043475   62670 cri.go:89] found id: ""
	I0704 00:12:38.043505   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.043516   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:38.043524   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:38.043589   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:38.080520   62670 cri.go:89] found id: ""
	I0704 00:12:38.080548   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.080557   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:38.080563   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:38.080612   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:38.116292   62670 cri.go:89] found id: ""
	I0704 00:12:38.116322   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.116332   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:38.116338   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:38.116404   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:38.158430   62670 cri.go:89] found id: ""
	I0704 00:12:38.158468   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.158480   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:38.158489   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:38.158567   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:38.198119   62670 cri.go:89] found id: ""
	I0704 00:12:38.198150   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.198162   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:38.198172   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:38.198253   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:38.235757   62670 cri.go:89] found id: ""
	I0704 00:12:38.235784   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.235792   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:38.235800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:38.235811   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:12:35.324339   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:37.325301   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:37.162347   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:39.162620   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:39.246064   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:41.745179   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	W0704 00:12:38.329002   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:38.329026   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:38.329041   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:38.414451   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:38.414492   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:38.461058   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:38.461089   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:38.518574   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:38.518609   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:41.051653   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:41.066287   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:41.066364   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:41.106709   62670 cri.go:89] found id: ""
	I0704 00:12:41.106733   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.106747   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:41.106753   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:41.106815   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:41.144371   62670 cri.go:89] found id: ""
	I0704 00:12:41.144399   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.144410   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:41.144417   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:41.144491   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:41.183690   62670 cri.go:89] found id: ""
	I0704 00:12:41.183717   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.183727   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:41.183734   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:41.183818   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:41.219744   62670 cri.go:89] found id: ""
	I0704 00:12:41.219767   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.219777   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:41.219790   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:41.219850   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:41.259070   62670 cri.go:89] found id: ""
	I0704 00:12:41.259091   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.259098   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:41.259103   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:41.259162   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:41.297956   62670 cri.go:89] found id: ""
	I0704 00:12:41.297987   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.297995   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:41.298001   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:41.298061   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:41.335521   62670 cri.go:89] found id: ""
	I0704 00:12:41.335599   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.335616   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:41.335624   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:41.335688   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:41.374777   62670 cri.go:89] found id: ""
	I0704 00:12:41.374817   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.374838   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:41.374848   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:41.374868   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:41.426282   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:41.426324   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:41.441309   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:41.441342   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:41.518350   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:41.518373   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:41.518395   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:41.596426   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:41.596467   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
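	Each pass also runs kubectl describe nodes against the embedded kubeconfig and fails with "connection refused" on localhost:8443, meaning nothing is serving the API on that port, which is consistent with the empty kube-apiserver listings above. A quick manual triage on the node, using only commands that already appear in this log plus a plain curl (illustrative, not part of the harness), might look like:

	# Run on the node; sketch only.
	sudo crictl ps -a --quiet --name=kube-apiserver              # empty output: no apiserver container exists
	curl -k https://localhost:8443/healthz                       # connection refused: nothing listening on the apiserver port
	sudo journalctl -u kubelet -n 400 | grep -iE 'error|fail'    # look for why the kubelet never started the static pods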
	I0704 00:12:39.824742   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:42.323920   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:41.162829   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:43.662181   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:45.662641   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:43.745586   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:45.747024   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:44.139291   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:44.152300   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:44.152370   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:44.194350   62670 cri.go:89] found id: ""
	I0704 00:12:44.194380   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.194394   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:44.194401   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:44.194470   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:44.229630   62670 cri.go:89] found id: ""
	I0704 00:12:44.229657   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.229666   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:44.229671   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:44.229724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:44.271235   62670 cri.go:89] found id: ""
	I0704 00:12:44.271260   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.271269   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:44.271276   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:44.271342   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:44.336464   62670 cri.go:89] found id: ""
	I0704 00:12:44.336499   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.336509   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:44.336523   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:44.336579   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:44.379482   62670 cri.go:89] found id: ""
	I0704 00:12:44.379513   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.379524   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:44.379530   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:44.379594   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:44.417234   62670 cri.go:89] found id: ""
	I0704 00:12:44.417267   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.417278   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:44.417285   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:44.417345   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:44.454222   62670 cri.go:89] found id: ""
	I0704 00:12:44.454249   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.454259   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:44.454266   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:44.454328   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:44.491999   62670 cri.go:89] found id: ""
	I0704 00:12:44.492028   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.492039   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:44.492050   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:44.492065   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:44.543261   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:44.543298   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:44.558348   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:44.558378   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:44.640786   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:44.640805   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:44.640820   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:44.727870   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:44.727945   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:47.274461   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:47.288930   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:47.288995   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:47.329153   62670 cri.go:89] found id: ""
	I0704 00:12:47.329178   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.329189   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:47.329195   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:47.329262   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:47.366786   62670 cri.go:89] found id: ""
	I0704 00:12:47.366814   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.366825   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:47.366832   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:47.366900   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:47.404048   62670 cri.go:89] found id: ""
	I0704 00:12:47.404089   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.404098   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:47.404106   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:47.404170   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:47.440298   62670 cri.go:89] found id: ""
	I0704 00:12:47.440329   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.440341   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:47.440348   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:47.440408   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:47.478297   62670 cri.go:89] found id: ""
	I0704 00:12:47.478329   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.478340   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:47.478347   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:47.478406   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:47.514114   62670 cri.go:89] found id: ""
	I0704 00:12:47.514143   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.514152   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:47.514158   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:47.514221   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:47.558404   62670 cri.go:89] found id: ""
	I0704 00:12:47.558437   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.558449   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:47.558456   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:47.558519   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:47.602782   62670 cri.go:89] found id: ""
	I0704 00:12:47.602824   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.602834   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:47.602845   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:47.602860   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:47.655514   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:47.655556   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:47.672807   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:47.672844   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:47.763562   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:47.763583   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:47.763596   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:47.852498   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:47.852542   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:44.822923   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:46.824707   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:48.162671   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:50.664606   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:48.247464   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:50.747846   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:50.400046   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:50.413559   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:50.413621   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:50.450898   62670 cri.go:89] found id: ""
	I0704 00:12:50.450927   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.450938   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:50.450948   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:50.451002   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:50.487786   62670 cri.go:89] found id: ""
	I0704 00:12:50.487822   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.487832   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:50.487838   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:50.487923   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:50.525298   62670 cri.go:89] found id: ""
	I0704 00:12:50.525324   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.525334   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:50.525343   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:50.525409   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:50.563742   62670 cri.go:89] found id: ""
	I0704 00:12:50.563767   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.563775   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:50.563782   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:50.563839   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:50.600977   62670 cri.go:89] found id: ""
	I0704 00:12:50.601011   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.601023   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:50.601031   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:50.601105   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:50.637489   62670 cri.go:89] found id: ""
	I0704 00:12:50.637517   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.637527   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:50.637534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:50.637594   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:50.684342   62670 cri.go:89] found id: ""
	I0704 00:12:50.684371   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.684381   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:50.684389   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:50.684572   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:50.743111   62670 cri.go:89] found id: ""
	I0704 00:12:50.743143   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.743153   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:50.743163   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:50.743177   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:50.806436   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:50.806482   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:50.823559   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:50.823594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:50.892600   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:50.892629   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:50.892642   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:50.969817   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:50.969851   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:49.323144   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:51.822264   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.824409   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.161649   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:55.163049   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.245597   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:55.746766   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.512548   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:53.525835   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:53.525903   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:53.563303   62670 cri.go:89] found id: ""
	I0704 00:12:53.563335   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.563349   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:53.563356   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:53.563410   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:53.602687   62670 cri.go:89] found id: ""
	I0704 00:12:53.602720   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.602731   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:53.602739   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:53.602797   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:53.638109   62670 cri.go:89] found id: ""
	I0704 00:12:53.638141   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.638150   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:53.638158   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:53.638220   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:53.678073   62670 cri.go:89] found id: ""
	I0704 00:12:53.678096   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.678106   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:53.678114   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:53.678172   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:53.713995   62670 cri.go:89] found id: ""
	I0704 00:12:53.714028   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.714041   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:53.714049   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:53.714108   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:53.751761   62670 cri.go:89] found id: ""
	I0704 00:12:53.751783   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.751790   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:53.751796   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:53.751856   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:53.792662   62670 cri.go:89] found id: ""
	I0704 00:12:53.792692   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.792703   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:53.792710   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:53.792769   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:53.833970   62670 cri.go:89] found id: ""
	I0704 00:12:53.833999   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.834010   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:53.834021   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:53.834040   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:53.918330   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:53.918363   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:53.918380   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:53.999491   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:53.999524   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:54.042415   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:54.042451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:54.096427   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:54.096466   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:56.611252   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:56.624364   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:56.624427   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:56.662953   62670 cri.go:89] found id: ""
	I0704 00:12:56.662971   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.662978   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:56.662983   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:56.663035   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:56.700093   62670 cri.go:89] found id: ""
	I0704 00:12:56.700125   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.700136   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:56.700144   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:56.700209   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:56.737358   62670 cri.go:89] found id: ""
	I0704 00:12:56.737395   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.737405   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:56.737412   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:56.737479   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:56.772625   62670 cri.go:89] found id: ""
	I0704 00:12:56.772652   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.772663   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:56.772671   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:56.772731   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:56.810693   62670 cri.go:89] found id: ""
	I0704 00:12:56.810722   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.810731   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:56.810736   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:56.810787   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:56.851646   62670 cri.go:89] found id: ""
	I0704 00:12:56.851671   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.851678   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:56.851684   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:56.851733   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:56.894196   62670 cri.go:89] found id: ""
	I0704 00:12:56.894230   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.894240   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:56.894246   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:56.894302   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:56.935029   62670 cri.go:89] found id: ""
	I0704 00:12:56.935054   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.935062   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:56.935072   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:56.935088   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:57.017630   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:57.017658   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:57.017675   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:57.103861   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:57.103916   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:57.147466   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:57.147497   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:57.199798   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:57.199836   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:56.325738   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:58.822885   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:57.166306   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:59.663207   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:58.245373   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:00.246495   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:59.716709   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:59.731778   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:59.731849   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:59.770210   62670 cri.go:89] found id: ""
	I0704 00:12:59.770241   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.770249   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:59.770259   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:59.770319   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:59.816446   62670 cri.go:89] found id: ""
	I0704 00:12:59.816473   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.816483   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:59.816490   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:59.816570   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:59.854879   62670 cri.go:89] found id: ""
	I0704 00:12:59.854910   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.854921   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:59.854928   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:59.854978   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:59.891370   62670 cri.go:89] found id: ""
	I0704 00:12:59.891394   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.891401   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:59.891407   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:59.891467   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:59.926067   62670 cri.go:89] found id: ""
	I0704 00:12:59.926089   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.926096   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:59.926102   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:59.926158   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:59.961646   62670 cri.go:89] found id: ""
	I0704 00:12:59.961674   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.961685   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:59.961692   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:59.961770   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:59.998290   62670 cri.go:89] found id: ""
	I0704 00:12:59.998322   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.998333   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:59.998342   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:59.998408   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:00.035410   62670 cri.go:89] found id: ""
	I0704 00:13:00.035438   62670 logs.go:276] 0 containers: []
	W0704 00:13:00.035446   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:00.035455   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:00.035471   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:00.090614   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:00.090655   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:00.105228   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:00.105265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:00.188082   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:00.188121   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:00.188139   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:00.275656   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:00.275702   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:02.823447   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:02.837684   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:02.837745   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:02.875275   62670 cri.go:89] found id: ""
	I0704 00:13:02.875314   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.875324   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:02.875339   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:02.875399   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:02.910681   62670 cri.go:89] found id: ""
	I0704 00:13:02.910715   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.910727   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:02.910735   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:02.910797   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:02.948937   62670 cri.go:89] found id: ""
	I0704 00:13:02.948963   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.948972   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:02.948979   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:02.949039   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:02.984232   62670 cri.go:89] found id: ""
	I0704 00:13:02.984259   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.984267   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:02.984271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:02.984321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:03.021493   62670 cri.go:89] found id: ""
	I0704 00:13:03.021517   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.021525   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:03.021534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:03.021583   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:03.058829   62670 cri.go:89] found id: ""
	I0704 00:13:03.058860   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.058870   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:03.058877   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:03.058944   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:03.104195   62670 cri.go:89] found id: ""
	I0704 00:13:03.104225   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.104234   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:03.104242   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:03.104303   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:03.140913   62670 cri.go:89] found id: ""
	I0704 00:13:03.140941   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.140951   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:03.140961   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:03.140976   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:03.194901   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:03.194945   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:03.209366   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:03.209395   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:13:01.322711   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:03.323610   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:02.161800   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:04.162195   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:02.746479   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:05.245132   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:07.245877   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	W0704 00:13:03.292892   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:03.292916   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:03.292934   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:03.369764   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:03.369800   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:05.917514   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:05.931529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:05.931592   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:05.976164   62670 cri.go:89] found id: ""
	I0704 00:13:05.976186   62670 logs.go:276] 0 containers: []
	W0704 00:13:05.976193   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:05.976199   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:05.976258   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:06.013568   62670 cri.go:89] found id: ""
	I0704 00:13:06.013593   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.013602   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:06.013609   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:06.013678   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:06.050848   62670 cri.go:89] found id: ""
	I0704 00:13:06.050886   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.050894   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:06.050900   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:06.050958   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:06.090919   62670 cri.go:89] found id: ""
	I0704 00:13:06.090945   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.090956   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:06.090967   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:06.091016   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:06.129210   62670 cri.go:89] found id: ""
	I0704 00:13:06.129237   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.129246   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:06.129252   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:06.129304   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:06.166777   62670 cri.go:89] found id: ""
	I0704 00:13:06.166801   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.166809   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:06.166817   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:06.166878   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:06.204900   62670 cri.go:89] found id: ""
	I0704 00:13:06.204929   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.204940   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:06.204947   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:06.205008   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:06.244196   62670 cri.go:89] found id: ""
	I0704 00:13:06.244274   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.244291   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:06.244301   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:06.244317   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:06.258834   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:06.258873   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:06.339126   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:06.339151   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:06.339165   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:06.416220   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:06.416265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:06.458188   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:06.458221   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:05.824313   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:08.323361   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:06.162328   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:08.666333   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:09.248287   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:11.746215   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:09.014816   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:09.028957   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:09.029021   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:09.072427   62670 cri.go:89] found id: ""
	I0704 00:13:09.072455   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.072465   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:09.072472   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:09.072529   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:09.109630   62670 cri.go:89] found id: ""
	I0704 00:13:09.109660   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.109669   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:09.109675   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:09.109724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:09.152873   62670 cri.go:89] found id: ""
	I0704 00:13:09.152901   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.152911   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:09.152918   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:09.152976   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:09.189390   62670 cri.go:89] found id: ""
	I0704 00:13:09.189421   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.189431   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:09.189446   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:09.189515   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:09.227335   62670 cri.go:89] found id: ""
	I0704 00:13:09.227364   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.227375   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:09.227382   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:09.227444   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:09.269157   62670 cri.go:89] found id: ""
	I0704 00:13:09.269189   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.269201   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:09.269208   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:09.269259   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:09.317222   62670 cri.go:89] found id: ""
	I0704 00:13:09.317249   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.317257   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:09.317263   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:09.317324   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:09.355578   62670 cri.go:89] found id: ""
	I0704 00:13:09.355610   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.355618   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:09.355626   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:09.355637   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:09.396279   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:09.396316   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:09.451358   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:09.451398   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:09.466565   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:09.466599   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:09.545001   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:09.545043   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:09.545066   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:12.124211   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:12.139131   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:12.139229   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:12.178690   62670 cri.go:89] found id: ""
	I0704 00:13:12.178719   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.178726   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:12.178732   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:12.178783   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:12.215470   62670 cri.go:89] found id: ""
	I0704 00:13:12.215511   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.215524   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:12.215533   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:12.215620   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:12.256615   62670 cri.go:89] found id: ""
	I0704 00:13:12.256667   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.256682   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:12.256688   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:12.256740   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:12.298606   62670 cri.go:89] found id: ""
	I0704 00:13:12.298631   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.298643   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:12.298650   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:12.298730   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:12.338152   62670 cri.go:89] found id: ""
	I0704 00:13:12.338180   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.338192   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:12.338199   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:12.338260   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:12.377003   62670 cri.go:89] found id: ""
	I0704 00:13:12.377029   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.377040   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:12.377046   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:12.377095   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:12.412239   62670 cri.go:89] found id: ""
	I0704 00:13:12.412268   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.412278   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:12.412285   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:12.412361   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:12.451054   62670 cri.go:89] found id: ""
	I0704 00:13:12.451079   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.451086   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:12.451094   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:12.451111   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:12.506178   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:12.506216   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:12.520563   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:12.520594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:12.594417   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:12.594439   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:12.594455   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:12.671131   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:12.671179   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:10.323629   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:12.823056   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:11.161399   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:13.162943   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:15.661962   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:13.749962   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:16.247931   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:15.225840   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:15.239346   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:15.239420   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:15.276618   62670 cri.go:89] found id: ""
	I0704 00:13:15.276649   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.276661   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:15.276668   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:15.276751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:15.312585   62670 cri.go:89] found id: ""
	I0704 00:13:15.312615   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.312625   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:15.312632   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:15.312693   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:15.351354   62670 cri.go:89] found id: ""
	I0704 00:13:15.351382   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.351392   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:15.351399   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:15.351457   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:15.388660   62670 cri.go:89] found id: ""
	I0704 00:13:15.388690   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.388701   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:15.388708   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:15.388769   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:15.427524   62670 cri.go:89] found id: ""
	I0704 00:13:15.427553   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.427564   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:15.427572   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:15.427636   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:15.463703   62670 cri.go:89] found id: ""
	I0704 00:13:15.463737   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.463752   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:15.463761   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:15.463825   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:15.498640   62670 cri.go:89] found id: ""
	I0704 00:13:15.498664   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.498672   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:15.498676   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:15.498727   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:15.534655   62670 cri.go:89] found id: ""
	I0704 00:13:15.534679   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.534690   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:15.534700   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:15.534715   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:15.586051   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:15.586083   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:15.600930   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:15.600958   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:15.670393   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:15.670420   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:15.670435   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:15.749644   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:15.749678   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:15.324591   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:17.822616   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:17.662630   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:20.162230   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:18.746045   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:20.746946   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:18.298689   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:18.312408   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:18.312475   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:18.353509   62670 cri.go:89] found id: ""
	I0704 00:13:18.353538   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.353549   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:18.353557   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:18.353642   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:18.394463   62670 cri.go:89] found id: ""
	I0704 00:13:18.394486   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.394493   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:18.394498   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:18.394550   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:18.433254   62670 cri.go:89] found id: ""
	I0704 00:13:18.433288   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.433297   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:18.433303   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:18.433350   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:18.473369   62670 cri.go:89] found id: ""
	I0704 00:13:18.473395   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.473404   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:18.473414   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:18.473464   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:18.513401   62670 cri.go:89] found id: ""
	I0704 00:13:18.513436   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.513444   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:18.513450   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:18.513499   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:18.552462   62670 cri.go:89] found id: ""
	I0704 00:13:18.552493   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.552502   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:18.552511   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:18.552569   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:18.591368   62670 cri.go:89] found id: ""
	I0704 00:13:18.591389   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.591398   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:18.591406   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:18.591471   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:18.630381   62670 cri.go:89] found id: ""
	I0704 00:13:18.630413   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.630424   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:18.630435   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:18.630451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:18.684868   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:18.684902   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:18.700897   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:18.700921   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:18.794507   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:18.794524   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:18.794535   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:18.879415   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:18.879457   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:21.429432   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:21.443906   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:21.443978   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:21.482487   62670 cri.go:89] found id: ""
	I0704 00:13:21.482516   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.482528   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:21.482535   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:21.482583   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:21.519170   62670 cri.go:89] found id: ""
	I0704 00:13:21.519206   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.519214   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:21.519219   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:21.519265   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:21.558340   62670 cri.go:89] found id: ""
	I0704 00:13:21.558367   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.558390   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:21.558397   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:21.558465   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:21.595347   62670 cri.go:89] found id: ""
	I0704 00:13:21.595372   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.595382   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:21.595390   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:21.595464   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:21.634524   62670 cri.go:89] found id: ""
	I0704 00:13:21.634547   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.634555   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:21.634560   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:21.634622   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:21.672529   62670 cri.go:89] found id: ""
	I0704 00:13:21.672558   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.672566   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:21.672571   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:21.672617   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:21.711114   62670 cri.go:89] found id: ""
	I0704 00:13:21.711145   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.711156   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:21.711163   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:21.711248   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:21.747087   62670 cri.go:89] found id: ""
	I0704 00:13:21.747126   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.747135   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:21.747145   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:21.747162   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:21.832897   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:21.832919   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:21.832935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:21.915969   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:21.916008   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:21.957922   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:21.957950   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:22.009881   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:22.009925   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:19.823109   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:22.322313   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:22.163190   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:24.664612   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:22.747918   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:25.245707   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:24.526106   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:24.548431   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:24.548493   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:24.582887   62670 cri.go:89] found id: ""
	I0704 00:13:24.582925   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.582935   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:24.582940   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:24.582992   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:24.621339   62670 cri.go:89] found id: ""
	I0704 00:13:24.621365   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.621375   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:24.621380   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:24.621433   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:24.658124   62670 cri.go:89] found id: ""
	I0704 00:13:24.658152   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.658163   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:24.658170   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:24.658239   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:24.697509   62670 cri.go:89] found id: ""
	I0704 00:13:24.697539   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.697546   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:24.697552   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:24.697599   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:24.734523   62670 cri.go:89] found id: ""
	I0704 00:13:24.734547   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.734554   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:24.734560   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:24.734608   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:24.773351   62670 cri.go:89] found id: ""
	I0704 00:13:24.773375   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.773383   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:24.773389   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:24.773439   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:24.810855   62670 cri.go:89] found id: ""
	I0704 00:13:24.810888   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.810898   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:24.810905   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:24.810962   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:24.849989   62670 cri.go:89] found id: ""
	I0704 00:13:24.850017   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.850027   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:24.850039   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:24.850053   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:24.904308   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:24.904344   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:24.920143   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:24.920234   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:24.995138   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:24.995163   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:24.995177   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:25.070407   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:25.070449   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:27.611749   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:27.625292   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:27.625349   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:27.663239   62670 cri.go:89] found id: ""
	I0704 00:13:27.663263   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.663274   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:27.663281   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:27.663337   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:27.704354   62670 cri.go:89] found id: ""
	I0704 00:13:27.704378   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.704392   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:27.704399   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:27.704473   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:27.742585   62670 cri.go:89] found id: ""
	I0704 00:13:27.742619   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.742630   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:27.742637   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:27.742695   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:27.791650   62670 cri.go:89] found id: ""
	I0704 00:13:27.791678   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.791686   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:27.791691   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:27.791751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:27.832724   62670 cri.go:89] found id: ""
	I0704 00:13:27.832757   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.832770   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:27.832778   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:27.832865   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:27.875054   62670 cri.go:89] found id: ""
	I0704 00:13:27.875081   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.875089   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:27.875095   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:27.875142   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:27.909819   62670 cri.go:89] found id: ""
	I0704 00:13:27.909844   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.909851   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:27.909856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:27.909903   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:27.944882   62670 cri.go:89] found id: ""
	I0704 00:13:27.944907   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.944916   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:27.944923   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:27.944936   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:28.004233   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:28.004271   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:28.020800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:28.020834   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:28.096186   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:28.096213   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:28.096231   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:28.178611   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:28.178648   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:24.322656   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:26.323972   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:28.821944   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:27.161806   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:29.661580   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:27.748284   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:30.246840   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:30.729354   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:30.744298   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:30.744361   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:30.783053   62670 cri.go:89] found id: ""
	I0704 00:13:30.783081   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.783089   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:30.783095   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:30.783151   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:30.820728   62670 cri.go:89] found id: ""
	I0704 00:13:30.820756   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.820765   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:30.820770   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:30.820834   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:30.858188   62670 cri.go:89] found id: ""
	I0704 00:13:30.858221   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.858234   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:30.858242   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:30.858307   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:30.899024   62670 cri.go:89] found id: ""
	I0704 00:13:30.899049   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.899056   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:30.899062   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:30.899109   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:30.942431   62670 cri.go:89] found id: ""
	I0704 00:13:30.942461   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.942471   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:30.942479   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:30.942534   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:30.995371   62670 cri.go:89] found id: ""
	I0704 00:13:30.995402   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.995417   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:30.995425   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:30.995486   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:31.043485   62670 cri.go:89] found id: ""
	I0704 00:13:31.043516   62670 logs.go:276] 0 containers: []
	W0704 00:13:31.043524   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:31.043529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:31.043576   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:31.082408   62670 cri.go:89] found id: ""
	I0704 00:13:31.082440   62670 logs.go:276] 0 containers: []
	W0704 00:13:31.082451   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:31.082463   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:31.082477   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:31.096800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:31.096824   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:31.169116   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:31.169142   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:31.169168   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:31.250199   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:31.250230   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:31.293706   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:31.293737   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:30.822968   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:33.322607   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:31.661811   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:33.661872   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:35.662906   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:32.746786   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:35.246989   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:33.845361   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:33.859495   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:33.859586   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:33.900578   62670 cri.go:89] found id: ""
	I0704 00:13:33.900608   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.900616   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:33.900621   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:33.900668   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:33.934659   62670 cri.go:89] found id: ""
	I0704 00:13:33.934681   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.934688   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:33.934699   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:33.934745   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:33.977141   62670 cri.go:89] found id: ""
	I0704 00:13:33.977166   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.977174   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:33.977179   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:33.977230   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:34.013515   62670 cri.go:89] found id: ""
	I0704 00:13:34.013540   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.013548   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:34.013553   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:34.013600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:34.059663   62670 cri.go:89] found id: ""
	I0704 00:13:34.059690   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.059698   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:34.059703   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:34.059765   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:34.094002   62670 cri.go:89] found id: ""
	I0704 00:13:34.094030   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.094038   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:34.094044   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:34.094090   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:34.130278   62670 cri.go:89] found id: ""
	I0704 00:13:34.130310   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.130322   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:34.130330   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:34.130401   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:34.173531   62670 cri.go:89] found id: ""
	I0704 00:13:34.173557   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.173563   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:34.173570   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:34.173582   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:34.229273   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:34.229334   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:34.247043   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:34.247073   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:34.322892   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:34.322920   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:34.322935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:34.409230   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:34.409271   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:36.950627   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:36.969997   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:36.970063   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:37.027934   62670 cri.go:89] found id: ""
	I0704 00:13:37.027964   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.027975   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:37.027982   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:37.028069   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:37.067668   62670 cri.go:89] found id: ""
	I0704 00:13:37.067696   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.067706   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:37.067713   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:37.067774   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:37.104762   62670 cri.go:89] found id: ""
	I0704 00:13:37.104798   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.104806   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:37.104812   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:37.104882   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:37.143887   62670 cri.go:89] found id: ""
	I0704 00:13:37.143913   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.143921   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:37.143936   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:37.143999   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:37.182605   62670 cri.go:89] found id: ""
	I0704 00:13:37.182629   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.182636   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:37.182641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:37.182697   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:37.219884   62670 cri.go:89] found id: ""
	I0704 00:13:37.219914   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.219924   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:37.219931   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:37.219996   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:37.259122   62670 cri.go:89] found id: ""
	I0704 00:13:37.259146   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.259154   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:37.259159   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:37.259205   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:37.296218   62670 cri.go:89] found id: ""
	I0704 00:13:37.296255   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.296262   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:37.296270   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:37.296282   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:37.349495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:37.349540   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:37.364224   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:37.364255   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:37.437604   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:37.437627   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:37.437644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:37.524096   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:37.524150   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:35.823323   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:38.323653   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:38.164076   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:40.662318   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:37.745470   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:39.746119   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:41.747887   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:40.067394   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:40.081728   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:40.081787   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:40.119102   62670 cri.go:89] found id: ""
	I0704 00:13:40.119129   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.119137   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:40.119142   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:40.119195   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:40.161432   62670 cri.go:89] found id: ""
	I0704 00:13:40.161468   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.161477   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:40.161483   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:40.161542   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:40.196487   62670 cri.go:89] found id: ""
	I0704 00:13:40.196526   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.196534   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:40.196540   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:40.196591   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:40.232218   62670 cri.go:89] found id: ""
	I0704 00:13:40.232245   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.232253   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:40.232259   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:40.232306   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:40.272962   62670 cri.go:89] found id: ""
	I0704 00:13:40.272995   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.273007   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:40.273016   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:40.273079   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:40.311622   62670 cri.go:89] found id: ""
	I0704 00:13:40.311651   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.311662   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:40.311671   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:40.311737   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:40.353486   62670 cri.go:89] found id: ""
	I0704 00:13:40.353516   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.353524   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:40.353529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:40.353576   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:40.391269   62670 cri.go:89] found id: ""
	I0704 00:13:40.391299   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.391308   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:40.391318   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:40.391330   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:40.445011   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:40.445048   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:40.458982   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:40.459010   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:40.533102   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:40.533127   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:40.533140   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:40.618189   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:40.618228   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:43.162352   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:43.177336   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:43.177419   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:43.221099   62670 cri.go:89] found id: ""
	I0704 00:13:43.221127   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.221137   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:43.221144   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:43.221211   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:40.324554   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:42.822608   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:42.662723   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:45.162037   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:44.245991   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:46.746635   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:43.268528   62670 cri.go:89] found id: ""
	I0704 00:13:43.268557   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.268568   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:43.268575   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:43.268638   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:43.304343   62670 cri.go:89] found id: ""
	I0704 00:13:43.304373   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.304384   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:43.304391   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:43.304459   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:43.346128   62670 cri.go:89] found id: ""
	I0704 00:13:43.346163   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.346179   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:43.346187   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:43.346251   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:43.392622   62670 cri.go:89] found id: ""
	I0704 00:13:43.392652   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.392662   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:43.392673   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:43.392764   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:43.438725   62670 cri.go:89] found id: ""
	I0704 00:13:43.438751   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.438760   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:43.438766   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:43.438812   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:43.480356   62670 cri.go:89] found id: ""
	I0704 00:13:43.480378   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.480386   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:43.480391   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:43.480441   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:43.516551   62670 cri.go:89] found id: ""
	I0704 00:13:43.516576   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.516583   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:43.516591   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:43.516606   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:43.567568   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:43.567604   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:43.583140   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:43.583173   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:43.658841   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:43.658870   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:43.658885   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:43.737379   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:43.737419   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:46.281048   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:46.295088   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:46.295158   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:46.333107   62670 cri.go:89] found id: ""
	I0704 00:13:46.333135   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.333168   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:46.333177   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:46.333263   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:46.376375   62670 cri.go:89] found id: ""
	I0704 00:13:46.376405   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.376415   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:46.376423   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:46.376486   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:46.410809   62670 cri.go:89] found id: ""
	I0704 00:13:46.410838   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.410848   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:46.410855   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:46.410911   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:46.453114   62670 cri.go:89] found id: ""
	I0704 00:13:46.453143   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.453156   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:46.453164   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:46.453229   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:46.491218   62670 cri.go:89] found id: ""
	I0704 00:13:46.491246   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.491255   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:46.491261   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:46.491320   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:46.528669   62670 cri.go:89] found id: ""
	I0704 00:13:46.528695   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.528706   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:46.528713   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:46.528777   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:46.564289   62670 cri.go:89] found id: ""
	I0704 00:13:46.564317   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.564327   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:46.564333   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:46.564384   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:46.600821   62670 cri.go:89] found id: ""
	I0704 00:13:46.600854   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.600864   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:46.600875   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:46.600888   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:46.653816   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:46.653850   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:46.668899   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:46.668927   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:46.751414   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:46.751434   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:46.751455   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:46.831455   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:46.831489   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:44.823478   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:47.323726   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:47.663375   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:50.162358   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:49.245272   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:51.745945   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:49.378856   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:49.393930   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:49.393988   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:49.435332   62670 cri.go:89] found id: ""
	I0704 00:13:49.435355   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.435362   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:49.435368   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:49.435415   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:49.476780   62670 cri.go:89] found id: ""
	I0704 00:13:49.476807   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.476815   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:49.476820   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:49.476868   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:49.519347   62670 cri.go:89] found id: ""
	I0704 00:13:49.519379   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.519389   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:49.519396   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:49.519522   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:49.557125   62670 cri.go:89] found id: ""
	I0704 00:13:49.557150   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.557159   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:49.557166   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:49.557225   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:49.592843   62670 cri.go:89] found id: ""
	I0704 00:13:49.592883   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.592894   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:49.592901   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:49.592966   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:49.629542   62670 cri.go:89] found id: ""
	I0704 00:13:49.629565   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.629572   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:49.629578   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:49.629630   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:49.667805   62670 cri.go:89] found id: ""
	I0704 00:13:49.667833   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.667844   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:49.667851   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:49.667928   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:49.704446   62670 cri.go:89] found id: ""
	I0704 00:13:49.704472   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.704480   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:49.704494   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:49.704506   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:49.718379   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:49.718403   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:49.791293   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:49.791314   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:49.791329   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:49.870370   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:49.870408   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:49.910508   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:49.910545   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:52.463614   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:52.478642   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:52.478714   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:52.519490   62670 cri.go:89] found id: ""
	I0704 00:13:52.519519   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.519529   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:52.519535   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:52.519686   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:52.561591   62670 cri.go:89] found id: ""
	I0704 00:13:52.561622   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.561632   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:52.561639   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:52.561713   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:52.599169   62670 cri.go:89] found id: ""
	I0704 00:13:52.599196   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.599206   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:52.599212   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:52.599270   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:52.636778   62670 cri.go:89] found id: ""
	I0704 00:13:52.636811   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.636821   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:52.636828   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:52.636893   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:52.675929   62670 cri.go:89] found id: ""
	I0704 00:13:52.675965   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.675977   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:52.675985   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:52.676081   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:52.713425   62670 cri.go:89] found id: ""
	I0704 00:13:52.713455   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.713466   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:52.713474   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:52.713541   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:52.750242   62670 cri.go:89] found id: ""
	I0704 00:13:52.750267   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.750278   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:52.750286   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:52.750342   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:52.793247   62670 cri.go:89] found id: ""
	I0704 00:13:52.793277   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.793288   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:52.793298   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:52.793315   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:52.807818   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:52.807970   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:52.886856   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:52.886883   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:52.886903   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:52.973510   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:52.973551   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:53.021185   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:53.021213   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:49.825304   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:52.322850   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:52.662484   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:54.662645   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:54.246942   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:56.745800   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:55.576364   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:55.590796   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:55.590858   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:55.628753   62670 cri.go:89] found id: ""
	I0704 00:13:55.628783   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.628793   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:55.628809   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:55.628870   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:55.667344   62670 cri.go:89] found id: ""
	I0704 00:13:55.667398   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.667411   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:55.667426   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:55.667496   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:55.705826   62670 cri.go:89] found id: ""
	I0704 00:13:55.705859   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.705870   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:55.705878   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:55.705942   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:55.743204   62670 cri.go:89] found id: ""
	I0704 00:13:55.743231   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.743238   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:55.743244   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:55.743304   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:55.784945   62670 cri.go:89] found id: ""
	I0704 00:13:55.784978   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.784987   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:55.784993   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:55.785044   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:55.825266   62670 cri.go:89] found id: ""
	I0704 00:13:55.825293   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.825304   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:55.825322   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:55.825385   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:55.862235   62670 cri.go:89] found id: ""
	I0704 00:13:55.862269   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.862276   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:55.862282   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:55.862337   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:55.901698   62670 cri.go:89] found id: ""
	I0704 00:13:55.901726   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.901736   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:55.901747   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:55.901762   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:55.955322   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:55.955361   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:55.973650   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:55.973689   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:56.049600   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:56.049624   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:56.049640   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:56.133690   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:56.133731   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:54.323716   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:56.324427   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:58.823837   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:56.663246   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:59.161652   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:59.249339   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:01.747759   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:58.678014   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:58.692780   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:58.692846   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:58.730628   62670 cri.go:89] found id: ""
	I0704 00:13:58.730654   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.730664   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:58.730671   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:58.730732   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:58.772761   62670 cri.go:89] found id: ""
	I0704 00:13:58.772789   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.772800   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:58.772806   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:58.772871   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:58.809591   62670 cri.go:89] found id: ""
	I0704 00:13:58.809623   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.809637   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:58.809644   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:58.809708   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:58.848596   62670 cri.go:89] found id: ""
	I0704 00:13:58.848627   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.848638   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:58.848646   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:58.848705   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:58.888285   62670 cri.go:89] found id: ""
	I0704 00:13:58.888311   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.888318   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:58.888323   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:58.888373   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:58.924042   62670 cri.go:89] found id: ""
	I0704 00:13:58.924065   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.924073   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:58.924079   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:58.924132   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:58.963473   62670 cri.go:89] found id: ""
	I0704 00:13:58.963500   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.963510   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:58.963516   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:58.963581   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:58.998757   62670 cri.go:89] found id: ""
	I0704 00:13:58.998788   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.998798   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:58.998808   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:58.998822   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:59.013844   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:59.013871   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:59.085847   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:59.085869   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:59.085882   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:59.174056   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:59.174087   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:59.219984   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:59.220011   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:01.774436   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:01.790044   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:01.790103   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:01.830337   62670 cri.go:89] found id: ""
	I0704 00:14:01.830366   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.830376   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:01.830383   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:01.830452   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:01.866704   62670 cri.go:89] found id: ""
	I0704 00:14:01.866731   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.866740   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:01.866746   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:01.866796   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:01.906702   62670 cri.go:89] found id: ""
	I0704 00:14:01.906737   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.906748   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:01.906756   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:01.906812   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:01.943348   62670 cri.go:89] found id: ""
	I0704 00:14:01.943381   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.943392   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:01.943400   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:01.943461   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:01.984096   62670 cri.go:89] found id: ""
	I0704 00:14:01.984123   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.984131   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:01.984136   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:01.984182   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:02.021618   62670 cri.go:89] found id: ""
	I0704 00:14:02.021649   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.021659   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:02.021666   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:02.021726   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:02.058976   62670 cri.go:89] found id: ""
	I0704 00:14:02.059000   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.059008   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:02.059013   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:02.059064   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:02.097222   62670 cri.go:89] found id: ""
	I0704 00:14:02.097251   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.097258   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:02.097278   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:02.097302   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:02.183349   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:02.183391   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:02.226898   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:02.226928   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:02.286978   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:02.287016   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:02.301361   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:02.301393   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:02.375663   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:01.322516   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:03.822514   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:01.662003   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:03.665021   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:04.245713   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:06.246308   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:04.876515   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:04.891254   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:04.891324   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:04.931465   62670 cri.go:89] found id: ""
	I0704 00:14:04.931488   62670 logs.go:276] 0 containers: []
	W0704 00:14:04.931496   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:04.931501   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:04.931549   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:04.969027   62670 cri.go:89] found id: ""
	I0704 00:14:04.969055   62670 logs.go:276] 0 containers: []
	W0704 00:14:04.969063   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:04.969068   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:04.969122   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:05.006380   62670 cri.go:89] found id: ""
	I0704 00:14:05.006407   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.006423   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:05.006430   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:05.006494   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:05.043050   62670 cri.go:89] found id: ""
	I0704 00:14:05.043090   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.043105   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:05.043113   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:05.043195   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:05.081549   62670 cri.go:89] found id: ""
	I0704 00:14:05.081575   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.081583   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:05.081588   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:05.081664   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:05.126673   62670 cri.go:89] found id: ""
	I0704 00:14:05.126693   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.126700   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:05.126706   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:05.126751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:05.166832   62670 cri.go:89] found id: ""
	I0704 00:14:05.166856   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.166864   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:05.166872   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:05.166920   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:05.205906   62670 cri.go:89] found id: ""
	I0704 00:14:05.205934   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.205946   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:05.205957   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:05.205973   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:05.260955   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:05.260998   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:05.295937   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:05.295965   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:05.383161   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:05.383188   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:05.383202   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:05.465055   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:05.465100   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:08.007745   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:08.021065   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:08.021134   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:08.061808   62670 cri.go:89] found id: ""
	I0704 00:14:08.061838   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.061848   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:08.061854   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:08.061914   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:08.100542   62670 cri.go:89] found id: ""
	I0704 00:14:08.100573   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.100584   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:08.100592   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:08.100657   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:08.137335   62670 cri.go:89] found id: ""
	I0704 00:14:08.137369   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.137379   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:08.137385   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:08.137455   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:08.177087   62670 cri.go:89] found id: ""
	I0704 00:14:08.177116   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.177124   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:08.177129   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:08.177191   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:08.212652   62670 cri.go:89] found id: ""
	I0704 00:14:08.212686   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.212695   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:08.212701   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:08.212751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:08.247717   62670 cri.go:89] found id: ""
	I0704 00:14:08.247737   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.247745   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:08.247750   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:08.247805   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:05.824730   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.323006   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:06.160967   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.162407   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:10.163649   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.247565   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:10.745585   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.285525   62670 cri.go:89] found id: ""
	I0704 00:14:08.285556   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.285568   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:08.285576   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:08.285637   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:08.325978   62670 cri.go:89] found id: ""
	I0704 00:14:08.326007   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.326017   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:08.326027   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:08.326042   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:08.382407   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:08.382440   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:08.397945   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:08.397979   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:08.468650   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:08.468676   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:08.468691   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:08.543581   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:08.543615   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:11.085683   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:11.102003   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:11.102093   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:11.142561   62670 cri.go:89] found id: ""
	I0704 00:14:11.142589   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.142597   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:11.142602   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:11.142671   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:11.180087   62670 cri.go:89] found id: ""
	I0704 00:14:11.180110   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.180118   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:11.180123   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:11.180202   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:11.220123   62670 cri.go:89] found id: ""
	I0704 00:14:11.220147   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.220173   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:11.220182   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:11.220239   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:11.260418   62670 cri.go:89] found id: ""
	I0704 00:14:11.260445   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.260455   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:11.260462   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:11.260521   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:11.297923   62670 cri.go:89] found id: ""
	I0704 00:14:11.297976   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.297989   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:11.297999   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:11.298083   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:11.335903   62670 cri.go:89] found id: ""
	I0704 00:14:11.335934   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.335945   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:11.335954   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:11.336020   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:11.371965   62670 cri.go:89] found id: ""
	I0704 00:14:11.371997   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.372007   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:11.372013   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:11.372075   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:11.409129   62670 cri.go:89] found id: ""
	I0704 00:14:11.409159   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.409170   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:11.409181   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:11.409194   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:11.464994   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:11.465032   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:11.480084   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:11.480112   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:11.564533   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:11.564560   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:11.564574   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:11.645033   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:11.645068   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:10.323124   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:12.323251   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:12.663774   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:15.161542   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:12.746307   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:15.246158   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:14.195211   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:14.209606   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:14.209660   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:14.252041   62670 cri.go:89] found id: ""
	I0704 00:14:14.252066   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.252081   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:14.252089   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:14.252149   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:14.290619   62670 cri.go:89] found id: ""
	I0704 00:14:14.290647   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.290655   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:14.290660   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:14.290717   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:14.328731   62670 cri.go:89] found id: ""
	I0704 00:14:14.328762   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.328773   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:14.328780   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:14.328842   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:14.370794   62670 cri.go:89] found id: ""
	I0704 00:14:14.370825   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.370835   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:14.370842   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:14.370904   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:14.406474   62670 cri.go:89] found id: ""
	I0704 00:14:14.406505   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.406516   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:14.406523   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:14.406582   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:14.442515   62670 cri.go:89] found id: ""
	I0704 00:14:14.442547   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.442558   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:14.442566   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:14.442624   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:14.480798   62670 cri.go:89] found id: ""
	I0704 00:14:14.480827   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.480838   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:14.480844   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:14.480904   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:14.518187   62670 cri.go:89] found id: ""
	I0704 00:14:14.518210   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.518217   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:14.518225   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:14.518236   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:14.572028   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:14.572060   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:14.586614   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:14.586641   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:14.659339   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:14.659362   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:14.659375   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:14.743802   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:14.743839   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:17.288666   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:17.304531   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:17.304600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:17.348705   62670 cri.go:89] found id: ""
	I0704 00:14:17.348730   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.348738   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:17.348749   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:17.348798   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:17.387821   62670 cri.go:89] found id: ""
	I0704 00:14:17.387844   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.387852   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:17.387858   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:17.387934   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:17.425442   62670 cri.go:89] found id: ""
	I0704 00:14:17.425470   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.425480   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:17.425487   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:17.425545   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:17.471216   62670 cri.go:89] found id: ""
	I0704 00:14:17.471243   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.471255   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:17.471262   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:17.471321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:17.520905   62670 cri.go:89] found id: ""
	I0704 00:14:17.520935   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.520942   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:17.520947   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:17.520997   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:17.577627   62670 cri.go:89] found id: ""
	I0704 00:14:17.577648   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.577655   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:17.577661   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:17.577715   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:17.619018   62670 cri.go:89] found id: ""
	I0704 00:14:17.619046   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.619054   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:17.619061   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:17.619124   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:17.664993   62670 cri.go:89] found id: ""
	I0704 00:14:17.665020   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.665029   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:17.665037   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:17.665049   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:17.743823   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:17.743845   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:17.743857   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:17.821339   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:17.821371   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:17.866189   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:17.866226   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:17.919854   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:17.919903   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:14.823677   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:16.825187   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:17.662772   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:20.161988   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:17.748067   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:20.245022   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:22.246620   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:20.435448   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:20.450556   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:20.450617   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:20.491980   62670 cri.go:89] found id: ""
	I0704 00:14:20.492010   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.492018   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:20.492023   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:20.492071   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:20.532791   62670 cri.go:89] found id: ""
	I0704 00:14:20.532820   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.532829   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:20.532836   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:20.532892   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:20.569604   62670 cri.go:89] found id: ""
	I0704 00:14:20.569628   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.569635   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:20.569641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:20.569688   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:20.610852   62670 cri.go:89] found id: ""
	I0704 00:14:20.610879   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.610887   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:20.610893   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:20.610950   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:20.648891   62670 cri.go:89] found id: ""
	I0704 00:14:20.648912   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.648920   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:20.648925   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:20.648984   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:20.690273   62670 cri.go:89] found id: ""
	I0704 00:14:20.690304   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.690315   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:20.690323   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:20.690381   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:20.725365   62670 cri.go:89] found id: ""
	I0704 00:14:20.725390   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.725398   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:20.725403   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:20.725478   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:20.768530   62670 cri.go:89] found id: ""
	I0704 00:14:20.768559   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.768569   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:20.768579   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:20.768593   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:20.822896   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:20.822932   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:20.838881   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:20.838912   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:20.921516   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:20.921546   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:20.921560   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:20.999517   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:20.999553   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:19.324790   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:21.822737   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:23.823039   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:22.162348   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:24.162631   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:24.745842   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:27.245280   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:23.545947   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:23.560315   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:23.560397   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:23.602540   62670 cri.go:89] found id: ""
	I0704 00:14:23.602583   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.602596   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:23.602604   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:23.602664   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:23.639529   62670 cri.go:89] found id: ""
	I0704 00:14:23.639560   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.639571   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:23.639579   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:23.639644   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:23.687334   62670 cri.go:89] found id: ""
	I0704 00:14:23.687363   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.687374   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:23.687381   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:23.687450   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:23.728388   62670 cri.go:89] found id: ""
	I0704 00:14:23.728419   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.728427   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:23.728434   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:23.728484   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:23.769903   62670 cri.go:89] found id: ""
	I0704 00:14:23.769933   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.769944   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:23.769956   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:23.770016   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:23.810485   62670 cri.go:89] found id: ""
	I0704 00:14:23.810518   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.810529   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:23.810536   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:23.810621   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:23.854534   62670 cri.go:89] found id: ""
	I0704 00:14:23.854571   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.854582   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:23.854589   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:23.854647   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:23.892229   62670 cri.go:89] found id: ""
	I0704 00:14:23.892257   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.892266   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:23.892278   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:23.892291   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:23.944758   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:23.944793   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:23.959115   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:23.959152   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:24.035480   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:24.035501   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:24.035513   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:24.113401   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:24.113447   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:26.655506   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:26.669883   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:26.669964   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:26.705899   62670 cri.go:89] found id: ""
	I0704 00:14:26.705926   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.705934   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:26.705940   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:26.705997   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:26.742991   62670 cri.go:89] found id: ""
	I0704 00:14:26.743016   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.743025   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:26.743031   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:26.743090   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:26.781650   62670 cri.go:89] found id: ""
	I0704 00:14:26.781678   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.781693   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:26.781700   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:26.781760   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:26.816879   62670 cri.go:89] found id: ""
	I0704 00:14:26.816902   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.816909   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:26.816914   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:26.816957   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:26.854271   62670 cri.go:89] found id: ""
	I0704 00:14:26.854301   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.854316   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:26.854324   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:26.854384   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:26.892771   62670 cri.go:89] found id: ""
	I0704 00:14:26.892802   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.892813   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:26.892821   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:26.892880   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:26.931820   62670 cri.go:89] found id: ""
	I0704 00:14:26.931849   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.931859   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:26.931865   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:26.931947   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:26.967633   62670 cri.go:89] found id: ""
	I0704 00:14:26.967659   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.967669   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:26.967679   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:26.967700   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:26.983916   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:26.983951   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:27.063412   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:27.063436   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:27.063451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:27.147005   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:27.147044   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:27.189732   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:27.189759   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:25.824267   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:27.826810   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:26.662688   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:28.663384   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:29.248447   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:31.745919   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:29.747294   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:29.762194   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:29.762272   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:29.799103   62670 cri.go:89] found id: ""
	I0704 00:14:29.799132   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.799142   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:29.799149   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:29.799215   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:29.843373   62670 cri.go:89] found id: ""
	I0704 00:14:29.843399   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.843407   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:29.843412   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:29.843474   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:29.880622   62670 cri.go:89] found id: ""
	I0704 00:14:29.880650   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.880660   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:29.880667   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:29.880724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:29.917560   62670 cri.go:89] found id: ""
	I0704 00:14:29.917590   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.917599   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:29.917605   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:29.917656   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:29.954983   62670 cri.go:89] found id: ""
	I0704 00:14:29.955006   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.955013   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:29.955018   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:29.955068   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:29.991784   62670 cri.go:89] found id: ""
	I0704 00:14:29.991811   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.991819   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:29.991824   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:29.991870   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:30.031174   62670 cri.go:89] found id: ""
	I0704 00:14:30.031203   62670 logs.go:276] 0 containers: []
	W0704 00:14:30.031210   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:30.031218   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:30.031268   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:30.069502   62670 cri.go:89] found id: ""
	I0704 00:14:30.069533   62670 logs.go:276] 0 containers: []
	W0704 00:14:30.069542   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:30.069552   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:30.069567   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:30.111185   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:30.111213   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:30.167419   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:30.167456   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:30.181876   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:30.181908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:30.255378   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:30.255407   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:30.255426   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:32.837655   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:32.853085   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:32.853150   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:32.898490   62670 cri.go:89] found id: ""
	I0704 00:14:32.898520   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.898531   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:32.898540   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:32.898626   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:32.946293   62670 cri.go:89] found id: ""
	I0704 00:14:32.946326   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.946336   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:32.946343   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:32.946402   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:32.983499   62670 cri.go:89] found id: ""
	I0704 00:14:32.983529   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.983540   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:32.983548   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:32.983610   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:33.022340   62670 cri.go:89] found id: ""
	I0704 00:14:33.022362   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.022370   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:33.022375   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:33.022420   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:33.066921   62670 cri.go:89] found id: ""
	I0704 00:14:33.066946   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.066956   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:33.066963   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:33.067024   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:33.116317   62670 cri.go:89] found id: ""
	I0704 00:14:33.116340   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.116348   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:33.116354   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:33.116416   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:33.153301   62670 cri.go:89] found id: ""
	I0704 00:14:33.153332   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.153343   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:33.153350   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:33.153411   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:33.190851   62670 cri.go:89] found id: ""
	I0704 00:14:33.190884   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.190896   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:33.190905   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:33.190917   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:33.248253   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:33.248288   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:30.323119   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:32.823348   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:31.161811   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:33.662270   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:34.246812   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:36.246992   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:33.263593   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:33.263620   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:33.339975   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:33.340000   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:33.340018   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:33.423768   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:33.423814   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:35.969547   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:35.984139   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:35.984219   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:36.028221   62670 cri.go:89] found id: ""
	I0704 00:14:36.028251   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.028263   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:36.028270   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:36.028330   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:36.067331   62670 cri.go:89] found id: ""
	I0704 00:14:36.067362   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.067370   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:36.067375   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:36.067437   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:36.105498   62670 cri.go:89] found id: ""
	I0704 00:14:36.105531   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.105543   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:36.105552   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:36.105618   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:36.144536   62670 cri.go:89] found id: ""
	I0704 00:14:36.144565   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.144576   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:36.144584   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:36.144652   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:36.184010   62670 cri.go:89] found id: ""
	I0704 00:14:36.184035   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.184048   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:36.184053   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:36.184099   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:36.221730   62670 cri.go:89] found id: ""
	I0704 00:14:36.221781   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.221790   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:36.221795   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:36.221843   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:36.261907   62670 cri.go:89] found id: ""
	I0704 00:14:36.261940   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.261952   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:36.261959   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:36.262009   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:36.296878   62670 cri.go:89] found id: ""
	I0704 00:14:36.296899   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.296906   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:36.296915   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:36.296926   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:36.350226   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:36.350265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:36.364632   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:36.364663   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:36.446351   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:36.446382   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:36.446400   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:36.535752   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:36.535802   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:35.322895   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:37.323357   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:36.166275   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:38.662345   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:38.745454   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:41.247351   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:39.079686   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:39.094225   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:39.094291   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:39.139521   62670 cri.go:89] found id: ""
	I0704 00:14:39.139551   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.139563   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:39.139572   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:39.139637   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:39.182411   62670 cri.go:89] found id: ""
	I0704 00:14:39.182439   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.182447   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:39.182453   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:39.182505   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:39.224135   62670 cri.go:89] found id: ""
	I0704 00:14:39.224158   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.224170   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:39.224175   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:39.224237   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:39.264800   62670 cri.go:89] found id: ""
	I0704 00:14:39.264829   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.264839   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:39.264847   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:39.264910   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:39.309072   62670 cri.go:89] found id: ""
	I0704 00:14:39.309102   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.309113   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:39.309121   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:39.309181   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:39.349790   62670 cri.go:89] found id: ""
	I0704 00:14:39.349818   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.349828   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:39.349835   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:39.349895   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:39.387062   62670 cri.go:89] found id: ""
	I0704 00:14:39.387093   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.387105   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:39.387112   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:39.387164   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:39.427503   62670 cri.go:89] found id: ""
	I0704 00:14:39.427530   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.427538   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:39.427546   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:39.427558   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:39.442049   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:39.442076   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:39.525799   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:39.525824   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:39.525840   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:39.602646   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:39.602679   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:39.645739   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:39.645772   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:42.201986   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:42.216166   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:42.216236   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:42.253124   62670 cri.go:89] found id: ""
	I0704 00:14:42.253152   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.253167   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:42.253174   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:42.253231   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:42.293398   62670 cri.go:89] found id: ""
	I0704 00:14:42.293422   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.293430   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:42.293436   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:42.293488   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:42.334382   62670 cri.go:89] found id: ""
	I0704 00:14:42.334412   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.334423   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:42.334430   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:42.334488   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:42.374792   62670 cri.go:89] found id: ""
	I0704 00:14:42.374820   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.374832   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:42.374838   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:42.374889   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:42.416220   62670 cri.go:89] found id: ""
	I0704 00:14:42.416251   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.416263   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:42.416271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:42.416331   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:42.462923   62670 cri.go:89] found id: ""
	I0704 00:14:42.462955   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.462966   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:42.462974   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:42.463043   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:42.503410   62670 cri.go:89] found id: ""
	I0704 00:14:42.503442   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.503452   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:42.503460   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:42.503528   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:42.542599   62670 cri.go:89] found id: ""
	I0704 00:14:42.542623   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.542632   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:42.542639   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:42.542652   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:42.622303   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:42.622328   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:42.622347   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:42.703629   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:42.703666   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:42.747762   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:42.747793   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:42.803506   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:42.803549   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:39.826275   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:42.323764   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:41.163336   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:43.662061   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:45.664452   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:43.745575   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:46.250310   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:45.320238   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:45.334630   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:45.334692   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:45.376760   62670 cri.go:89] found id: ""
	I0704 00:14:45.376785   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.376793   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:45.376797   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:45.376882   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:45.414165   62670 cri.go:89] found id: ""
	I0704 00:14:45.414197   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.414208   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:45.414216   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:45.414278   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:45.451469   62670 cri.go:89] found id: ""
	I0704 00:14:45.451496   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.451504   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:45.451509   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:45.451558   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:45.487994   62670 cri.go:89] found id: ""
	I0704 00:14:45.488025   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.488037   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:45.488051   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:45.488110   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:45.529430   62670 cri.go:89] found id: ""
	I0704 00:14:45.529455   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.529463   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:45.529469   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:45.529520   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:45.571848   62670 cri.go:89] found id: ""
	I0704 00:14:45.571897   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.571909   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:45.571921   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:45.571994   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:45.607804   62670 cri.go:89] found id: ""
	I0704 00:14:45.607828   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.607835   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:45.607840   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:45.607908   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:45.644183   62670 cri.go:89] found id: ""
	I0704 00:14:45.644211   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.644219   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:45.644227   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:45.644240   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:45.727677   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:45.727717   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:45.767528   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:45.767554   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:45.835243   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:45.835285   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:45.849921   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:45.849957   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:45.928404   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:44.823177   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:46.821947   62327 pod_ready.go:81] duration metric: took 4m0.006234793s for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	E0704 00:14:46.821973   62327 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0704 00:14:46.821981   62327 pod_ready.go:38] duration metric: took 4m4.549820824s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:14:46.821996   62327 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:14:46.822029   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:46.822072   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:46.884166   62327 cri.go:89] found id: "2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:46.884208   62327 cri.go:89] found id: ""
	I0704 00:14:46.884217   62327 logs.go:276] 1 containers: [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d]
	I0704 00:14:46.884293   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:46.889964   62327 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:46.890048   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:46.929569   62327 cri.go:89] found id: "e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:46.929601   62327 cri.go:89] found id: ""
	I0704 00:14:46.929609   62327 logs.go:276] 1 containers: [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864]
	I0704 00:14:46.929653   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:46.934896   62327 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:46.934969   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:46.975093   62327 cri.go:89] found id: "ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:46.975116   62327 cri.go:89] found id: ""
	I0704 00:14:46.975125   62327 logs.go:276] 1 containers: [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3]
	I0704 00:14:46.975180   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:46.979604   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:46.979663   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:47.018423   62327 cri.go:89] found id: "bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:47.018442   62327 cri.go:89] found id: ""
	I0704 00:14:47.018449   62327 logs.go:276] 1 containers: [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0]
	I0704 00:14:47.018514   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.022963   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:47.023028   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:47.067573   62327 cri.go:89] found id: "0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:47.067599   62327 cri.go:89] found id: ""
	I0704 00:14:47.067608   62327 logs.go:276] 1 containers: [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78]
	I0704 00:14:47.067657   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.072342   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:47.072426   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:47.111485   62327 cri.go:89] found id: "49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:47.111514   62327 cri.go:89] found id: ""
	I0704 00:14:47.111524   62327 logs.go:276] 1 containers: [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d]
	I0704 00:14:47.111581   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.116173   62327 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:47.116256   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:47.166673   62327 cri.go:89] found id: ""
	I0704 00:14:47.166703   62327 logs.go:276] 0 containers: []
	W0704 00:14:47.166711   62327 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:47.166717   62327 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:14:47.166771   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:14:47.209591   62327 cri.go:89] found id: "5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:47.209626   62327 cri.go:89] found id: "0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:47.209632   62327 cri.go:89] found id: ""
	I0704 00:14:47.209642   62327 logs.go:276] 2 containers: [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085]
	I0704 00:14:47.209699   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.214409   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.218745   62327 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:47.218768   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:47.762248   62327 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:47.762293   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:47.819035   62327 logs.go:123] Gathering logs for kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] ...
	I0704 00:14:47.819077   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:47.874456   62327 logs.go:123] Gathering logs for coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] ...
	I0704 00:14:47.874499   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:47.931685   62327 logs.go:123] Gathering logs for kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] ...
	I0704 00:14:47.931714   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:47.969812   62327 logs.go:123] Gathering logs for kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] ...
	I0704 00:14:47.969842   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:48.023510   62327 logs.go:123] Gathering logs for storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] ...
	I0704 00:14:48.023547   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:48.067970   62327 logs.go:123] Gathering logs for container status ...
	I0704 00:14:48.068001   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:48.121578   62327 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:48.121609   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:48.139510   62327 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:48.139535   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:14:48.264544   62327 logs.go:123] Gathering logs for etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] ...
	I0704 00:14:48.264570   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:48.329270   62327 logs.go:123] Gathering logs for kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] ...
	I0704 00:14:48.329311   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:48.371067   62327 logs.go:123] Gathering logs for storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] ...
	I0704 00:14:48.371097   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:48.162755   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:50.661630   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:48.428750   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:48.442617   62670 kubeadm.go:591] duration metric: took 4m1.823242959s to restartPrimaryControlPlane
	W0704 00:14:48.442701   62670 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0704 00:14:48.442735   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:14:51.574916   62670 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.132142314s)
	I0704 00:14:51.575001   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:14:51.593744   62670 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:14:51.607429   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:14:51.620071   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:14:51.620097   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:14:51.620151   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:14:51.633472   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:14:51.633547   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:14:51.647551   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:14:51.658795   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:14:51.658871   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:14:51.671580   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:14:51.682217   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:14:51.682291   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:14:51.693874   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:14:51.705614   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:14:51.705697   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:14:51.720386   62670 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:14:51.810530   62670 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0704 00:14:51.810597   62670 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:14:51.968629   62670 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:14:51.968735   62670 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:14:51.968851   62670 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:14:52.188159   62670 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:14:48.745609   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:50.746068   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:52.190231   62670 out.go:204]   - Generating certificates and keys ...
	I0704 00:14:52.192011   62670 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:14:52.192101   62670 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:14:52.192206   62670 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:14:52.192311   62670 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:14:52.192412   62670 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:14:52.192488   62670 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:14:52.192573   62670 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:14:52.192648   62670 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:14:52.192747   62670 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:14:52.193086   62670 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:14:52.193249   62670 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:14:52.193335   62670 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:14:52.325727   62670 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:14:52.485153   62670 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:14:52.676389   62670 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:14:52.990595   62670 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:14:53.007051   62670 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:14:53.008346   62670 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:14:53.008434   62670 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:14:53.160272   62670 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:14:53.162449   62670 out.go:204]   - Booting up control plane ...
	I0704 00:14:53.162586   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:14:53.177983   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:14:53.179996   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:14:53.180911   62670 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:14:53.183085   62670 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0704 00:14:50.909242   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:50.926516   62327 api_server.go:72] duration metric: took 4m15.870455521s to wait for apiserver process to appear ...
	I0704 00:14:50.926548   62327 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:14:50.926594   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:50.926650   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:50.969608   62327 cri.go:89] found id: "2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:50.969636   62327 cri.go:89] found id: ""
	I0704 00:14:50.969646   62327 logs.go:276] 1 containers: [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d]
	I0704 00:14:50.969711   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:50.974011   62327 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:50.974081   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:51.016808   62327 cri.go:89] found id: "e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:51.016842   62327 cri.go:89] found id: ""
	I0704 00:14:51.016858   62327 logs.go:276] 1 containers: [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864]
	I0704 00:14:51.016916   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.021297   62327 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:51.021371   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:51.061674   62327 cri.go:89] found id: "ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:51.061699   62327 cri.go:89] found id: ""
	I0704 00:14:51.061707   62327 logs.go:276] 1 containers: [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3]
	I0704 00:14:51.061761   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.066197   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:51.066256   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:51.108727   62327 cri.go:89] found id: "bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:51.108750   62327 cri.go:89] found id: ""
	I0704 00:14:51.108759   62327 logs.go:276] 1 containers: [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0]
	I0704 00:14:51.108805   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.113366   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:51.113425   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:51.156701   62327 cri.go:89] found id: "0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:51.156728   62327 cri.go:89] found id: ""
	I0704 00:14:51.156738   62327 logs.go:276] 1 containers: [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78]
	I0704 00:14:51.156803   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.162817   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:51.162891   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:51.208586   62327 cri.go:89] found id: "49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:51.208609   62327 cri.go:89] found id: ""
	I0704 00:14:51.208618   62327 logs.go:276] 1 containers: [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d]
	I0704 00:14:51.208678   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.213344   62327 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:51.213418   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:51.258697   62327 cri.go:89] found id: ""
	I0704 00:14:51.258721   62327 logs.go:276] 0 containers: []
	W0704 00:14:51.258728   62327 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:51.258733   62327 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:14:51.258783   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:14:51.301317   62327 cri.go:89] found id: "5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:51.301341   62327 cri.go:89] found id: "0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:51.301347   62327 cri.go:89] found id: ""
	I0704 00:14:51.301355   62327 logs.go:276] 2 containers: [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085]
	I0704 00:14:51.301460   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.306678   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.310993   62327 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:51.311014   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:14:51.433280   62327 logs.go:123] Gathering logs for kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] ...
	I0704 00:14:51.433313   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:51.498289   62327 logs.go:123] Gathering logs for coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] ...
	I0704 00:14:51.498325   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:51.538414   62327 logs.go:123] Gathering logs for kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] ...
	I0704 00:14:51.538449   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:51.580194   62327 logs.go:123] Gathering logs for kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] ...
	I0704 00:14:51.580232   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:51.650010   62327 logs.go:123] Gathering logs for container status ...
	I0704 00:14:51.650055   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:51.710727   62327 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:51.710768   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:51.785923   62327 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:51.785963   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:51.803951   62327 logs.go:123] Gathering logs for storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] ...
	I0704 00:14:51.803982   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:51.873020   62327 logs.go:123] Gathering logs for storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] ...
	I0704 00:14:51.873058   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:51.916694   62327 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:51.916725   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:52.378056   62327 logs.go:123] Gathering logs for etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] ...
	I0704 00:14:52.378103   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:52.436795   62327 logs.go:123] Gathering logs for kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] ...
	I0704 00:14:52.436835   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:52.662586   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:55.162992   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:52.746973   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:55.248126   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:54.977972   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:14:54.982697   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0704 00:14:54.983848   62327 api_server.go:141] control plane version: v1.30.2
	I0704 00:14:54.983868   62327 api_server.go:131] duration metric: took 4.057311938s to wait for apiserver health ...
	I0704 00:14:54.983887   62327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:14:54.983920   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:54.983972   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:55.022812   62327 cri.go:89] found id: "2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:55.022839   62327 cri.go:89] found id: ""
	I0704 00:14:55.022849   62327 logs.go:276] 1 containers: [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d]
	I0704 00:14:55.022906   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.027419   62327 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:55.027508   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:55.070889   62327 cri.go:89] found id: "e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:55.070914   62327 cri.go:89] found id: ""
	I0704 00:14:55.070924   62327 logs.go:276] 1 containers: [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864]
	I0704 00:14:55.070979   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.075970   62327 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:55.076036   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:55.121555   62327 cri.go:89] found id: "ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:55.121575   62327 cri.go:89] found id: ""
	I0704 00:14:55.121583   62327 logs.go:276] 1 containers: [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3]
	I0704 00:14:55.121627   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.126320   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:55.126378   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:55.168032   62327 cri.go:89] found id: "bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:55.168062   62327 cri.go:89] found id: ""
	I0704 00:14:55.168070   62327 logs.go:276] 1 containers: [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0]
	I0704 00:14:55.168134   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.172992   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:55.173069   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:55.215593   62327 cri.go:89] found id: "0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:55.215614   62327 cri.go:89] found id: ""
	I0704 00:14:55.215621   62327 logs.go:276] 1 containers: [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78]
	I0704 00:14:55.215668   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.220129   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:55.220203   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:55.266429   62327 cri.go:89] found id: "49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:55.266458   62327 cri.go:89] found id: ""
	I0704 00:14:55.266467   62327 logs.go:276] 1 containers: [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d]
	I0704 00:14:55.266525   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.275640   62327 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:55.275706   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:55.316569   62327 cri.go:89] found id: ""
	I0704 00:14:55.316603   62327 logs.go:276] 0 containers: []
	W0704 00:14:55.316615   62327 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:55.316622   62327 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:14:55.316682   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:14:55.354222   62327 cri.go:89] found id: "5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:55.354248   62327 cri.go:89] found id: "0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:55.354252   62327 cri.go:89] found id: ""
	I0704 00:14:55.354259   62327 logs.go:276] 2 containers: [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085]
	I0704 00:14:55.354305   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.359060   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.363522   62327 logs.go:123] Gathering logs for storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] ...
	I0704 00:14:55.363545   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:55.402950   62327 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:55.402975   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:55.826071   62327 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:55.826108   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:55.882804   62327 logs.go:123] Gathering logs for storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] ...
	I0704 00:14:55.882836   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:55.924690   62327 logs.go:123] Gathering logs for kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] ...
	I0704 00:14:55.924726   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:55.981466   62327 logs.go:123] Gathering logs for etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] ...
	I0704 00:14:55.981500   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:56.043846   62327 logs.go:123] Gathering logs for coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] ...
	I0704 00:14:56.043914   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:56.085096   62327 logs.go:123] Gathering logs for kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] ...
	I0704 00:14:56.085122   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:56.127568   62327 logs.go:123] Gathering logs for kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] ...
	I0704 00:14:56.127601   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:56.169457   62327 logs.go:123] Gathering logs for kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] ...
	I0704 00:14:56.169492   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:56.224005   62327 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:56.224039   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:56.240031   62327 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:56.240059   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:14:56.366718   62327 logs.go:123] Gathering logs for container status ...
	I0704 00:14:56.366759   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:58.924300   62327 system_pods.go:59] 8 kube-system pods found
	I0704 00:14:58.924332   62327 system_pods.go:61] "coredns-7db6d8ff4d-2bn7d" [e6d756a8-df4e-414b-b44c-32fb728c6feb] Running
	I0704 00:14:58.924339   62327 system_pods.go:61] "etcd-embed-certs-687975" [72f47af0-57ca-4529-b6e4-92f543f8ada9] Running
	I0704 00:14:58.924344   62327 system_pods.go:61] "kube-apiserver-embed-certs-687975" [e58beb1b-1984-4a9f-beb1-597cca55a0c0] Running
	I0704 00:14:58.924351   62327 system_pods.go:61] "kube-controller-manager-embed-certs-687975" [bdae6f32-b454-4a66-a3d1-a22c2a64073d] Running
	I0704 00:14:58.924355   62327 system_pods.go:61] "kube-proxy-9phtm" [6b5a4c0e-632d-4c1c-bfa7-f53448618efb] Running
	I0704 00:14:58.924360   62327 system_pods.go:61] "kube-scheduler-embed-certs-687975" [72640f73-25f5-47ec-8da0-396fc31fa653] Running
	I0704 00:14:58.924369   62327 system_pods.go:61] "metrics-server-569cc877fc-jpmsg" [e2561edc-d580-461c-acae-218e6b7a2f67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:14:58.924376   62327 system_pods.go:61] "storage-provisioner" [1ac6edec-3e4e-42bd-8848-1388594611e1] Running
	I0704 00:14:58.924384   62327 system_pods.go:74] duration metric: took 3.940490235s to wait for pod list to return data ...
	I0704 00:14:58.924392   62327 default_sa.go:34] waiting for default service account to be created ...
	I0704 00:14:58.926911   62327 default_sa.go:45] found service account: "default"
	I0704 00:14:58.926930   62327 default_sa.go:55] duration metric: took 2.52887ms for default service account to be created ...
	I0704 00:14:58.926938   62327 system_pods.go:116] waiting for k8s-apps to be running ...
	I0704 00:14:58.933142   62327 system_pods.go:86] 8 kube-system pods found
	I0704 00:14:58.933173   62327 system_pods.go:89] "coredns-7db6d8ff4d-2bn7d" [e6d756a8-df4e-414b-b44c-32fb728c6feb] Running
	I0704 00:14:58.933181   62327 system_pods.go:89] "etcd-embed-certs-687975" [72f47af0-57ca-4529-b6e4-92f543f8ada9] Running
	I0704 00:14:58.933188   62327 system_pods.go:89] "kube-apiserver-embed-certs-687975" [e58beb1b-1984-4a9f-beb1-597cca55a0c0] Running
	I0704 00:14:58.933200   62327 system_pods.go:89] "kube-controller-manager-embed-certs-687975" [bdae6f32-b454-4a66-a3d1-a22c2a64073d] Running
	I0704 00:14:58.933207   62327 system_pods.go:89] "kube-proxy-9phtm" [6b5a4c0e-632d-4c1c-bfa7-f53448618efb] Running
	I0704 00:14:58.933213   62327 system_pods.go:89] "kube-scheduler-embed-certs-687975" [72640f73-25f5-47ec-8da0-396fc31fa653] Running
	I0704 00:14:58.933225   62327 system_pods.go:89] "metrics-server-569cc877fc-jpmsg" [e2561edc-d580-461c-acae-218e6b7a2f67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:14:58.933234   62327 system_pods.go:89] "storage-provisioner" [1ac6edec-3e4e-42bd-8848-1388594611e1] Running
	I0704 00:14:58.933245   62327 system_pods.go:126] duration metric: took 6.300951ms to wait for k8s-apps to be running ...
	I0704 00:14:58.933257   62327 system_svc.go:44] waiting for kubelet service to be running ....
	I0704 00:14:58.933302   62327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:14:58.948861   62327 system_svc.go:56] duration metric: took 15.596446ms WaitForService to wait for kubelet
	I0704 00:14:58.948885   62327 kubeadm.go:576] duration metric: took 4m23.892830394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:14:58.948905   62327 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:14:58.951958   62327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:14:58.951981   62327 node_conditions.go:123] node cpu capacity is 2
	I0704 00:14:58.951991   62327 node_conditions.go:105] duration metric: took 3.081821ms to run NodePressure ...
	I0704 00:14:58.952003   62327 start.go:240] waiting for startup goroutines ...
	I0704 00:14:58.952012   62327 start.go:245] waiting for cluster config update ...
	I0704 00:14:58.952026   62327 start.go:254] writing updated cluster config ...
	I0704 00:14:58.952305   62327 ssh_runner.go:195] Run: rm -f paused
	I0704 00:14:59.001106   62327 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0704 00:14:59.003224   62327 out.go:177] * Done! kubectl is now configured to use "embed-certs-687975" cluster and "default" namespace by default
	I0704 00:14:57.163117   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:59.662680   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:57.746248   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:00.247122   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:02.161384   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:04.162095   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:02.745649   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:04.745980   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:07.245583   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:06.662618   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:08.665863   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:09.246591   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:11.745135   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:11.162596   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:13.163740   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:15.662576   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:13.745872   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:15.746141   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:18.161591   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:20.162965   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:18.245285   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:20.247546   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:22.662152   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:24.662781   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:22.745066   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:24.746068   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:25.247225   62905 pod_ready.go:81] duration metric: took 4m0.008398676s for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	E0704 00:15:25.247253   62905 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0704 00:15:25.247263   62905 pod_ready.go:38] duration metric: took 4m1.998567833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:15:25.247295   62905 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:15:25.247337   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:15:25.247393   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:15:25.305703   62905 cri.go:89] found id: "f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:25.305731   62905 cri.go:89] found id: ""
	I0704 00:15:25.305741   62905 logs.go:276] 1 containers: [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658]
	I0704 00:15:25.305811   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.311662   62905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:15:25.311740   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:15:25.359066   62905 cri.go:89] found id: "5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:25.359091   62905 cri.go:89] found id: ""
	I0704 00:15:25.359100   62905 logs.go:276] 1 containers: [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9]
	I0704 00:15:25.359157   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.364430   62905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:15:25.364512   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:15:25.411897   62905 cri.go:89] found id: "7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:25.411923   62905 cri.go:89] found id: ""
	I0704 00:15:25.411935   62905 logs.go:276] 1 containers: [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a]
	I0704 00:15:25.411991   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.416560   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:15:25.416629   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:15:25.457817   62905 cri.go:89] found id: "06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:25.457844   62905 cri.go:89] found id: ""
	I0704 00:15:25.457853   62905 logs.go:276] 1 containers: [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8]
	I0704 00:15:25.457904   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.462323   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:15:25.462392   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:15:25.502180   62905 cri.go:89] found id: "54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:25.502204   62905 cri.go:89] found id: ""
	I0704 00:15:25.502212   62905 logs.go:276] 1 containers: [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d]
	I0704 00:15:25.502256   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.506759   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:15:25.506817   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:15:25.546268   62905 cri.go:89] found id: "13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:25.546292   62905 cri.go:89] found id: ""
	I0704 00:15:25.546306   62905 logs.go:276] 1 containers: [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e]
	I0704 00:15:25.546365   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.550998   62905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:15:25.551076   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:15:25.588722   62905 cri.go:89] found id: ""
	I0704 00:15:25.588752   62905 logs.go:276] 0 containers: []
	W0704 00:15:25.588762   62905 logs.go:278] No container was found matching "kindnet"
	I0704 00:15:25.588771   62905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:15:25.588832   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:15:25.628294   62905 cri.go:89] found id: "916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:25.628328   62905 cri.go:89] found id: "ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:25.628333   62905 cri.go:89] found id: ""
	I0704 00:15:25.628339   62905 logs.go:276] 2 containers: [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2]
	I0704 00:15:25.628406   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.633517   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.639383   62905 logs.go:123] Gathering logs for kubelet ...
	I0704 00:15:25.639409   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:15:25.701468   62905 logs.go:123] Gathering logs for dmesg ...
	I0704 00:15:25.701507   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:15:25.717059   62905 logs.go:123] Gathering logs for coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] ...
	I0704 00:15:25.717089   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:25.757597   62905 logs.go:123] Gathering logs for kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] ...
	I0704 00:15:25.757624   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:25.798648   62905 logs.go:123] Gathering logs for storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] ...
	I0704 00:15:25.798679   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:25.843607   62905 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:15:25.843644   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:15:26.352356   62905 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:15:26.352403   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:15:26.510039   62905 logs.go:123] Gathering logs for kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] ...
	I0704 00:15:26.510073   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:26.563036   62905 logs.go:123] Gathering logs for etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] ...
	I0704 00:15:26.563102   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:26.606221   62905 logs.go:123] Gathering logs for kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] ...
	I0704 00:15:26.606251   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:26.650488   62905 logs.go:123] Gathering logs for kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] ...
	I0704 00:15:26.650531   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:26.704905   62905 logs.go:123] Gathering logs for storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] ...
	I0704 00:15:26.704937   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:26.743843   62905 logs.go:123] Gathering logs for container status ...
	I0704 00:15:26.743907   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:15:26.664421   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:29.160718   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:29.289651   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:15:29.313028   62905 api_server.go:72] duration metric: took 4m13.798223752s to wait for apiserver process to appear ...
	I0704 00:15:29.313062   62905 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:15:29.313101   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:15:29.313178   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:15:29.359867   62905 cri.go:89] found id: "f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:29.359900   62905 cri.go:89] found id: ""
	I0704 00:15:29.359910   62905 logs.go:276] 1 containers: [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658]
	I0704 00:15:29.359965   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.364602   62905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:15:29.364661   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:15:29.406662   62905 cri.go:89] found id: "5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:29.406690   62905 cri.go:89] found id: ""
	I0704 00:15:29.406697   62905 logs.go:276] 1 containers: [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9]
	I0704 00:15:29.406744   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.413217   62905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:15:29.413305   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:15:29.450066   62905 cri.go:89] found id: "7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:29.450093   62905 cri.go:89] found id: ""
	I0704 00:15:29.450102   62905 logs.go:276] 1 containers: [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a]
	I0704 00:15:29.450163   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.454966   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:15:29.455025   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:15:29.496445   62905 cri.go:89] found id: "06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:29.496465   62905 cri.go:89] found id: ""
	I0704 00:15:29.496471   62905 logs.go:276] 1 containers: [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8]
	I0704 00:15:29.496515   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.501125   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:15:29.501198   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:15:29.543841   62905 cri.go:89] found id: "54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:29.543864   62905 cri.go:89] found id: ""
	I0704 00:15:29.543884   62905 logs.go:276] 1 containers: [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d]
	I0704 00:15:29.543940   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.548613   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:15:29.548673   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:15:29.588709   62905 cri.go:89] found id: "13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:29.588729   62905 cri.go:89] found id: ""
	I0704 00:15:29.588735   62905 logs.go:276] 1 containers: [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e]
	I0704 00:15:29.588780   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.593039   62905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:15:29.593098   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:15:29.631751   62905 cri.go:89] found id: ""
	I0704 00:15:29.631775   62905 logs.go:276] 0 containers: []
	W0704 00:15:29.631782   62905 logs.go:278] No container was found matching "kindnet"
	I0704 00:15:29.631787   62905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:15:29.631841   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:15:29.674894   62905 cri.go:89] found id: "916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:29.674918   62905 cri.go:89] found id: "ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:29.674922   62905 cri.go:89] found id: ""
	I0704 00:15:29.674929   62905 logs.go:276] 2 containers: [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2]
	I0704 00:15:29.674983   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.679600   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.683770   62905 logs.go:123] Gathering logs for kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] ...
	I0704 00:15:29.683788   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:29.731148   62905 logs.go:123] Gathering logs for etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] ...
	I0704 00:15:29.731182   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:29.772172   62905 logs.go:123] Gathering logs for storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] ...
	I0704 00:15:29.772204   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:29.816299   62905 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:15:29.816332   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:15:30.222578   62905 logs.go:123] Gathering logs for kubelet ...
	I0704 00:15:30.222622   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:15:30.284120   62905 logs.go:123] Gathering logs for dmesg ...
	I0704 00:15:30.284169   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:15:30.300219   62905 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:15:30.300260   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:15:30.423779   62905 logs.go:123] Gathering logs for kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] ...
	I0704 00:15:30.423851   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:30.480952   62905 logs.go:123] Gathering logs for storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] ...
	I0704 00:15:30.480993   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:30.526318   62905 logs.go:123] Gathering logs for container status ...
	I0704 00:15:30.526352   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:15:30.574984   62905 logs.go:123] Gathering logs for coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] ...
	I0704 00:15:30.575012   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:30.618244   62905 logs.go:123] Gathering logs for kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] ...
	I0704 00:15:30.618275   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:30.657625   62905 logs.go:123] Gathering logs for kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] ...
	I0704 00:15:30.657649   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:33.184160   62670 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0704 00:15:33.184894   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:33.185105   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:15:31.162060   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:33.162393   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:35.164111   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:33.197007   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:15:33.201786   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 200:
	ok
	I0704 00:15:33.202719   62905 api_server.go:141] control plane version: v1.30.2
	I0704 00:15:33.202738   62905 api_server.go:131] duration metric: took 3.889668496s to wait for apiserver health ...
	I0704 00:15:33.202745   62905 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:15:33.202772   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:15:33.202825   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:15:33.246224   62905 cri.go:89] found id: "f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:33.246259   62905 cri.go:89] found id: ""
	I0704 00:15:33.246272   62905 logs.go:276] 1 containers: [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658]
	I0704 00:15:33.246343   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.256081   62905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:15:33.256160   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:15:33.296808   62905 cri.go:89] found id: "5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:33.296835   62905 cri.go:89] found id: ""
	I0704 00:15:33.296845   62905 logs.go:276] 1 containers: [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9]
	I0704 00:15:33.296902   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.301658   62905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:15:33.301729   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:15:33.353348   62905 cri.go:89] found id: "7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:33.353370   62905 cri.go:89] found id: ""
	I0704 00:15:33.353377   62905 logs.go:276] 1 containers: [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a]
	I0704 00:15:33.353428   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.358334   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:15:33.358413   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:15:33.402593   62905 cri.go:89] found id: "06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:33.402621   62905 cri.go:89] found id: ""
	I0704 00:15:33.402630   62905 logs.go:276] 1 containers: [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8]
	I0704 00:15:33.402696   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.407413   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:15:33.407482   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:15:33.461567   62905 cri.go:89] found id: "54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:33.461591   62905 cri.go:89] found id: ""
	I0704 00:15:33.461599   62905 logs.go:276] 1 containers: [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d]
	I0704 00:15:33.461663   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.467039   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:15:33.467115   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:15:33.510115   62905 cri.go:89] found id: "13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:33.510146   62905 cri.go:89] found id: ""
	I0704 00:15:33.510155   62905 logs.go:276] 1 containers: [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e]
	I0704 00:15:33.510215   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.515217   62905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:15:33.515281   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:15:33.554690   62905 cri.go:89] found id: ""
	I0704 00:15:33.554719   62905 logs.go:276] 0 containers: []
	W0704 00:15:33.554729   62905 logs.go:278] No container was found matching "kindnet"
	I0704 00:15:33.554737   62905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:15:33.554790   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:15:33.601911   62905 cri.go:89] found id: "916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:33.601937   62905 cri.go:89] found id: "ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:33.601944   62905 cri.go:89] found id: ""
	I0704 00:15:33.601952   62905 logs.go:276] 2 containers: [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2]
	I0704 00:15:33.602016   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.606884   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.611328   62905 logs.go:123] Gathering logs for etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] ...
	I0704 00:15:33.611356   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:33.657445   62905 logs.go:123] Gathering logs for coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] ...
	I0704 00:15:33.657484   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:33.698153   62905 logs.go:123] Gathering logs for kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] ...
	I0704 00:15:33.698185   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:33.740393   62905 logs.go:123] Gathering logs for kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] ...
	I0704 00:15:33.740425   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:33.781017   62905 logs.go:123] Gathering logs for kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] ...
	I0704 00:15:33.781048   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:33.844822   62905 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:15:33.844857   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:15:33.966652   62905 logs.go:123] Gathering logs for kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] ...
	I0704 00:15:33.966689   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:34.022085   62905 logs.go:123] Gathering logs for storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] ...
	I0704 00:15:34.022123   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:34.063492   62905 logs.go:123] Gathering logs for storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] ...
	I0704 00:15:34.063515   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:34.102349   62905 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:15:34.102379   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:15:34.472244   62905 logs.go:123] Gathering logs for container status ...
	I0704 00:15:34.472282   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:15:34.525394   62905 logs.go:123] Gathering logs for kubelet ...
	I0704 00:15:34.525427   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:15:34.581994   62905 logs.go:123] Gathering logs for dmesg ...
	I0704 00:15:34.582040   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:15:37.108663   62905 system_pods.go:59] 8 kube-system pods found
	I0704 00:15:37.108698   62905 system_pods.go:61] "coredns-7db6d8ff4d-jmq4s" [f9725f92-7635-4111-bf63-66dbef0155b2] Running
	I0704 00:15:37.108705   62905 system_pods.go:61] "etcd-default-k8s-diff-port-995404" [d5065c53-cda8-4c79-9d88-de10341356a8] Running
	I0704 00:15:37.108710   62905 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-995404" [81395c0d-d19c-4d3d-8935-c35a9507abdd] Running
	I0704 00:15:37.108716   62905 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-995404" [36828d21-843b-409c-a0b3-60293cb50c27] Running
	I0704 00:15:37.108723   62905 system_pods.go:61] "kube-proxy-pplqq" [3b74a8c2-1e91-449d-9be9-8891459dccbc] Running
	I0704 00:15:37.108728   62905 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-995404" [e87cc576-7a8d-43dd-9778-b13d751976be] Running
	I0704 00:15:37.108734   62905 system_pods.go:61] "metrics-server-569cc877fc-v8qw2" [d6a67fb7-5004-4c93-9023-fc470f786ae9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:15:37.108739   62905 system_pods.go:61] "storage-provisioner" [3adc3ff6-282f-4f53-879f-c73d71c76b74] Running
	I0704 00:15:37.108746   62905 system_pods.go:74] duration metric: took 3.905995932s to wait for pod list to return data ...
	I0704 00:15:37.108756   62905 default_sa.go:34] waiting for default service account to be created ...
	I0704 00:15:37.112853   62905 default_sa.go:45] found service account: "default"
	I0704 00:15:37.112885   62905 default_sa.go:55] duration metric: took 4.115587ms for default service account to be created ...
	I0704 00:15:37.112897   62905 system_pods.go:116] waiting for k8s-apps to be running ...
	I0704 00:15:37.119709   62905 system_pods.go:86] 8 kube-system pods found
	I0704 00:15:37.119743   62905 system_pods.go:89] "coredns-7db6d8ff4d-jmq4s" [f9725f92-7635-4111-bf63-66dbef0155b2] Running
	I0704 00:15:37.119749   62905 system_pods.go:89] "etcd-default-k8s-diff-port-995404" [d5065c53-cda8-4c79-9d88-de10341356a8] Running
	I0704 00:15:37.119754   62905 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-995404" [81395c0d-d19c-4d3d-8935-c35a9507abdd] Running
	I0704 00:15:37.119759   62905 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-995404" [36828d21-843b-409c-a0b3-60293cb50c27] Running
	I0704 00:15:37.119765   62905 system_pods.go:89] "kube-proxy-pplqq" [3b74a8c2-1e91-449d-9be9-8891459dccbc] Running
	I0704 00:15:37.119769   62905 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-995404" [e87cc576-7a8d-43dd-9778-b13d751976be] Running
	I0704 00:15:37.119776   62905 system_pods.go:89] "metrics-server-569cc877fc-v8qw2" [d6a67fb7-5004-4c93-9023-fc470f786ae9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:15:37.119782   62905 system_pods.go:89] "storage-provisioner" [3adc3ff6-282f-4f53-879f-c73d71c76b74] Running
	I0704 00:15:37.119791   62905 system_pods.go:126] duration metric: took 6.888276ms to wait for k8s-apps to be running ...
	I0704 00:15:37.119798   62905 system_svc.go:44] waiting for kubelet service to be running ....
	I0704 00:15:37.119855   62905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:15:37.138387   62905 system_svc.go:56] duration metric: took 18.578212ms WaitForService to wait for kubelet
	I0704 00:15:37.138430   62905 kubeadm.go:576] duration metric: took 4m21.623631424s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:15:37.138450   62905 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:15:37.141610   62905 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:15:37.141632   62905 node_conditions.go:123] node cpu capacity is 2
	I0704 00:15:37.141642   62905 node_conditions.go:105] duration metric: took 3.187777ms to run NodePressure ...
	I0704 00:15:37.141654   62905 start.go:240] waiting for startup goroutines ...
	I0704 00:15:37.141662   62905 start.go:245] waiting for cluster config update ...
	I0704 00:15:37.141675   62905 start.go:254] writing updated cluster config ...
	I0704 00:15:37.141954   62905 ssh_runner.go:195] Run: rm -f paused
	I0704 00:15:37.193685   62905 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0704 00:15:37.196118   62905 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-995404" cluster and "default" namespace by default
	I0704 00:15:38.185821   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:38.186070   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:15:37.662971   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:40.161724   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:42.162761   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:44.661578   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:48.186610   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:48.186866   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:15:46.661793   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:48.662395   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:51.161671   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:53.161831   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:55.162342   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:57.162917   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:58.655566   62043 pod_ready.go:81] duration metric: took 4m0.000513164s for pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace to be "Ready" ...
	E0704 00:15:58.655607   62043 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0704 00:15:58.655629   62043 pod_ready.go:38] duration metric: took 4m12.325655973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:15:58.655653   62043 kubeadm.go:591] duration metric: took 4m19.340193897s to restartPrimaryControlPlane
	W0704 00:15:58.655707   62043 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0704 00:15:58.655731   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:16:08.187652   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:16:08.187954   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:16:30.729510   62043 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.073753748s)
	I0704 00:16:30.729594   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:16:30.747332   62043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:16:30.758903   62043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:16:30.769754   62043 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:16:30.769782   62043 kubeadm.go:156] found existing configuration files:
	
	I0704 00:16:30.769834   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:16:30.783216   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:16:30.783292   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:16:30.794254   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:16:30.804395   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:16:30.804456   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:16:30.816148   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:16:30.826591   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:16:30.826658   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:16:30.837473   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:16:30.847334   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:16:30.847423   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:16:30.859291   62043 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:16:31.068598   62043 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:16:39.927189   62043 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0704 00:16:39.927297   62043 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:16:39.927381   62043 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:16:39.927496   62043 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:16:39.927641   62043 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:16:39.927747   62043 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:16:39.929258   62043 out.go:204]   - Generating certificates and keys ...
	I0704 00:16:39.929332   62043 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:16:39.929422   62043 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:16:39.929546   62043 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:16:39.929631   62043 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:16:39.929715   62043 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:16:39.929781   62043 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:16:39.929883   62043 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:16:39.929983   62043 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:16:39.930088   62043 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:16:39.930191   62043 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:16:39.930258   62043 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:16:39.930346   62043 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:16:39.930428   62043 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:16:39.930521   62043 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0704 00:16:39.930604   62043 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:16:39.930691   62043 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:16:39.930784   62043 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:16:39.930889   62043 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:16:39.930980   62043 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:16:39.933368   62043 out.go:204]   - Booting up control plane ...
	I0704 00:16:39.933482   62043 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:16:39.933577   62043 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:16:39.933657   62043 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:16:39.933769   62043 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:16:39.933857   62043 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:16:39.933920   62043 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:16:39.934046   62043 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0704 00:16:39.934156   62043 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0704 00:16:39.934219   62043 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.004952327s
	I0704 00:16:39.934310   62043 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0704 00:16:39.934393   62043 kubeadm.go:309] [api-check] The API server is healthy after 5.002935516s
	I0704 00:16:39.934509   62043 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0704 00:16:39.934646   62043 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0704 00:16:39.934725   62043 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0704 00:16:39.934894   62043 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-317739 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0704 00:16:39.934979   62043 kubeadm.go:309] [bootstrap-token] Using token: 6e60zb.ppocm8st59m5ngyp
	I0704 00:16:39.936353   62043 out.go:204]   - Configuring RBAC rules ...
	I0704 00:16:39.936457   62043 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0704 00:16:39.936546   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0704 00:16:39.936726   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0704 00:16:39.936866   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0704 00:16:39.936999   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0704 00:16:39.937127   62043 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0704 00:16:39.937268   62043 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0704 00:16:39.937339   62043 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0704 00:16:39.937398   62043 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0704 00:16:39.937407   62043 kubeadm.go:309] 
	I0704 00:16:39.937486   62043 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0704 00:16:39.937500   62043 kubeadm.go:309] 
	I0704 00:16:39.937589   62043 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0704 00:16:39.937598   62043 kubeadm.go:309] 
	I0704 00:16:39.937628   62043 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0704 00:16:39.937704   62043 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0704 00:16:39.937770   62043 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0704 00:16:39.937779   62043 kubeadm.go:309] 
	I0704 00:16:39.937870   62043 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0704 00:16:39.937884   62043 kubeadm.go:309] 
	I0704 00:16:39.937953   62043 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0704 00:16:39.937966   62043 kubeadm.go:309] 
	I0704 00:16:39.938045   62043 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0704 00:16:39.938151   62043 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0704 00:16:39.938248   62043 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0704 00:16:39.938257   62043 kubeadm.go:309] 
	I0704 00:16:39.938373   62043 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0704 00:16:39.938469   62043 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0704 00:16:39.938483   62043 kubeadm.go:309] 
	I0704 00:16:39.938602   62043 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6e60zb.ppocm8st59m5ngyp \
	I0704 00:16:39.938721   62043 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 \
	I0704 00:16:39.938740   62043 kubeadm.go:309] 	--control-plane 
	I0704 00:16:39.938746   62043 kubeadm.go:309] 
	I0704 00:16:39.938820   62043 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0704 00:16:39.938829   62043 kubeadm.go:309] 
	I0704 00:16:39.938898   62043 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6e60zb.ppocm8st59m5ngyp \
	I0704 00:16:39.939042   62043 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 
	I0704 00:16:39.939066   62043 cni.go:84] Creating CNI manager for ""
	I0704 00:16:39.939074   62043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:16:39.940769   62043 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:16:39.941987   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:16:39.956586   62043 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0704 00:16:39.980480   62043 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:16:39.980534   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:39.980553   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-317739 minikube.k8s.io/updated_at=2024_07_04T00_16_39_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e minikube.k8s.io/name=no-preload-317739 minikube.k8s.io/primary=true
	I0704 00:16:40.010512   62043 ops.go:34] apiserver oom_adj: -16
	I0704 00:16:40.194381   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:40.695349   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:41.195310   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:41.695082   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:42.194751   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:42.694568   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:43.195382   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:43.695072   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:44.195353   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:44.695020   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:45.195396   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:45.695273   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:48.189618   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:16:48.189879   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:16:48.189893   62670 kubeadm.go:309] 
	I0704 00:16:48.189956   62670 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0704 00:16:48.190000   62670 kubeadm.go:309] 		timed out waiting for the condition
	I0704 00:16:48.190006   62670 kubeadm.go:309] 
	I0704 00:16:48.190074   62670 kubeadm.go:309] 	This error is likely caused by:
	I0704 00:16:48.190142   62670 kubeadm.go:309] 		- The kubelet is not running
	I0704 00:16:48.190322   62670 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0704 00:16:48.190356   62670 kubeadm.go:309] 
	I0704 00:16:48.190487   62670 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0704 00:16:48.190540   62670 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0704 00:16:48.190594   62670 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0704 00:16:48.190603   62670 kubeadm.go:309] 
	I0704 00:16:48.190729   62670 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0704 00:16:48.190826   62670 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0704 00:16:48.190837   62670 kubeadm.go:309] 
	I0704 00:16:48.190930   62670 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0704 00:16:48.191004   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0704 00:16:48.191088   62670 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0704 00:16:48.191183   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0704 00:16:48.191195   62670 kubeadm.go:309] 
	I0704 00:16:48.192106   62670 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:16:48.192225   62670 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0704 00:16:48.192330   62670 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0704 00:16:48.192450   62670 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0704 00:16:48.192496   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:16:48.668935   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:16:48.685425   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:16:48.697089   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:16:48.697111   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:16:48.697182   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:16:48.708605   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:16:48.708681   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:16:48.720739   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:16:48.733032   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:16:48.733106   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:16:48.745632   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:16:48.756211   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:16:48.756285   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:16:48.768006   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:16:48.779384   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:16:48.779455   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:16:48.791913   62670 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:16:48.873701   62670 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0704 00:16:48.873789   62670 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:16:49.029961   62670 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:16:49.030077   62670 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:16:49.030191   62670 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:16:49.228954   62670 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:16:49.231477   62670 out.go:204]   - Generating certificates and keys ...
	I0704 00:16:49.231594   62670 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:16:49.231678   62670 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:16:49.231783   62670 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:16:49.231855   62670 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:16:49.231990   62670 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:16:49.232082   62670 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:16:49.232167   62670 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:16:49.232930   62670 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:16:49.234476   62670 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:16:49.235558   62670 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:16:49.235951   62670 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:16:49.236048   62670 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:16:49.418256   62670 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:16:49.476591   62670 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:16:49.586596   62670 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:16:49.856731   62670 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:16:49.878852   62670 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:16:49.885877   62670 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:16:49.885948   62670 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:16:50.048252   62670 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:16:46.194714   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:46.695192   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:47.195476   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:47.694768   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:48.194497   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:48.695370   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:49.194707   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:49.695417   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:50.194404   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:50.694941   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:50.050273   62670 out.go:204]   - Booting up control plane ...
	I0704 00:16:50.050428   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:16:50.055514   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:16:50.056609   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:16:50.057448   62670 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:16:50.060021   62670 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0704 00:16:51.194471   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:51.695481   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:52.194406   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:52.695193   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:53.194613   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:53.695053   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:53.812778   62043 kubeadm.go:1107] duration metric: took 13.832294794s to wait for elevateKubeSystemPrivileges
	W0704 00:16:53.812817   62043 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0704 00:16:53.812828   62043 kubeadm.go:393] duration metric: took 5m14.556024253s to StartCluster
	I0704 00:16:53.812849   62043 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:16:53.812944   62043 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:16:53.815420   62043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:16:53.815750   62043 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.109 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:16:53.815862   62043 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:16:53.815956   62043 addons.go:69] Setting storage-provisioner=true in profile "no-preload-317739"
	I0704 00:16:53.815987   62043 addons.go:234] Setting addon storage-provisioner=true in "no-preload-317739"
	I0704 00:16:53.815990   62043 config.go:182] Loaded profile config "no-preload-317739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	W0704 00:16:53.815998   62043 addons.go:243] addon storage-provisioner should already be in state true
	I0704 00:16:53.816029   62043 host.go:66] Checking if "no-preload-317739" exists ...
	I0704 00:16:53.816023   62043 addons.go:69] Setting default-storageclass=true in profile "no-preload-317739"
	I0704 00:16:53.816052   62043 addons.go:69] Setting metrics-server=true in profile "no-preload-317739"
	I0704 00:16:53.816063   62043 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-317739"
	I0704 00:16:53.816091   62043 addons.go:234] Setting addon metrics-server=true in "no-preload-317739"
	W0704 00:16:53.816104   62043 addons.go:243] addon metrics-server should already be in state true
	I0704 00:16:53.816139   62043 host.go:66] Checking if "no-preload-317739" exists ...
	I0704 00:16:53.816491   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.816512   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.816491   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.816561   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.816590   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.816605   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.817558   62043 out.go:177] * Verifying Kubernetes components...
	I0704 00:16:53.818908   62043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:16:53.836028   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I0704 00:16:53.836591   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.837131   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.837162   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.837199   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37289
	I0704 00:16:53.837270   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39901
	I0704 00:16:53.837613   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.837621   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.837980   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.838004   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.838066   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.838265   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.838302   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.838330   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.838533   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.838555   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.838612   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.838911   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.839349   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.839374   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.842221   62043 addons.go:234] Setting addon default-storageclass=true in "no-preload-317739"
	W0704 00:16:53.842240   62043 addons.go:243] addon default-storageclass should already be in state true
	I0704 00:16:53.842267   62043 host.go:66] Checking if "no-preload-317739" exists ...
	I0704 00:16:53.842587   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.842606   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.854293   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36813
	I0704 00:16:53.855044   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.855658   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.855675   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.856226   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.856425   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.858286   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42143
	I0704 00:16:53.858484   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:16:53.858667   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.859270   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.859293   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.859815   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.860358   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.860380   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.860383   62043 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0704 00:16:53.861890   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0704 00:16:53.861914   62043 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0704 00:16:53.861937   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:16:53.864121   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36239
	I0704 00:16:53.864570   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.865343   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.865366   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.865859   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.866064   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.866282   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.866379   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:16:53.866407   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.866572   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:16:53.866780   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:16:53.866996   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:16:53.867166   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:16:53.868067   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:16:53.869898   62043 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:16:53.871321   62043 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:16:53.871339   62043 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:16:53.871355   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:16:53.874930   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.875361   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:16:53.875392   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.875623   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:16:53.875841   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:16:53.876024   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:16:53.876184   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:16:53.880965   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45769
	I0704 00:16:53.881655   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.882115   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.882130   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.882471   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.882659   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.884596   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:16:53.884855   62043 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:16:53.884866   62043 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:16:53.884879   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:16:53.887764   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.888336   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:16:53.888371   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.888411   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:16:53.888619   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:16:53.888749   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:16:53.888849   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:16:54.097387   62043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:16:54.122578   62043 node_ready.go:35] waiting up to 6m0s for node "no-preload-317739" to be "Ready" ...
	I0704 00:16:54.136010   62043 node_ready.go:49] node "no-preload-317739" has status "Ready":"True"
	I0704 00:16:54.136036   62043 node_ready.go:38] duration metric: took 13.422954ms for node "no-preload-317739" to be "Ready" ...
	I0704 00:16:54.136048   62043 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:16:54.141532   62043 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:54.200381   62043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:16:54.234350   62043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:16:54.284641   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0704 00:16:54.284664   62043 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0704 00:16:54.346056   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0704 00:16:54.346081   62043 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0704 00:16:54.424564   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:16:54.424593   62043 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0704 00:16:54.496088   62043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:16:54.977271   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977304   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977308   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977327   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977603   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:54.977640   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977640   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977647   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:54.977654   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:54.977657   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977663   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977665   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977710   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:54.977756   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977935   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977947   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:54.977959   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:54.977991   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977999   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.037104   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:55.037130   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:55.037591   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:55.037626   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:55.037639   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.331464   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:55.331492   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:55.331859   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:55.331895   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.331903   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:55.331911   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:55.331926   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:55.332178   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:55.332245   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:55.332262   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.332280   62043 addons.go:475] Verifying addon metrics-server=true in "no-preload-317739"
	I0704 00:16:55.334057   62043 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0704 00:16:55.335756   62043 addons.go:510] duration metric: took 1.519891021s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0704 00:16:56.152756   62043 pod_ready.go:102] pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace has status "Ready":"False"
	I0704 00:16:56.650840   62043 pod_ready.go:92] pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.650866   62043 pod_ready.go:81] duration metric: took 2.509305019s for pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.650876   62043 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qnrtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.656253   62043 pod_ready.go:92] pod "coredns-7db6d8ff4d-qnrtm" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.656276   62043 pod_ready.go:81] duration metric: took 5.391742ms for pod "coredns-7db6d8ff4d-qnrtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.656285   62043 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.661076   62043 pod_ready.go:92] pod "etcd-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.661097   62043 pod_ready.go:81] duration metric: took 4.806155ms for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.661105   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.666895   62043 pod_ready.go:92] pod "kube-apiserver-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.666923   62043 pod_ready.go:81] duration metric: took 5.809974ms for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.666936   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.671252   62043 pod_ready.go:92] pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.671277   62043 pod_ready.go:81] duration metric: took 4.332286ms for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.671289   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xxfrd" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.046037   62043 pod_ready.go:92] pod "kube-proxy-xxfrd" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:57.046062   62043 pod_ready.go:81] duration metric: took 374.766496ms for pod "kube-proxy-xxfrd" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.046072   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.446038   62043 pod_ready.go:92] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:57.446063   62043 pod_ready.go:81] duration metric: took 399.983632ms for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.446071   62043 pod_ready.go:38] duration metric: took 3.310013568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:16:57.446085   62043 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:16:57.446131   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:16:57.461033   62043 api_server.go:72] duration metric: took 3.645241569s to wait for apiserver process to appear ...
	I0704 00:16:57.461057   62043 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:16:57.461075   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:16:57.465509   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 200:
	ok
	I0704 00:16:57.466733   62043 api_server.go:141] control plane version: v1.30.2
	I0704 00:16:57.466755   62043 api_server.go:131] duration metric: took 5.690997ms to wait for apiserver health ...
	I0704 00:16:57.466764   62043 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:16:57.651488   62043 system_pods.go:59] 9 kube-system pods found
	I0704 00:16:57.651514   62043 system_pods.go:61] "coredns-7db6d8ff4d-cxq59" [a7d3a64b-8d7d-455c-990c-0e496f8cf461] Running
	I0704 00:16:57.651519   62043 system_pods.go:61] "coredns-7db6d8ff4d-qnrtm" [3ab51a52-571c-4533-86d2-7293368ac2ee] Running
	I0704 00:16:57.651522   62043 system_pods.go:61] "etcd-no-preload-317739" [d1cdc7d3-b6b9-4fbf-a9a4-94309df12aec] Running
	I0704 00:16:57.651525   62043 system_pods.go:61] "kube-apiserver-no-preload-317739" [6caeae1f-258b-4d8f-8922-73eb94eb92cb] Running
	I0704 00:16:57.651528   62043 system_pods.go:61] "kube-controller-manager-no-preload-317739" [28b1a875-8c1d-41a3-8b0b-6e1b7a584a03] Running
	I0704 00:16:57.651531   62043 system_pods.go:61] "kube-proxy-xxfrd" [29b1b3ed-9c18-4fae-bf43-5da22cf90f6b] Running
	I0704 00:16:57.651533   62043 system_pods.go:61] "kube-scheduler-no-preload-317739" [bdb0442c-9e42-4e09-93cc-0e8dc067eaff] Running
	I0704 00:16:57.651541   62043 system_pods.go:61] "metrics-server-569cc877fc-t28ff" [942f97bf-57cf-46fe-9a10-4a4171357239] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:16:57.651549   62043 system_pods.go:61] "storage-provisioner" [d2ab9324-5df0-4232-aef4-be29bfc4c082] Running
	I0704 00:16:57.651559   62043 system_pods.go:74] duration metric: took 184.788668ms to wait for pod list to return data ...
	I0704 00:16:57.651573   62043 default_sa.go:34] waiting for default service account to be created ...
	I0704 00:16:57.845632   62043 default_sa.go:45] found service account: "default"
	I0704 00:16:57.845665   62043 default_sa.go:55] duration metric: took 194.081328ms for default service account to be created ...
	I0704 00:16:57.845678   62043 system_pods.go:116] waiting for k8s-apps to be running ...
	I0704 00:16:58.050844   62043 system_pods.go:86] 9 kube-system pods found
	I0704 00:16:58.050873   62043 system_pods.go:89] "coredns-7db6d8ff4d-cxq59" [a7d3a64b-8d7d-455c-990c-0e496f8cf461] Running
	I0704 00:16:58.050878   62043 system_pods.go:89] "coredns-7db6d8ff4d-qnrtm" [3ab51a52-571c-4533-86d2-7293368ac2ee] Running
	I0704 00:16:58.050882   62043 system_pods.go:89] "etcd-no-preload-317739" [d1cdc7d3-b6b9-4fbf-a9a4-94309df12aec] Running
	I0704 00:16:58.050887   62043 system_pods.go:89] "kube-apiserver-no-preload-317739" [6caeae1f-258b-4d8f-8922-73eb94eb92cb] Running
	I0704 00:16:58.050891   62043 system_pods.go:89] "kube-controller-manager-no-preload-317739" [28b1a875-8c1d-41a3-8b0b-6e1b7a584a03] Running
	I0704 00:16:58.050896   62043 system_pods.go:89] "kube-proxy-xxfrd" [29b1b3ed-9c18-4fae-bf43-5da22cf90f6b] Running
	I0704 00:16:58.050900   62043 system_pods.go:89] "kube-scheduler-no-preload-317739" [bdb0442c-9e42-4e09-93cc-0e8dc067eaff] Running
	I0704 00:16:58.050906   62043 system_pods.go:89] "metrics-server-569cc877fc-t28ff" [942f97bf-57cf-46fe-9a10-4a4171357239] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:16:58.050911   62043 system_pods.go:89] "storage-provisioner" [d2ab9324-5df0-4232-aef4-be29bfc4c082] Running
	I0704 00:16:58.050918   62043 system_pods.go:126] duration metric: took 205.235998ms to wait for k8s-apps to be running ...
	I0704 00:16:58.050925   62043 system_svc.go:44] waiting for kubelet service to be running ....
	I0704 00:16:58.050969   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:16:58.066005   62043 system_svc.go:56] duration metric: took 15.072089ms WaitForService to wait for kubelet
	I0704 00:16:58.066036   62043 kubeadm.go:576] duration metric: took 4.250246725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:16:58.066060   62043 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:16:58.245974   62043 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:16:58.245998   62043 node_conditions.go:123] node cpu capacity is 2
	I0704 00:16:58.246009   62043 node_conditions.go:105] duration metric: took 179.943846ms to run NodePressure ...
	I0704 00:16:58.246020   62043 start.go:240] waiting for startup goroutines ...
	I0704 00:16:58.246026   62043 start.go:245] waiting for cluster config update ...
	I0704 00:16:58.246036   62043 start.go:254] writing updated cluster config ...
	I0704 00:16:58.246307   62043 ssh_runner.go:195] Run: rm -f paused
	I0704 00:16:58.298998   62043 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0704 00:16:58.301199   62043 out.go:177] * Done! kubectl is now configured to use "no-preload-317739" cluster and "default" namespace by default
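	(Editor's note, not part of the captured log: for the profile that did start cleanly, a minimal sketch of how the reported end state could be re-checked from the host. It assumes the kubectl context name matches the profile name printed in the "Done!" line above and that the addons listed as enabled are the ones to verify; these commands are standard kubectl/minikube usage, not commands taken from this run.)
	
		# list the kube-system pods the log reported as Running/Pending
		kubectl --context no-preload-317739 get pods -n kube-system
		# confirm the metrics-server APIService was registered by the addon
		kubectl --context no-preload-317739 get apiservices | grep metrics
		# cross-check the addon states minikube reported as enabled
		minikube -p no-preload-317739 addons list | grep -E 'metrics-server|storage-provisioner|default-storageclass'
	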
	I0704 00:17:30.062515   62670 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0704 00:17:30.062908   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:30.063105   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:17:35.063408   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:35.063668   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:17:45.064118   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:45.064391   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:05.065047   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:18:05.065263   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:45.064458   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:18:45.064676   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:45.064703   62670 kubeadm.go:309] 
	I0704 00:18:45.064756   62670 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0704 00:18:45.064825   62670 kubeadm.go:309] 		timed out waiting for the condition
	I0704 00:18:45.064842   62670 kubeadm.go:309] 
	I0704 00:18:45.064918   62670 kubeadm.go:309] 	This error is likely caused by:
	I0704 00:18:45.064954   62670 kubeadm.go:309] 		- The kubelet is not running
	I0704 00:18:45.065086   62670 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0704 00:18:45.065110   62670 kubeadm.go:309] 
	I0704 00:18:45.065271   62670 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0704 00:18:45.065326   62670 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0704 00:18:45.065392   62670 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0704 00:18:45.065401   62670 kubeadm.go:309] 
	I0704 00:18:45.065550   62670 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0704 00:18:45.065631   62670 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0704 00:18:45.065638   62670 kubeadm.go:309] 
	I0704 00:18:45.065734   62670 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0704 00:18:45.065807   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0704 00:18:45.065871   62670 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0704 00:18:45.065939   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0704 00:18:45.065947   62670 kubeadm.go:309] 
	I0704 00:18:45.066520   62670 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:18:45.066601   62670 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0704 00:18:45.066689   62670 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0704 00:18:45.066780   62670 kubeadm.go:393] duration metric: took 7m58.506286251s to StartCluster
	I0704 00:18:45.066839   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:18:45.066927   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:18:45.120297   62670 cri.go:89] found id: ""
	I0704 00:18:45.120326   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.120334   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:18:45.120339   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:18:45.120402   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:18:45.158038   62670 cri.go:89] found id: ""
	I0704 00:18:45.158064   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.158074   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:18:45.158082   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:18:45.158138   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:18:45.195937   62670 cri.go:89] found id: ""
	I0704 00:18:45.195967   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.195978   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:18:45.195985   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:18:45.196043   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:18:45.236822   62670 cri.go:89] found id: ""
	I0704 00:18:45.236842   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.236850   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:18:45.236856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:18:45.236901   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:18:45.277811   62670 cri.go:89] found id: ""
	I0704 00:18:45.277840   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.277848   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:18:45.277854   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:18:45.277915   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:18:45.318942   62670 cri.go:89] found id: ""
	I0704 00:18:45.318972   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.318984   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:18:45.318994   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:18:45.319058   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:18:45.360745   62670 cri.go:89] found id: ""
	I0704 00:18:45.360772   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.360780   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:18:45.360785   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:18:45.360843   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:18:45.405336   62670 cri.go:89] found id: ""
	I0704 00:18:45.405359   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.405370   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:18:45.405381   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:18:45.405400   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:18:45.514196   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:18:45.514237   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:18:45.560207   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:18:45.560235   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:18:45.615066   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:18:45.615113   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:18:45.630701   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:18:45.630731   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:18:45.725249   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0704 00:18:45.725281   62670 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0704 00:18:45.725315   62670 out.go:239] * 
	W0704 00:18:45.725360   62670 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0704 00:18:45.725383   62670 out.go:239] * 
	W0704 00:18:45.726603   62670 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0704 00:18:45.729981   62670 out.go:177] 
	W0704 00:18:45.731124   62670 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0704 00:18:45.731169   62670 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0704 00:18:45.731186   62670 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0704 00:18:45.732514   62670 out.go:177] 
	
	
	==> CRI-O <==
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.683844872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052327683822453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=803af164-bcd9-4758-9c71-f1ac95a79ed2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.684402866Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7320d1df-1c64-4585-8b0e-dd089440f4aa name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.684455214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7320d1df-1c64-4585-8b0e-dd089440f4aa name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.684486947Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7320d1df-1c64-4585-8b0e-dd089440f4aa name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.716511253Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00ebee6f-14c4-4fdc-9d76-9787657ec47d name=/runtime.v1.RuntimeService/Version
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.716581616Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00ebee6f-14c4-4fdc-9d76-9787657ec47d name=/runtime.v1.RuntimeService/Version
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.717728196Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78bebdd1-1606-4fea-8766-f1ec0722e8a4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.718088672Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052327718062787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78bebdd1-1606-4fea-8766-f1ec0722e8a4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.718727575Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2e9e627-28b2-4eaf-868f-cef770cb3885 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.718780674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2e9e627-28b2-4eaf-868f-cef770cb3885 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.718820098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f2e9e627-28b2-4eaf-868f-cef770cb3885 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.754312376Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67691362-3817-4339-997c-30e85172bf84 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.754455269Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67691362-3817-4339-997c-30e85172bf84 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.755689448Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d924a3f5-c289-4978-a697-df33c6807cbd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.756127206Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052327756104649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d924a3f5-c289-4978-a697-df33c6807cbd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.756752010Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e7118e0-d40d-4a6f-8829-23efb9c58e9a name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.756821198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e7118e0-d40d-4a6f-8829-23efb9c58e9a name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.756854791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9e7118e0-d40d-4a6f-8829-23efb9c58e9a name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.794236922Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8269e63c-5a08-4f7b-89f1-2fcc33ad9ae8 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.794390755Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8269e63c-5a08-4f7b-89f1-2fcc33ad9ae8 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.795645275Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a864130d-58cf-48e0-9a96-6d31f73c26b2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.796048881Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052327796019599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a864130d-58cf-48e0-9a96-6d31f73c26b2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.796645966Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=249bccc5-b7e5-4ae8-8530-dc5acf80fafb name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.796702352Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=249bccc5-b7e5-4ae8-8530-dc5acf80fafb name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:18:47 old-k8s-version-979033 crio[644]: time="2024-07-04 00:18:47.797072482Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=249bccc5-b7e5-4ae8-8530-dc5acf80fafb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul 4 00:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054432] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041342] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.731817] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.437901] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.394657] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.740177] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.073688] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074920] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.184099] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.154476] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.272154] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.964143] systemd-fstab-generator[836]: Ignoring "noauto" option for root device
	[  +0.063078] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.822817] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[Jul 4 00:11] kauditd_printk_skb: 46 callbacks suppressed
	[Jul 4 00:14] systemd-fstab-generator[4947]: Ignoring "noauto" option for root device
	[Jul 4 00:16] systemd-fstab-generator[5229]: Ignoring "noauto" option for root device
	[  +0.072411] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 00:18:47 up 8 min,  0 users,  load average: 0.08, 0.05, 0.01
	Linux old-k8s-version-979033 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5406]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5406]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5406]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5406]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000bdd6f0)
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5406]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5406]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000cbfef0, 0x4f0ac20, 0xc000b2bcc0, 0x1, 0xc00009e0c0)
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5406]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5406]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000264d20, 0xc00009e0c0)
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5406]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5406]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5406]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5406]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bd2760, 0xc000bd8dc0)
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5406]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5406]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5406]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 04 00:18:45 old-k8s-version-979033 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 04 00:18:45 old-k8s-version-979033 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 04 00:18:45 old-k8s-version-979033 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 04 00:18:45 old-k8s-version-979033 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 04 00:18:45 old-k8s-version-979033 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5471]: I0704 00:18:45.785998    5471 server.go:416] Version: v1.20.0
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5471]: I0704 00:18:45.786443    5471 server.go:837] Client rotation is on, will bootstrap in background
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5471]: I0704 00:18:45.789763    5471 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5471]: W0704 00:18:45.791105    5471 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 04 00:18:45 old-k8s-version-979033 kubelet[5471]: I0704 00:18:45.791516    5471 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-979033 -n old-k8s-version-979033
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-979033 -n old-k8s-version-979033: exit status 2 (235.342187ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-979033" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (736.28s)
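For reference, the kubelet on this v1.20.0 node never became healthy, and the failure output above already names the diagnostics to run. A minimal triage sketch, assuming the profile name from this run (old-k8s-version-979033); the first three commands run on the node (for example via minikube ssh -p old-k8s-version-979033), the last one from the host:

	# Kubelet unit state and recent journal entries (suggested by kubeadm above)
	systemctl status kubelet
	journalctl -xeu kubelet
	# Any control-plane containers CRI-O started (this run listed none)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Suggestion printed by minikube for this failure mode
	minikube start -p old-k8s-version-979033 --extra-config=kubelet.cgroup-driver=systemd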

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-995404 -n default-k8s-diff-port-995404
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-995404 -n default-k8s-diff-port-995404: exit status 3 (3.167438265s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0704 00:06:53.212233   62778 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host
	E0704 00:06:53.212265   62778 status.go:131] status error: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-995404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-995404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153449191s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-995404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-995404 -n default-k8s-diff-port-995404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-995404 -n default-k8s-diff-port-995404: exit status 3 (3.062556921s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0704 00:07:02.428318   62859 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host
	E0704 00:07:02.428356   62859 status.go:131] status error: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-995404" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
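Both the status check and the addon enable above fail on the same SSH dial error (no route to host) while the VM is still mid-stop. A minimal re-check sketch, assuming the same profile name and reusing the exact commands from this run; the test itself treats a non-zero status exit as acceptable once the host reports Stopped:

	# Wait until the host actually reports "Stopped" rather than "Error"
	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-995404
	# Then retry the addon enable that exited with MK_ADDON_ENABLE_PAUSED above
	out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-995404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4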

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0704 00:15:20.414068   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-687975 -n embed-certs-687975
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-04 00:23:59.564833719 +0000 UTC m=+5832.498071473
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
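The wait above polls for pods labelled k8s-app=kubernetes-dashboard and times out after 9m0s. A manual spot-check sketch, assuming the kubeconfig context matches the profile name as elsewhere in this report:

	# Namespace and label selector taken from the test output above
	kubectl --context embed-certs-687975 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide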
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-687975 -n embed-certs-687975
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-687975 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-687975 logs -n 25: (2.191712646s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-768841 -- sudo                         | cert-options-768841          | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:00 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-768841                                 | cert-options-768841          | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:00 UTC |
	| start   | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-652205                           | kubernetes-upgrade-652205    | jenkins | v1.33.1 | 04 Jul 24 00:01 UTC | 04 Jul 24 00:01 UTC |
	| start   | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:01 UTC | 04 Jul 24 00:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-979438                              | cert-expiration-979438       | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-979438                              | cert-expiration-979438       | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-029653 | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | disable-driver-mounts-029653                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:04 UTC |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-317739             | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-687975            | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:03 UTC | 04 Jul 24 00:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-995404  | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC | 04 Jul 24 00:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC |                     |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-979033        | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-317739                  | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC | 04 Jul 24 00:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-687975                 | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC | 04 Jul 24 00:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-979033                              | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC | 04 Jul 24 00:06 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-979033             | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC | 04 Jul 24 00:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-979033                              | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-995404       | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:07 UTC | 04 Jul 24 00:15 UTC |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/04 00:07:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0704 00:07:02.474140   62905 out.go:291] Setting OutFile to fd 1 ...
	I0704 00:07:02.474416   62905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:07:02.474427   62905 out.go:304] Setting ErrFile to fd 2...
	I0704 00:07:02.474431   62905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:07:02.474642   62905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0704 00:07:02.475219   62905 out.go:298] Setting JSON to false
	I0704 00:07:02.476307   62905 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6562,"bootTime":1720045060,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0704 00:07:02.476381   62905 start.go:139] virtualization: kvm guest
	I0704 00:07:02.478637   62905 out.go:177] * [default-k8s-diff-port-995404] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0704 00:07:02.480018   62905 notify.go:220] Checking for updates...
	I0704 00:07:02.480039   62905 out.go:177]   - MINIKUBE_LOCATION=18998
	I0704 00:07:02.481260   62905 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0704 00:07:02.482587   62905 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:07:02.483820   62905 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0704 00:07:02.484969   62905 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0704 00:07:02.486122   62905 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0704 00:07:02.487811   62905 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:07:02.488453   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:07:02.488538   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:07:02.503924   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0704 00:07:02.504316   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:07:02.504904   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:07:02.504924   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:07:02.505253   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:07:02.505457   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:07:02.505724   62905 driver.go:392] Setting default libvirt URI to qemu:///system
	I0704 00:07:02.506039   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:07:02.506081   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:07:02.521645   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0704 00:07:02.522115   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:07:02.522596   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:07:02.522618   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:07:02.522945   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:07:02.523144   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:07:02.557351   62905 out.go:177] * Using the kvm2 driver based on existing profile
	I0704 00:07:02.558600   62905 start.go:297] selected driver: kvm2
	I0704 00:07:02.558620   62905 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:07:02.558762   62905 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0704 00:07:02.559468   62905 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:07:02.559562   62905 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0704 00:07:02.575228   62905 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0704 00:07:02.575603   62905 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:07:02.575680   62905 cni.go:84] Creating CNI manager for ""
	I0704 00:07:02.575697   62905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:07:02.575749   62905 start.go:340] cluster config:
	{Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:07:02.575887   62905 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:07:02.577884   62905 out.go:177] * Starting "default-k8s-diff-port-995404" primary control-plane node in "default-k8s-diff-port-995404" cluster
	I0704 00:07:01.500168   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:02.579179   62905 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:07:02.579227   62905 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0704 00:07:02.579238   62905 cache.go:56] Caching tarball of preloaded images
	I0704 00:07:02.579331   62905 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0704 00:07:02.579344   62905 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0704 00:07:02.579446   62905 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/config.json ...
	I0704 00:07:02.579752   62905 start.go:360] acquireMachinesLock for default-k8s-diff-port-995404: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0704 00:07:07.580107   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:10.652249   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:16.732106   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:19.804162   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:25.884146   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:28.956241   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:35.036158   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:38.108118   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:44.188129   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:47.260270   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:53.340147   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:56.412123   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:02.492156   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:05.564174   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:11.644195   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:14.716226   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:20.796193   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:23.868215   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:29.948219   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:33.020164   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:39.100138   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:42.172138   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:48.252157   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:51.324205   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:57.404167   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:00.476183   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:06.556184   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:09.628167   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:15.708158   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:18.780202   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:24.860209   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:27.932273   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:34.012145   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:37.084155   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:43.164171   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:46.236155   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:52.316187   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:55.388138   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:58.392192   62327 start.go:364] duration metric: took 4m4.42362175s to acquireMachinesLock for "embed-certs-687975"
	I0704 00:09:58.392250   62327 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:09:58.392266   62327 fix.go:54] fixHost starting: 
	I0704 00:09:58.392607   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:09:58.392633   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:09:58.408783   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45683
	I0704 00:09:58.409328   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:09:58.409898   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:09:58.409918   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:09:58.410234   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:09:58.410438   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:09:58.410602   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:09:58.412175   62327 fix.go:112] recreateIfNeeded on embed-certs-687975: state=Stopped err=<nil>
	I0704 00:09:58.412200   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	W0704 00:09:58.412361   62327 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:09:58.414467   62327 out.go:177] * Restarting existing kvm2 VM for "embed-certs-687975" ...
	I0704 00:09:58.415958   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Start
	I0704 00:09:58.416159   62327 main.go:141] libmachine: (embed-certs-687975) Ensuring networks are active...
	I0704 00:09:58.417105   62327 main.go:141] libmachine: (embed-certs-687975) Ensuring network default is active
	I0704 00:09:58.417440   62327 main.go:141] libmachine: (embed-certs-687975) Ensuring network mk-embed-certs-687975 is active
	I0704 00:09:58.417879   62327 main.go:141] libmachine: (embed-certs-687975) Getting domain xml...
	I0704 00:09:58.418765   62327 main.go:141] libmachine: (embed-certs-687975) Creating domain...
	I0704 00:09:58.389743   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:09:58.389787   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:09:58.390105   62043 buildroot.go:166] provisioning hostname "no-preload-317739"
	I0704 00:09:58.390132   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:09:58.390388   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:09:58.392051   62043 machine.go:97] duration metric: took 4m37.421604249s to provisionDockerMachine
	I0704 00:09:58.392103   62043 fix.go:56] duration metric: took 4m37.444018711s for fixHost
	I0704 00:09:58.392111   62043 start.go:83] releasing machines lock for "no-preload-317739", held for 4m37.444044667s
	W0704 00:09:58.392131   62043 start.go:713] error starting host: provision: host is not running
	W0704 00:09:58.392245   62043 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0704 00:09:58.392263   62043 start.go:728] Will try again in 5 seconds ...
	I0704 00:09:59.657066   62327 main.go:141] libmachine: (embed-certs-687975) Waiting to get IP...
	I0704 00:09:59.657930   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:09:59.658398   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:09:59.658456   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:09:59.658368   63531 retry.go:31] will retry after 267.829987ms: waiting for machine to come up
	I0704 00:09:59.928142   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:09:59.928694   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:09:59.928720   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:09:59.928646   63531 retry.go:31] will retry after 240.308314ms: waiting for machine to come up
	I0704 00:10:00.170098   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:00.170541   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:00.170571   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:00.170481   63531 retry.go:31] will retry after 424.462623ms: waiting for machine to come up
	I0704 00:10:00.596288   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:00.596726   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:00.596755   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:00.596671   63531 retry.go:31] will retry after 450.228437ms: waiting for machine to come up
	I0704 00:10:01.048174   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:01.048731   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:01.048758   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:01.048689   63531 retry.go:31] will retry after 583.591642ms: waiting for machine to come up
	I0704 00:10:01.633432   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:01.633773   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:01.633806   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:01.633721   63531 retry.go:31] will retry after 789.480552ms: waiting for machine to come up
	I0704 00:10:02.424987   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:02.425388   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:02.425424   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:02.425329   63531 retry.go:31] will retry after 764.760669ms: waiting for machine to come up
	I0704 00:10:03.191570   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:03.191924   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:03.191953   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:03.191859   63531 retry.go:31] will retry after 1.415422425s: waiting for machine to come up
	I0704 00:10:03.392486   62043 start.go:360] acquireMachinesLock for no-preload-317739: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0704 00:10:04.608804   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:04.609306   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:04.609336   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:04.609244   63531 retry.go:31] will retry after 1.426962337s: waiting for machine to come up
	I0704 00:10:06.038152   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:06.038630   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:06.038685   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:06.038604   63531 retry.go:31] will retry after 1.511071665s: waiting for machine to come up
	I0704 00:10:07.551435   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:07.551977   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:07.552000   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:07.551934   63531 retry.go:31] will retry after 2.275490025s: waiting for machine to come up
	I0704 00:10:09.829070   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:09.829545   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:09.829577   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:09.829480   63531 retry.go:31] will retry after 3.272884116s: waiting for machine to come up
	I0704 00:10:13.103857   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:13.104320   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:13.104356   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:13.104267   63531 retry.go:31] will retry after 4.532823906s: waiting for machine to come up
	I0704 00:10:17.642356   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.642900   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has current primary IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.642923   62327 main.go:141] libmachine: (embed-certs-687975) Found IP for machine: 192.168.39.213
	I0704 00:10:17.642935   62327 main.go:141] libmachine: (embed-certs-687975) Reserving static IP address...
	I0704 00:10:17.643368   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "embed-certs-687975", mac: "52:54:00:ee:64:73", ip: "192.168.39.213"} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.643397   62327 main.go:141] libmachine: (embed-certs-687975) DBG | skip adding static IP to network mk-embed-certs-687975 - found existing host DHCP lease matching {name: "embed-certs-687975", mac: "52:54:00:ee:64:73", ip: "192.168.39.213"}
	I0704 00:10:17.643408   62327 main.go:141] libmachine: (embed-certs-687975) Reserved static IP address: 192.168.39.213
	I0704 00:10:17.643421   62327 main.go:141] libmachine: (embed-certs-687975) Waiting for SSH to be available...
	I0704 00:10:17.643433   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Getting to WaitForSSH function...
	I0704 00:10:17.645723   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.646019   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.646047   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.646176   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Using SSH client type: external
	I0704 00:10:17.646199   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa (-rw-------)
	I0704 00:10:17.646264   62327 main.go:141] libmachine: (embed-certs-687975) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:10:17.646288   62327 main.go:141] libmachine: (embed-certs-687975) DBG | About to run SSH command:
	I0704 00:10:17.646306   62327 main.go:141] libmachine: (embed-certs-687975) DBG | exit 0
	I0704 00:10:17.772683   62327 main.go:141] libmachine: (embed-certs-687975) DBG | SSH cmd err, output: <nil>: 
	I0704 00:10:17.773080   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetConfigRaw
	I0704 00:10:17.773695   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:17.776766   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.777155   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.777197   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.777469   62327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/config.json ...
	I0704 00:10:17.777698   62327 machine.go:94] provisionDockerMachine start ...
	I0704 00:10:17.777721   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:17.777970   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:17.780304   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.780636   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.780667   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.780800   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:17.780985   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.781136   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.781354   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:17.781533   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:17.781729   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:17.781740   62327 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:10:17.884677   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:10:17.884711   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetMachineName
	I0704 00:10:17.884940   62327 buildroot.go:166] provisioning hostname "embed-certs-687975"
	I0704 00:10:17.884967   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetMachineName
	I0704 00:10:17.885180   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:17.887980   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.888394   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.888417   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.888502   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:17.888758   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.888960   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.889102   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:17.889335   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:17.889538   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:17.889557   62327 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-687975 && echo "embed-certs-687975" | sudo tee /etc/hostname
	I0704 00:10:18.006597   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-687975
	
	I0704 00:10:18.006624   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.009477   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.009772   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.009805   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.009942   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.010148   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.010315   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.010485   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.010664   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:18.010821   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:18.010836   62327 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-687975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-687975/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-687975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:10:18.121310   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:10:18.121350   62327 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:10:18.121374   62327 buildroot.go:174] setting up certificates
	I0704 00:10:18.121395   62327 provision.go:84] configureAuth start
	I0704 00:10:18.121411   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetMachineName
	I0704 00:10:18.121701   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:18.124118   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.124499   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.124528   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.124646   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.126489   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.126778   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.126802   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.126913   62327 provision.go:143] copyHostCerts
	I0704 00:10:18.126987   62327 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:10:18.127002   62327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:10:18.127090   62327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:10:18.127222   62327 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:10:18.127232   62327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:10:18.127272   62327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:10:18.127348   62327 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:10:18.127357   62327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:10:18.127388   62327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:10:18.127461   62327 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.embed-certs-687975 san=[127.0.0.1 192.168.39.213 embed-certs-687975 localhost minikube]
	I0704 00:10:18.451857   62327 provision.go:177] copyRemoteCerts
	I0704 00:10:18.451947   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:10:18.451980   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.454696   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.455051   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.455076   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.455301   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.455512   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.455675   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.455798   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:18.540053   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:10:18.566392   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0704 00:10:18.593268   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0704 00:10:18.619051   62327 provision.go:87] duration metric: took 497.642815ms to configureAuth
	I0704 00:10:18.619081   62327 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:10:18.619299   62327 config.go:182] Loaded profile config "embed-certs-687975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:10:18.619386   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.621773   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.622057   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.622087   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.622249   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.622475   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.622632   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.622760   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.622971   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:18.623143   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:18.623160   62327 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:10:19.141009   62670 start.go:364] duration metric: took 3m45.774576164s to acquireMachinesLock for "old-k8s-version-979033"
	I0704 00:10:19.141068   62670 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:10:19.141115   62670 fix.go:54] fixHost starting: 
	I0704 00:10:19.141561   62670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:19.141591   62670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:19.159844   62670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43373
	I0704 00:10:19.160353   62670 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:19.160945   62670 main.go:141] libmachine: Using API Version  1
	I0704 00:10:19.160971   62670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:19.161347   62670 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:19.161640   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:19.161799   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetState
	I0704 00:10:19.163575   62670 fix.go:112] recreateIfNeeded on old-k8s-version-979033: state=Stopped err=<nil>
	I0704 00:10:19.163597   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	W0704 00:10:19.163753   62670 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:10:19.165906   62670 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-979033" ...
	I0704 00:10:18.904225   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:10:18.904256   62327 machine.go:97] duration metric: took 1.126543823s to provisionDockerMachine
	I0704 00:10:18.904269   62327 start.go:293] postStartSetup for "embed-certs-687975" (driver="kvm2")
	I0704 00:10:18.904283   62327 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:10:18.904304   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:18.904626   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:10:18.904652   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.907391   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.907864   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.907915   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.908206   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.908453   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.908623   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.908768   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:18.991583   62327 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:10:18.996145   62327 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:10:18.996187   62327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:10:18.996255   62327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:10:18.996341   62327 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:10:18.996443   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:10:19.006978   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:19.033605   62327 start.go:296] duration metric: took 129.322677ms for postStartSetup
	I0704 00:10:19.033643   62327 fix.go:56] duration metric: took 20.641387402s for fixHost
	I0704 00:10:19.033663   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:19.036302   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.036813   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.036877   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.036919   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:19.037115   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.037307   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.037488   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:19.037687   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:19.037888   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:19.037905   62327 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:10:19.140855   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051819.116387913
	
	I0704 00:10:19.140878   62327 fix.go:216] guest clock: 1720051819.116387913
	I0704 00:10:19.140885   62327 fix.go:229] Guest: 2024-07-04 00:10:19.116387913 +0000 UTC Remote: 2024-07-04 00:10:19.033646932 +0000 UTC m=+265.206951926 (delta=82.740981ms)
	I0704 00:10:19.140914   62327 fix.go:200] guest clock delta is within tolerance: 82.740981ms
	I0704 00:10:19.140920   62327 start.go:83] releasing machines lock for "embed-certs-687975", held for 20.748686488s
	I0704 00:10:19.140951   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.141280   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:19.144343   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.144774   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.144802   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.144975   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.145590   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.145810   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.145896   62327 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:10:19.145941   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:19.146048   62327 ssh_runner.go:195] Run: cat /version.json
	I0704 00:10:19.146074   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:19.148955   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.148977   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.149312   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.149339   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.149470   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.149493   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.149555   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:19.149755   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.149831   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:19.149921   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:19.150094   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.150096   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:19.150293   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:19.150459   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:19.250910   62327 ssh_runner.go:195] Run: systemctl --version
	I0704 00:10:19.257541   62327 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:10:19.413446   62327 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:10:19.419871   62327 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:10:19.419985   62327 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:10:19.439141   62327 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:10:19.439171   62327 start.go:494] detecting cgroup driver to use...
	I0704 00:10:19.439253   62327 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:10:19.457474   62327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:10:19.479279   62327 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:10:19.479353   62327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:10:19.498771   62327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:10:19.513968   62327 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:10:19.640950   62327 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:10:19.817181   62327 docker.go:233] disabling docker service ...
	I0704 00:10:19.817248   62327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:10:19.838524   62327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:10:19.855479   62327 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:10:19.976564   62327 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:10:20.106140   62327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:10:20.121152   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:10:20.143893   62327 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:10:20.143965   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.156806   62327 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:10:20.156892   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.168660   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.180592   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.192151   62327 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:10:20.204202   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.215502   62327 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.235355   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.246834   62327 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:10:20.264718   62327 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:10:20.264786   62327 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:10:20.280133   62327 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:10:20.291521   62327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:20.416530   62327 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:10:20.567852   62327 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:10:20.567952   62327 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:10:20.572992   62327 start.go:562] Will wait 60s for crictl version
	I0704 00:10:20.573052   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:10:20.577295   62327 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:10:20.617746   62327 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:10:20.617840   62327 ssh_runner.go:195] Run: crio --version
	I0704 00:10:20.648158   62327 ssh_runner.go:195] Run: crio --version
	I0704 00:10:20.682039   62327 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:10:19.167360   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .Start
	I0704 00:10:19.167575   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring networks are active...
	I0704 00:10:19.168591   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring network default is active
	I0704 00:10:19.169064   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring network mk-old-k8s-version-979033 is active
	I0704 00:10:19.169488   62670 main.go:141] libmachine: (old-k8s-version-979033) Getting domain xml...
	I0704 00:10:19.170309   62670 main.go:141] libmachine: (old-k8s-version-979033) Creating domain...
	I0704 00:10:20.487278   62670 main.go:141] libmachine: (old-k8s-version-979033) Waiting to get IP...
	I0704 00:10:20.488195   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.488679   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.488751   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.488643   63677 retry.go:31] will retry after 227.362639ms: waiting for machine to come up
	I0704 00:10:20.718322   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.718794   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.718820   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.718766   63677 retry.go:31] will retry after 266.291784ms: waiting for machine to come up
	I0704 00:10:20.986238   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.986779   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.986805   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.986726   63677 retry.go:31] will retry after 308.137887ms: waiting for machine to come up
	I0704 00:10:21.296450   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:21.297052   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:21.297085   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:21.297001   63677 retry.go:31] will retry after 400.976495ms: waiting for machine to come up
	I0704 00:10:21.699758   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:21.700266   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:21.700299   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:21.700227   63677 retry.go:31] will retry after 464.329709ms: waiting for machine to come up
	I0704 00:10:22.165905   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:22.166452   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:22.166482   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:22.166393   63677 retry.go:31] will retry after 652.357119ms: waiting for machine to come up
	I0704 00:10:22.820302   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:22.820777   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:22.820800   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:22.820725   63677 retry.go:31] will retry after 835.974316ms: waiting for machine to come up
	I0704 00:10:20.683820   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:20.686663   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:20.687040   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:20.687070   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:20.687312   62327 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0704 00:10:20.691953   62327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:20.705149   62327 kubeadm.go:877] updating cluster {Name:embed-certs-687975 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-687975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:10:20.705368   62327 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:10:20.705433   62327 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:20.748549   62327 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:10:20.748613   62327 ssh_runner.go:195] Run: which lz4
	I0704 00:10:20.752991   62327 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0704 00:10:20.757764   62327 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:10:20.757810   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0704 00:10:22.395918   62327 crio.go:462] duration metric: took 1.642974021s to copy over tarball
	I0704 00:10:22.396029   62327 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:10:23.658976   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:23.659482   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:23.659509   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:23.659432   63677 retry.go:31] will retry after 1.244693887s: waiting for machine to come up
	I0704 00:10:24.906359   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:24.906769   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:24.906801   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:24.906733   63677 retry.go:31] will retry after 1.212336933s: waiting for machine to come up
	I0704 00:10:26.121130   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:26.121655   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:26.121684   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:26.121599   63677 retry.go:31] will retry after 1.622791006s: waiting for machine to come up
	I0704 00:10:27.745848   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:27.746399   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:27.746427   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:27.746349   63677 retry.go:31] will retry after 2.596558781s: waiting for machine to come up
	I0704 00:10:24.757599   62327 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.3615352s)
	I0704 00:10:24.757639   62327 crio.go:469] duration metric: took 2.361688123s to extract the tarball
	I0704 00:10:24.757650   62327 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:10:24.796023   62327 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:24.842665   62327 crio.go:514] all images are preloaded for cri-o runtime.
	I0704 00:10:24.842691   62327 cache_images.go:84] Images are preloaded, skipping loading
	I0704 00:10:24.842699   62327 kubeadm.go:928] updating node { 192.168.39.213 8443 v1.30.2 crio true true} ...
	I0704 00:10:24.842805   62327 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-687975 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-687975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:10:24.842891   62327 ssh_runner.go:195] Run: crio config
	I0704 00:10:24.892918   62327 cni.go:84] Creating CNI manager for ""
	I0704 00:10:24.892952   62327 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:10:24.892979   62327 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:10:24.893021   62327 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.213 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-687975 NodeName:embed-certs-687975 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:10:24.893288   62327 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-687975"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:10:24.893372   62327 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:10:24.905019   62327 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:10:24.905092   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:10:24.919465   62327 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0704 00:10:24.942754   62327 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:10:24.965089   62327 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0704 00:10:24.988121   62327 ssh_runner.go:195] Run: grep 192.168.39.213	control-plane.minikube.internal$ /etc/hosts
	I0704 00:10:24.993425   62327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:25.006830   62327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:25.145124   62327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:10:25.164000   62327 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975 for IP: 192.168.39.213
	I0704 00:10:25.164021   62327 certs.go:194] generating shared ca certs ...
	I0704 00:10:25.164036   62327 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:25.164285   62327 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:10:25.164361   62327 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:10:25.164375   62327 certs.go:256] generating profile certs ...
	I0704 00:10:25.164522   62327 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/client.key
	I0704 00:10:25.164598   62327 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/apiserver.key.c5f2d6ca
	I0704 00:10:25.164657   62327 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/proxy-client.key
	I0704 00:10:25.164816   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:10:25.164875   62327 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:10:25.164889   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:10:25.164918   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:10:25.164949   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:10:25.164983   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:10:25.165049   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:25.165801   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:10:25.203822   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:10:25.240795   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:10:25.273743   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:10:25.312678   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0704 00:10:25.339172   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:10:25.365805   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:10:25.392155   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0704 00:10:25.417662   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:10:25.445025   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:10:25.472697   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:10:25.505204   62327 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:10:25.536867   62327 ssh_runner.go:195] Run: openssl version
	I0704 00:10:25.543487   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:10:25.555550   62327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:25.560599   62327 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:25.560678   62327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:25.566757   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:10:25.578244   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:10:25.590271   62327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:10:25.595409   62327 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:10:25.595475   62327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:10:25.601755   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:10:25.614572   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:10:25.627445   62327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:10:25.632631   62327 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:10:25.632688   62327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:10:25.639047   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:10:25.651199   62327 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:10:25.656829   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:10:25.663869   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:10:25.670993   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:10:25.678309   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:10:25.685282   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:10:25.692383   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0704 00:10:25.699625   62327 kubeadm.go:391] StartCluster: {Name:embed-certs-687975 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:embed-certs-687975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:10:25.700176   62327 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:10:25.700240   62327 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:25.744248   62327 cri.go:89] found id: ""
	I0704 00:10:25.744323   62327 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:10:25.755623   62327 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:10:25.755643   62327 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:10:25.755648   62327 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:10:25.755697   62327 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:10:25.766631   62327 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:10:25.767627   62327 kubeconfig.go:125] found "embed-certs-687975" server: "https://192.168.39.213:8443"
	I0704 00:10:25.769625   62327 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:10:25.781667   62327 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.213
	I0704 00:10:25.781710   62327 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:10:25.781723   62327 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:10:25.781774   62327 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:25.829584   62327 cri.go:89] found id: ""
	I0704 00:10:25.829669   62327 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:10:25.847738   62327 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:10:25.859825   62327 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:10:25.859864   62327 kubeadm.go:156] found existing configuration files:
	
	I0704 00:10:25.859931   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:10:25.869666   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:10:25.869722   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:10:25.879997   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:10:25.889905   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:10:25.889982   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:10:25.900023   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:10:25.909669   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:10:25.909733   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:10:25.919933   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:10:25.929422   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:10:25.929499   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:10:25.939577   62327 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:10:25.949669   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:26.088494   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.367443   62327 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.278903285s)
	I0704 00:10:27.367492   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.626929   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.739721   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.860860   62327 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:10:27.860938   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:28.361670   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:30.344595   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:30.345134   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:30.345157   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:30.345089   63677 retry.go:31] will retry after 2.372913839s: waiting for machine to come up
	I0704 00:10:32.719441   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:32.719866   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:32.719910   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:32.719827   63677 retry.go:31] will retry after 3.651406896s: waiting for machine to come up
	I0704 00:10:28.861698   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:28.883024   62327 api_server.go:72] duration metric: took 1.02216952s to wait for apiserver process to appear ...
	I0704 00:10:28.883057   62327 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:10:28.883083   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:28.883625   62327 api_server.go:269] stopped: https://192.168.39.213:8443/healthz: Get "https://192.168.39.213:8443/healthz": dial tcp 192.168.39.213:8443: connect: connection refused
	I0704 00:10:29.383561   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:31.679543   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:10:31.679578   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:10:31.679594   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:31.754659   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:31.754696   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:31.883935   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:31.927087   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:31.927130   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:32.383560   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:32.389095   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:32.389129   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:32.883827   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:32.890357   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:32.890385   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:33.383944   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:33.388951   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0704 00:10:33.396092   62327 api_server.go:141] control plane version: v1.30.2
	I0704 00:10:33.396119   62327 api_server.go:131] duration metric: took 4.513054882s to wait for apiserver health ...
	I0704 00:10:33.396130   62327 cni.go:84] Creating CNI manager for ""
	I0704 00:10:33.396136   62327 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:10:33.398181   62327 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:10:33.399682   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:10:33.411938   62327 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0704 00:10:33.436710   62327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:10:33.447604   62327 system_pods.go:59] 8 kube-system pods found
	I0704 00:10:33.447639   62327 system_pods.go:61] "coredns-7db6d8ff4d-2bn7d" [e6d756a8-df4e-414b-b44c-32fb728c6feb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0704 00:10:33.447649   62327 system_pods.go:61] "etcd-embed-certs-687975" [72f47af0-57ca-4529-b6e4-92f543f8ada9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0704 00:10:33.447658   62327 system_pods.go:61] "kube-apiserver-embed-certs-687975" [e58beb1b-1984-4a9f-beb1-597cca55a0c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0704 00:10:33.447663   62327 system_pods.go:61] "kube-controller-manager-embed-certs-687975" [bdae6f32-b454-4a66-a3d1-a22c2a64073d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0704 00:10:33.447668   62327 system_pods.go:61] "kube-proxy-9phtm" [6b5a4c0e-632d-4c1c-bfa7-f53448618efb] Running
	I0704 00:10:33.447673   62327 system_pods.go:61] "kube-scheduler-embed-certs-687975" [72640f73-25f5-47ec-8da0-396fc31fa653] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0704 00:10:33.447678   62327 system_pods.go:61] "metrics-server-569cc877fc-jpmsg" [e2561edc-d580-461c-acae-218e6b7a2f67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:10:33.447682   62327 system_pods.go:61] "storage-provisioner" [1ac6edec-3e4e-42bd-8848-1388594611e1] Running
	I0704 00:10:33.447688   62327 system_pods.go:74] duration metric: took 10.954745ms to wait for pod list to return data ...
	I0704 00:10:33.447696   62327 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:10:33.452408   62327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:10:33.452448   62327 node_conditions.go:123] node cpu capacity is 2
	I0704 00:10:33.452460   62327 node_conditions.go:105] duration metric: took 4.757567ms to run NodePressure ...
	I0704 00:10:33.452476   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:33.724052   62327 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0704 00:10:33.732188   62327 kubeadm.go:733] kubelet initialised
	I0704 00:10:33.732211   62327 kubeadm.go:734] duration metric: took 8.128083ms waiting for restarted kubelet to initialise ...
	I0704 00:10:33.732220   62327 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:10:33.739344   62327 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.746483   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.746509   62327 pod_ready.go:81] duration metric: took 7.141056ms for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.746519   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.746526   62327 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.755457   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "etcd-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.755489   62327 pod_ready.go:81] duration metric: took 8.954479ms for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.755502   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "etcd-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.755512   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.762439   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.762476   62327 pod_ready.go:81] duration metric: took 6.95216ms for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.762489   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.762501   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.842246   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.842281   62327 pod_ready.go:81] duration metric: took 79.767249ms for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.842294   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.842303   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:34.240034   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-proxy-9phtm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.240061   62327 pod_ready.go:81] duration metric: took 397.745361ms for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:34.240070   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-proxy-9phtm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.240076   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:34.640781   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.640808   62327 pod_ready.go:81] duration metric: took 400.726608ms for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:34.640818   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.640823   62327 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:35.040614   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:35.040646   62327 pod_ready.go:81] duration metric: took 399.813017ms for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:35.040656   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:35.040662   62327 pod_ready.go:38] duration metric: took 1.308435069s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:10:35.040678   62327 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:10:35.053971   62327 ops.go:34] apiserver oom_adj: -16
	I0704 00:10:35.053997   62327 kubeadm.go:591] duration metric: took 9.298343033s to restartPrimaryControlPlane
	I0704 00:10:35.054008   62327 kubeadm.go:393] duration metric: took 9.354393795s to StartCluster
	I0704 00:10:35.054028   62327 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:35.054114   62327 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:10:35.055656   62327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:35.056019   62327 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:10:35.056104   62327 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:10:35.056189   62327 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-687975"
	I0704 00:10:35.056217   62327 config.go:182] Loaded profile config "embed-certs-687975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:10:35.056226   62327 addons.go:69] Setting default-storageclass=true in profile "embed-certs-687975"
	I0704 00:10:35.056234   62327 addons.go:69] Setting metrics-server=true in profile "embed-certs-687975"
	I0704 00:10:35.056256   62327 addons.go:234] Setting addon metrics-server=true in "embed-certs-687975"
	I0704 00:10:35.056257   62327 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-687975"
	W0704 00:10:35.056268   62327 addons.go:243] addon metrics-server should already be in state true
	I0704 00:10:35.056302   62327 host.go:66] Checking if "embed-certs-687975" exists ...
	I0704 00:10:35.056229   62327 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-687975"
	W0704 00:10:35.056354   62327 addons.go:243] addon storage-provisioner should already be in state true
	I0704 00:10:35.056383   62327 host.go:66] Checking if "embed-certs-687975" exists ...
	I0704 00:10:35.056630   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.056653   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.056661   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.056689   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.056702   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.056729   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.058101   62327 out.go:177] * Verifying Kubernetes components...
	I0704 00:10:35.059927   62327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:35.072266   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43485
	I0704 00:10:35.072542   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0704 00:10:35.072699   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.072965   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.073191   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.073229   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.073455   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.073479   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.073608   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.073799   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.073838   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.074311   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.074344   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.076024   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44145
	I0704 00:10:35.076434   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.076866   62327 addons.go:234] Setting addon default-storageclass=true in "embed-certs-687975"
	W0704 00:10:35.076884   62327 addons.go:243] addon default-storageclass should already be in state true
	I0704 00:10:35.076905   62327 host.go:66] Checking if "embed-certs-687975" exists ...
	I0704 00:10:35.076965   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.076997   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.077241   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.077273   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.077376   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.077901   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.077951   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.091096   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I0704 00:10:35.091624   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.092231   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.092260   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.092643   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.092738   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0704 00:10:35.092820   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.093059   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.093555   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.093577   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.093913   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.094537   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:35.094743   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.094764   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.096976   62327 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:35.098487   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I0704 00:10:35.098597   62327 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:10:35.098614   62327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:10:35.098632   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:35.098888   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.099368   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.099386   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.099749   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.100200   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.102539   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:35.103028   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.103608   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:35.103637   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.103791   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:35.104008   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:35.104177   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:35.104316   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:35.104776   62327 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0704 00:10:35.106239   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0704 00:10:35.106260   62327 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0704 00:10:35.106313   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:35.109978   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.110458   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:35.110491   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.110684   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:35.110925   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:35.111025   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45051
	I0704 00:10:35.111091   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:35.111227   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:35.111488   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.111977   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.112005   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.112295   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.112482   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.113980   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:35.114185   62327 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:10:35.114203   62327 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:10:35.114222   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:35.117197   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.117777   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:35.117823   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.118056   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:35.118258   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:35.118426   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:35.118562   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:35.242007   62327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:10:35.267240   62327 node_ready.go:35] waiting up to 6m0s for node "embed-certs-687975" to be "Ready" ...
	I0704 00:10:35.326233   62327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:10:35.329804   62327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:10:35.431863   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0704 00:10:35.431908   62327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0704 00:10:35.490138   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0704 00:10:35.490165   62327 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0704 00:10:35.547996   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:10:35.548021   62327 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0704 00:10:35.578762   62327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:10:36.321372   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321411   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.321432   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321448   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.321794   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.321808   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.321812   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.321823   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.321825   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.321834   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321833   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.321841   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321854   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.321842   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.322111   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.322142   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.322153   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.322155   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.322182   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.322191   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.329094   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.329117   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.329531   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.329608   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.329625   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.424191   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.424216   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.424645   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.424676   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.424692   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.424707   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.424719   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.424987   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.425000   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.425012   62327 addons.go:475] Verifying addon metrics-server=true in "embed-certs-687975"
	I0704 00:10:36.427165   62327 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0704 00:10:37.761464   62905 start.go:364] duration metric: took 3m35.181652384s to acquireMachinesLock for "default-k8s-diff-port-995404"
	I0704 00:10:37.761548   62905 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:10:37.761575   62905 fix.go:54] fixHost starting: 
	I0704 00:10:37.761919   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:37.761952   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:37.779708   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0704 00:10:37.780347   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:37.780870   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:10:37.780895   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:37.781249   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:37.781513   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:37.781688   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:10:37.783447   62905 fix.go:112] recreateIfNeeded on default-k8s-diff-port-995404: state=Stopped err=<nil>
	I0704 00:10:37.783495   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	W0704 00:10:37.783674   62905 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:10:37.785628   62905 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-995404" ...
	I0704 00:10:36.373099   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.373583   62670 main.go:141] libmachine: (old-k8s-version-979033) Found IP for machine: 192.168.72.59
	I0704 00:10:36.373615   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has current primary IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.373628   62670 main.go:141] libmachine: (old-k8s-version-979033) Reserving static IP address...
	I0704 00:10:36.374030   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "old-k8s-version-979033", mac: "52:54:00:af:98:c8", ip: "192.168.72.59"} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.374068   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | skip adding static IP to network mk-old-k8s-version-979033 - found existing host DHCP lease matching {name: "old-k8s-version-979033", mac: "52:54:00:af:98:c8", ip: "192.168.72.59"}
	I0704 00:10:36.374082   62670 main.go:141] libmachine: (old-k8s-version-979033) Reserved static IP address: 192.168.72.59
	I0704 00:10:36.374113   62670 main.go:141] libmachine: (old-k8s-version-979033) Waiting for SSH to be available...
	I0704 00:10:36.374130   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Getting to WaitForSSH function...
	I0704 00:10:36.376363   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.376711   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.376747   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.376945   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using SSH client type: external
	I0704 00:10:36.376975   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa (-rw-------)
	I0704 00:10:36.377011   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:10:36.377024   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | About to run SSH command:
	I0704 00:10:36.377062   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | exit 0
	I0704 00:10:36.504300   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | SSH cmd err, output: <nil>: 
	I0704 00:10:36.504681   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetConfigRaw
	I0704 00:10:36.505301   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:36.507826   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.508270   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.508297   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.508605   62670 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/config.json ...
	I0704 00:10:36.508844   62670 machine.go:94] provisionDockerMachine start ...
	I0704 00:10:36.508865   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:36.509148   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.511475   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.511792   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.511815   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.512017   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.512205   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.512360   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.512502   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.512667   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.512836   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.512846   62670 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:10:36.616643   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:10:36.616673   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.616962   62670 buildroot.go:166] provisioning hostname "old-k8s-version-979033"
	I0704 00:10:36.616992   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.617185   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.620028   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.620368   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.620387   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.620727   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.620923   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.621106   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.621240   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.621435   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.621601   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.621613   62670 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-979033 && echo "old-k8s-version-979033" | sudo tee /etc/hostname
	I0704 00:10:36.739589   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-979033
	
	I0704 00:10:36.739611   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.742386   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.742840   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.742867   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.743119   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.743348   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.743578   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.743745   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.743925   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.744142   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.744169   62670 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-979033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-979033/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-979033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:10:36.861561   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:10:36.861592   62670 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:10:36.861621   62670 buildroot.go:174] setting up certificates
	I0704 00:10:36.861632   62670 provision.go:84] configureAuth start
	I0704 00:10:36.861644   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.861928   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:36.864490   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.864975   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.865039   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.865137   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.867752   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.868268   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.868302   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.868483   62670 provision.go:143] copyHostCerts
	I0704 00:10:36.868547   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:10:36.868560   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:10:36.868613   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:10:36.868747   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:10:36.868756   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:10:36.868783   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:10:36.868840   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:10:36.868846   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:10:36.868863   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:10:36.868913   62670 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-979033 san=[127.0.0.1 192.168.72.59 localhost minikube old-k8s-version-979033]
	I0704 00:10:37.072741   62670 provision.go:177] copyRemoteCerts
	I0704 00:10:37.072795   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:10:37.072821   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.075592   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.075937   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.075968   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.076159   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.076362   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.076541   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.076671   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.162730   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:10:37.194232   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0704 00:10:37.220644   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0704 00:10:37.246298   62670 provision.go:87] duration metric: took 384.653259ms to configureAuth
	I0704 00:10:37.246327   62670 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:10:37.246529   62670 config.go:182] Loaded profile config "old-k8s-version-979033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0704 00:10:37.246594   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.249101   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.249491   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.249523   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.249774   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.249960   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.250140   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.250350   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.250591   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:37.250831   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:37.250856   62670 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:10:37.522551   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:10:37.522602   62670 machine.go:97] duration metric: took 1.013718943s to provisionDockerMachine
	I0704 00:10:37.522616   62670 start.go:293] postStartSetup for "old-k8s-version-979033" (driver="kvm2")
	I0704 00:10:37.522626   62670 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:10:37.522642   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.522965   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:10:37.522992   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.525421   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.525718   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.525745   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.525988   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.526250   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.526428   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.526668   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.607305   62670 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:10:37.612104   62670 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:10:37.612128   62670 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:10:37.612222   62670 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:10:37.612326   62670 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:10:37.612436   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:10:37.623597   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:37.650275   62670 start.go:296] duration metric: took 127.644599ms for postStartSetup
	I0704 00:10:37.650314   62670 fix.go:56] duration metric: took 18.50923577s for fixHost
	I0704 00:10:37.650333   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.652926   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.653270   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.653298   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.653433   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.653650   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.653836   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.653975   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.654124   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:37.654344   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:37.654356   62670 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:10:37.761309   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051837.729680185
	
	I0704 00:10:37.761333   62670 fix.go:216] guest clock: 1720051837.729680185
	I0704 00:10:37.761342   62670 fix.go:229] Guest: 2024-07-04 00:10:37.729680185 +0000 UTC Remote: 2024-07-04 00:10:37.650317632 +0000 UTC m=+244.428517044 (delta=79.362553ms)
	I0704 00:10:37.761363   62670 fix.go:200] guest clock delta is within tolerance: 79.362553ms
	I0704 00:10:37.761369   62670 start.go:83] releasing machines lock for "old-k8s-version-979033", held for 18.620323739s
	I0704 00:10:37.761421   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.761677   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:37.764522   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.764994   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.765019   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.765178   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.765760   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.765951   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.766036   62670 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:10:37.766085   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.766218   62670 ssh_runner.go:195] Run: cat /version.json
	I0704 00:10:37.766244   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.769092   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769468   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769854   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.769900   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769927   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.769944   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.770066   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.770286   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.770329   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.770443   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.770531   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.770587   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.770720   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.770832   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.873138   62670 ssh_runner.go:195] Run: systemctl --version
	I0704 00:10:37.879804   62670 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:10:38.028009   62670 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:10:38.034962   62670 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:10:38.035030   62670 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:10:38.057475   62670 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:10:38.057511   62670 start.go:494] detecting cgroup driver to use...
	I0704 00:10:38.057579   62670 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:10:38.074199   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:10:38.092880   62670 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:10:38.092932   62670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:10:38.106896   62670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:10:38.120887   62670 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:10:38.250139   62670 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:10:36.428467   62327 addons.go:510] duration metric: took 1.372366453s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0704 00:10:37.270816   62327 node_ready.go:53] node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:38.405228   62670 docker.go:233] disabling docker service ...
	I0704 00:10:38.405288   62670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:10:38.421706   62670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:10:38.438033   62670 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:10:38.586777   62670 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:10:38.721090   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:10:38.736951   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:10:38.757708   62670 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0704 00:10:38.757782   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.769723   62670 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:10:38.769796   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.783408   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.796103   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.809130   62670 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:10:38.822325   62670 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:10:38.837968   62670 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:10:38.838038   62670 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:10:38.854343   62670 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:10:38.866475   62670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:39.012506   62670 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:10:39.177203   62670 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:10:39.177289   62670 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:10:39.182557   62670 start.go:562] Will wait 60s for crictl version
	I0704 00:10:39.182643   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:39.187153   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:10:39.228774   62670 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:10:39.228851   62670 ssh_runner.go:195] Run: crio --version
	I0704 00:10:39.261929   62670 ssh_runner.go:195] Run: crio --version
	I0704 00:10:39.295133   62670 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0704 00:10:37.787100   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Start
	I0704 00:10:37.787281   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Ensuring networks are active...
	I0704 00:10:37.788053   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Ensuring network default is active
	I0704 00:10:37.788456   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Ensuring network mk-default-k8s-diff-port-995404 is active
	I0704 00:10:37.788965   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Getting domain xml...
	I0704 00:10:37.789842   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Creating domain...
	I0704 00:10:39.119468   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting to get IP...
	I0704 00:10:39.120490   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.121038   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.121123   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:39.121028   63853 retry.go:31] will retry after 205.838778ms: waiting for machine to come up
	I0704 00:10:39.328771   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.329372   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.329402   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:39.329310   63853 retry.go:31] will retry after 383.540497ms: waiting for machine to come up
	I0704 00:10:39.714729   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.715299   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.715333   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:39.715239   63853 retry.go:31] will retry after 349.888862ms: waiting for machine to come up
	I0704 00:10:40.067018   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.067629   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.067658   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:40.067518   63853 retry.go:31] will retry after 560.174181ms: waiting for machine to come up
	I0704 00:10:40.629108   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.629664   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.629700   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:40.629568   63853 retry.go:31] will retry after 655.876993ms: waiting for machine to come up
	I0704 00:10:41.287664   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:41.288213   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:41.288241   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:41.288163   63853 retry.go:31] will retry after 935.211949ms: waiting for machine to come up
	I0704 00:10:42.225062   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:42.225501   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:42.225530   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:42.225448   63853 retry.go:31] will retry after 1.176205334s: waiting for machine to come up
	I0704 00:10:39.296618   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:39.299265   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:39.299620   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:39.299648   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:39.299857   62670 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0704 00:10:39.304490   62670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:39.318619   62670 kubeadm.go:877] updating cluster {Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:10:39.318749   62670 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0704 00:10:39.318796   62670 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:39.372343   62670 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0704 00:10:39.372406   62670 ssh_runner.go:195] Run: which lz4
	I0704 00:10:39.376979   62670 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0704 00:10:39.382096   62670 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:10:39.382153   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0704 00:10:41.321459   62670 crio.go:462] duration metric: took 1.944522271s to copy over tarball
	I0704 00:10:41.321541   62670 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:10:39.272051   62327 node_ready.go:53] node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:41.776436   62327 node_ready.go:53] node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:42.272096   62327 node_ready.go:49] node "embed-certs-687975" has status "Ready":"True"
	I0704 00:10:42.272126   62327 node_ready.go:38] duration metric: took 7.004853642s for node "embed-certs-687975" to be "Ready" ...
	I0704 00:10:42.272139   62327 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:10:42.278133   62327 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.284704   62327 pod_ready.go:92] pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:42.284730   62327 pod_ready.go:81] duration metric: took 6.568077ms for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.284740   62327 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.292234   62327 pod_ready.go:92] pod "etcd-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:42.292263   62327 pod_ready.go:81] duration metric: took 7.515519ms for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.292276   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:43.403633   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:43.404251   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:43.404302   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:43.404180   63853 retry.go:31] will retry after 1.24046978s: waiting for machine to come up
	I0704 00:10:44.646709   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:44.647208   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:44.647234   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:44.647165   63853 retry.go:31] will retry after 1.631352494s: waiting for machine to come up
	I0704 00:10:46.280048   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:46.280543   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:46.280574   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:46.280492   63853 retry.go:31] will retry after 1.855805317s: waiting for machine to come up
	I0704 00:10:44.545333   62670 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.223758075s)
	I0704 00:10:44.545366   62670 crio.go:469] duration metric: took 3.223876515s to extract the tarball
	I0704 00:10:44.545404   62670 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:10:44.589369   62670 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:44.625017   62670 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0704 00:10:44.625055   62670 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0704 00:10:44.625143   62670 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:44.625161   62670 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.625191   62670 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:44.625372   62670 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.625393   62670 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.625146   62670 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.625223   62670 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.625700   62670 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0704 00:10:44.627479   62670 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.627544   62670 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.627580   62670 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:44.627586   62670 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0704 00:10:44.627589   62670 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.627580   62670 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.627641   62670 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:44.627665   62670 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.773014   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.821672   62670 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0704 00:10:44.821726   62670 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.821788   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.826460   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.841857   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.870213   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0704 00:10:44.895356   62670 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0704 00:10:44.895414   62670 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.895466   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.897160   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0704 00:10:44.901356   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.964305   62670 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0704 00:10:44.964356   62670 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0704 00:10:44.964404   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.964395   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0704 00:10:44.969048   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0704 00:10:44.982913   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.985558   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.990064   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.993167   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.015558   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0704 00:10:45.092189   62670 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0704 00:10:45.092237   62670 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:45.092309   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.104690   62670 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0704 00:10:45.104733   62670 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:45.104795   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.130208   62670 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0704 00:10:45.130254   62670 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.130271   62670 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0704 00:10:45.130295   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.130337   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:45.130297   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:45.130298   62670 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:45.130442   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.181491   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0704 00:10:45.181583   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.181598   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0704 00:10:45.181666   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:45.234459   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0704 00:10:45.234563   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0704 00:10:45.533133   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:45.680954   62670 cache_images.go:92] duration metric: took 1.055880702s to LoadCachedImages
	W0704 00:10:45.681039   62670 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0704 00:10:45.681053   62670 kubeadm.go:928] updating node { 192.168.72.59 8443 v1.20.0 crio true true} ...
	I0704 00:10:45.681176   62670 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-979033 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:10:45.681268   62670 ssh_runner.go:195] Run: crio config
	I0704 00:10:45.734964   62670 cni.go:84] Creating CNI manager for ""
	I0704 00:10:45.734992   62670 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:10:45.735009   62670 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:10:45.735034   62670 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.59 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-979033 NodeName:old-k8s-version-979033 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0704 00:10:45.735206   62670 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-979033"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.59"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:10:45.735287   62670 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0704 00:10:45.747614   62670 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:10:45.747700   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:10:45.759063   62670 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0704 00:10:45.778439   62670 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:10:45.798877   62670 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0704 00:10:45.820513   62670 ssh_runner.go:195] Run: grep 192.168.72.59	control-plane.minikube.internal$ /etc/hosts
	I0704 00:10:45.825346   62670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.59	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:45.839720   62670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:45.957373   62670 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:10:45.975621   62670 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033 for IP: 192.168.72.59
	I0704 00:10:45.975645   62670 certs.go:194] generating shared ca certs ...
	I0704 00:10:45.975671   62670 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:45.975845   62670 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:10:45.975940   62670 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:10:45.975956   62670 certs.go:256] generating profile certs ...
	I0704 00:10:45.976086   62670 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.key
	I0704 00:10:45.976184   62670 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key.03500654
	I0704 00:10:45.976236   62670 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key
	I0704 00:10:45.976376   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:10:45.976416   62670 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:10:45.976430   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:10:45.976468   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:10:45.976506   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:10:45.976541   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:10:45.976601   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:45.977480   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:10:46.016391   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:10:46.062987   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:10:46.103769   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:10:46.143109   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0704 00:10:46.193832   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:10:46.223781   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:10:46.263822   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0704 00:10:46.298657   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:10:46.325454   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:10:46.351804   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:10:46.379279   62670 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:10:46.397706   62670 ssh_runner.go:195] Run: openssl version
	I0704 00:10:46.404638   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:10:46.416778   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.422402   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.422475   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.428803   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:10:46.441082   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:10:46.453211   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.458313   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.458383   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.464706   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:10:46.476888   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:10:46.489083   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.494780   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.494856   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.501321   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:10:46.513595   62670 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:10:46.518722   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:10:46.525758   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:10:46.532590   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:10:46.540129   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:10:46.547113   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:10:46.553840   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0704 00:10:46.560502   62670 kubeadm.go:391] StartCluster: {Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:10:46.560590   62670 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:10:46.560656   62670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:46.605334   62670 cri.go:89] found id: ""
	I0704 00:10:46.605411   62670 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:10:46.619333   62670 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:10:46.619356   62670 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:10:46.619362   62670 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:10:46.619407   62670 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:10:46.631203   62670 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:10:46.632519   62670 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-979033" does not appear in /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:10:46.633417   62670 kubeconfig.go:62] /home/jenkins/minikube-integration/18998-9396/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-979033" cluster setting kubeconfig missing "old-k8s-version-979033" context setting]
	I0704 00:10:46.634783   62670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:46.637143   62670 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:10:46.649250   62670 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.59
	I0704 00:10:46.649285   62670 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:10:46.649297   62670 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:10:46.649351   62670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:46.691240   62670 cri.go:89] found id: ""
	I0704 00:10:46.691317   62670 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:10:46.710687   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:10:46.721650   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:10:46.721675   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:10:46.721728   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:10:46.731444   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:10:46.731517   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:10:46.741556   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:10:46.751544   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:10:46.751600   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:10:46.764187   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:10:46.775160   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:10:46.775224   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:10:46.785686   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:10:46.795475   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:10:46.795545   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:10:46.806960   62670 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:10:46.818355   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:46.984379   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:47.639953   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:47.883263   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:48.001200   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:48.116034   62670 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:10:48.116121   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:45.284973   62327 pod_ready.go:102] pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:46.800145   62327 pod_ready.go:92] pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.800170   62327 pod_ready.go:81] duration metric: took 4.507886037s for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.800179   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.805577   62327 pod_ready.go:92] pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.805599   62327 pod_ready.go:81] duration metric: took 5.413826ms for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.805611   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.811066   62327 pod_ready.go:92] pod "kube-proxy-9phtm" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.811085   62327 pod_ready.go:81] duration metric: took 5.469666ms for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.811094   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.815670   62327 pod_ready.go:92] pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.815690   62327 pod_ready.go:81] duration metric: took 4.589606ms for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.815700   62327 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:48.822325   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:48.137949   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:48.138359   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:48.138387   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:48.138307   63853 retry.go:31] will retry after 2.765241886s: waiting for machine to come up
	I0704 00:10:50.905039   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:50.905688   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:50.905724   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:50.905624   63853 retry.go:31] will retry after 3.145956682s: waiting for machine to come up
	I0704 00:10:48.617078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:49.116898   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:49.617127   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.116442   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.617078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:51.117096   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:51.617176   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:52.116333   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:52.616675   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:53.116408   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.822990   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:52.823438   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:54.053147   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:54.053593   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:54.053630   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:54.053544   63853 retry.go:31] will retry after 4.352124904s: waiting for machine to come up
	I0704 00:10:53.616873   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:54.116661   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:54.616248   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:55.116316   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:55.616460   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:56.116311   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:56.616502   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:57.116856   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:57.616948   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:58.117055   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:54.829173   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:57.322196   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:59.628966   62043 start.go:364] duration metric: took 56.236390336s to acquireMachinesLock for "no-preload-317739"
	I0704 00:10:59.629020   62043 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:10:59.629029   62043 fix.go:54] fixHost starting: 
	I0704 00:10:59.629441   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:59.629483   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:59.649272   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0704 00:10:59.649745   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:59.650216   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:10:59.650245   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:59.650615   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:59.650807   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:10:59.650944   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:10:59.652724   62043 fix.go:112] recreateIfNeeded on no-preload-317739: state=Stopped err=<nil>
	I0704 00:10:59.652750   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	W0704 00:10:59.652901   62043 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:10:59.655010   62043 out.go:177] * Restarting existing kvm2 VM for "no-preload-317739" ...
	I0704 00:10:59.656335   62043 main.go:141] libmachine: (no-preload-317739) Calling .Start
	I0704 00:10:59.656519   62043 main.go:141] libmachine: (no-preload-317739) Ensuring networks are active...
	I0704 00:10:59.657343   62043 main.go:141] libmachine: (no-preload-317739) Ensuring network default is active
	I0704 00:10:59.657714   62043 main.go:141] libmachine: (no-preload-317739) Ensuring network mk-no-preload-317739 is active
	I0704 00:10:59.658209   62043 main.go:141] libmachine: (no-preload-317739) Getting domain xml...
	I0704 00:10:59.658812   62043 main.go:141] libmachine: (no-preload-317739) Creating domain...
	I0704 00:10:58.407312   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.407865   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Found IP for machine: 192.168.50.164
	I0704 00:10:58.407924   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has current primary IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.407935   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Reserving static IP address...
	I0704 00:10:58.408356   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Reserved static IP address: 192.168.50.164
	I0704 00:10:58.408378   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for SSH to be available...
	I0704 00:10:58.408396   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-995404", mac: "52:54:00:ea:f6:7c", ip: "192.168.50.164"} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.408414   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | skip adding static IP to network mk-default-k8s-diff-port-995404 - found existing host DHCP lease matching {name: "default-k8s-diff-port-995404", mac: "52:54:00:ea:f6:7c", ip: "192.168.50.164"}
	I0704 00:10:58.408423   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Getting to WaitForSSH function...
	I0704 00:10:58.410737   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.411074   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.411103   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.411308   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Using SSH client type: external
	I0704 00:10:58.411344   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa (-rw-------)
	I0704 00:10:58.411384   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:10:58.411425   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | About to run SSH command:
	I0704 00:10:58.411445   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | exit 0
	I0704 00:10:58.532351   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | SSH cmd err, output: <nil>: 
	I0704 00:10:58.532719   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetConfigRaw
	I0704 00:10:58.533366   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:10:58.536176   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.536613   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.536640   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.536886   62905 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/config.json ...
	I0704 00:10:58.537129   62905 machine.go:94] provisionDockerMachine start ...
	I0704 00:10:58.537149   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:58.537389   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.539581   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.539946   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.539976   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.540099   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.540327   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.540549   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.540785   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.540976   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:58.541155   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:58.541166   62905 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:10:58.644667   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:10:58.644716   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetMachineName
	I0704 00:10:58.644986   62905 buildroot.go:166] provisioning hostname "default-k8s-diff-port-995404"
	I0704 00:10:58.645012   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetMachineName
	I0704 00:10:58.645256   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.648091   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.648519   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.648549   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.648691   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.648975   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.649174   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.649393   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.649608   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:58.649831   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:58.649857   62905 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-995404 && echo "default-k8s-diff-port-995404" | sudo tee /etc/hostname
	I0704 00:10:58.765130   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-995404
	
	I0704 00:10:58.765164   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.768571   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.768933   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.768961   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.769127   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.769343   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.769516   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.769675   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.769843   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:58.770014   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:58.770030   62905 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-995404' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-995404/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-995404' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:10:58.877852   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:10:58.877885   62905 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:10:58.877942   62905 buildroot.go:174] setting up certificates
	I0704 00:10:58.877955   62905 provision.go:84] configureAuth start
	I0704 00:10:58.877968   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetMachineName
	I0704 00:10:58.878318   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:10:58.880988   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.881321   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.881349   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.881516   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.883893   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.884213   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.884237   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.884398   62905 provision.go:143] copyHostCerts
	I0704 00:10:58.884459   62905 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:10:58.884468   62905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:10:58.884523   62905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:10:58.884628   62905 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:10:58.884639   62905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:10:58.884672   62905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:10:58.884747   62905 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:10:58.884757   62905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:10:58.884782   62905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:10:58.884838   62905 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-995404 san=[127.0.0.1 192.168.50.164 default-k8s-diff-port-995404 localhost minikube]
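The server certificate above is issued from the shared minikube CA with the SAN set shown in the log (127.0.0.1, 192.168.50.164, the node hostname, localhost, minikube). A minimal Go sketch of issuing such a certificate follows; as a simplification it creates a throwaway CA inline, whereas minikube reuses the ca.pem/ca-key.pem pair from its certs directory, and all names are taken from the log line above.

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func check(err error) {
		if err != nil {
			log.Fatal(err)
		}
	}
	
	func main() {
		// Assumption: a throwaway CA is generated inline for brevity.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		check(err)
		caCert, err := x509.ParseCertificate(caDER)
		check(err)
	
		// Server certificate carrying the SANs from the log line above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-995404"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.164")},
			DNSNames:     []string{"default-k8s-diff-port-995404", "localhost", "minikube"},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		check(err)
		check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	}
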
	I0704 00:10:58.960337   62905 provision.go:177] copyRemoteCerts
	I0704 00:10:58.960408   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:10:58.960442   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.962980   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.963389   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.963416   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.963585   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.963754   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.963905   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.964040   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.042670   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0704 00:10:59.073047   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:10:59.100579   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0704 00:10:59.127978   62905 provision.go:87] duration metric: took 250.007645ms to configureAuth
	I0704 00:10:59.128006   62905 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:10:59.128261   62905 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:10:59.128363   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.131470   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.131852   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.131906   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.132130   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.132405   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.132598   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.132771   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.132969   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:59.133176   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:59.133197   62905 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:10:59.393756   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:10:59.393791   62905 machine.go:97] duration metric: took 856.647704ms to provisionDockerMachine
	I0704 00:10:59.393808   62905 start.go:293] postStartSetup for "default-k8s-diff-port-995404" (driver="kvm2")
	I0704 00:10:59.393822   62905 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:10:59.393845   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.394143   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:10:59.394170   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.396996   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.397335   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.397366   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.397556   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.397768   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.397950   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.398094   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.479476   62905 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:10:59.484191   62905 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:10:59.484220   62905 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:10:59.484291   62905 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:10:59.484395   62905 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:10:59.484540   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:10:59.495504   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:59.520952   62905 start.go:296] duration metric: took 127.128284ms for postStartSetup
	I0704 00:10:59.521006   62905 fix.go:56] duration metric: took 21.75944045s for fixHost
	I0704 00:10:59.521029   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.523896   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.524210   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.524243   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.524360   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.524586   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.524777   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.524975   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.525166   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:59.525322   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:59.525339   62905 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0704 00:10:59.628816   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051859.612598562
	
	I0704 00:10:59.628848   62905 fix.go:216] guest clock: 1720051859.612598562
	I0704 00:10:59.628857   62905 fix.go:229] Guest: 2024-07-04 00:10:59.612598562 +0000 UTC Remote: 2024-07-04 00:10:59.52101038 +0000 UTC m=+237.085876440 (delta=91.588182ms)
	I0704 00:10:59.628881   62905 fix.go:200] guest clock delta is within tolerance: 91.588182ms
	I0704 00:10:59.628887   62905 start.go:83] releasing machines lock for "default-k8s-diff-port-995404", held for 21.867375782s
	I0704 00:10:59.628917   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.629243   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:10:59.632256   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.632628   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.632656   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.632816   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.633343   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.633561   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.633655   62905 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:10:59.633693   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.633774   62905 ssh_runner.go:195] Run: cat /version.json
	I0704 00:10:59.633792   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.636540   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.636660   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.636943   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.636972   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.637079   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.637097   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.637107   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.637292   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.637295   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.637491   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.637498   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.637650   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.637654   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.637779   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.713988   62905 ssh_runner.go:195] Run: systemctl --version
	I0704 00:10:59.743264   62905 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:10:59.895553   62905 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:10:59.902538   62905 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:10:59.902604   62905 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:10:59.919858   62905 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:10:59.919899   62905 start.go:494] detecting cgroup driver to use...
	I0704 00:10:59.919964   62905 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:10:59.940739   62905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:10:59.961053   62905 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:10:59.961114   62905 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:10:59.980549   62905 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:11:00.002843   62905 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:11:00.133319   62905 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:11:00.307416   62905 docker.go:233] disabling docker service ...
	I0704 00:11:00.307484   62905 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:11:00.325714   62905 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:11:00.342008   62905 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:11:00.469418   62905 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:11:00.594775   62905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:11:00.612900   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:11:00.636854   62905 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:11:00.636912   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.650940   62905 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:11:00.651007   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.664849   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.678200   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.691929   62905 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:11:00.708729   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.721874   62905 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.747189   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
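The sed edits above all target /etc/crio/crio.conf.d/02-crio.conf and converge on a small set of values: the pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A sketch that writes an equivalent drop-in directly; the file content is reconstructed from the commands in the log rather than copied from a VM, and the local path is an assumption.

	package main
	
	import (
		"log"
		"os"
	)
	
	func main() {
		// End state the sed commands above converge on.
		dropIn := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	`
		// Assumed local path for the sketch; on the node this is
		// /etc/crio/crio.conf.d/02-crio.conf and is edited over SSH with sudo.
		if err := os.WriteFile("02-crio.conf", []byte(dropIn), 0o644); err != nil {
			log.Fatal(err)
		}
	}
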
	I0704 00:11:00.766255   62905 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:11:00.778139   62905 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:11:00.778208   62905 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:11:00.794170   62905 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:11:00.805772   62905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:00.945526   62905 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:11:01.095767   62905 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:11:01.095849   62905 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:11:01.101337   62905 start.go:562] Will wait 60s for crictl version
	I0704 00:11:01.101410   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:11:01.105792   62905 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:11:01.149911   62905 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:11:01.149983   62905 ssh_runner.go:195] Run: crio --version
	I0704 00:11:01.183494   62905 ssh_runner.go:195] Run: crio --version
	I0704 00:11:01.221773   62905 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:11:01.223142   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:11:01.226142   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:01.226595   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:01.226626   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:01.227009   62905 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0704 00:11:01.231704   62905 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
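The one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the gateway address: drop any existing entry for the name, append the new mapping, and copy the result back. The same logic as a local Go sketch; minikube runs the bash version over SSH with sudo, and the file path here is an assumed local copy.

	package main
	
	import (
		"log"
		"os"
		"strings"
	)
	
	// upsertHostsEntry removes any line ending in "\t"+name from the hosts file
	// and appends "ip\tname", mirroring the bash pipeline in the log.
	func upsertHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		var kept []string
		for _, line := range lines {
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}
	
	func main() {
		if err := upsertHostsEntry("hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
			log.Fatal(err)
		}
	}
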
	I0704 00:11:01.246258   62905 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:11:01.246373   62905 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:11:01.246414   62905 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:11:01.288814   62905 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:11:01.288885   62905 ssh_runner.go:195] Run: which lz4
	I0704 00:11:01.293591   62905 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0704 00:11:01.298567   62905 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:11:01.298606   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0704 00:10:58.617090   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.116577   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.617087   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:00.117110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:00.617014   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:01.117093   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:01.616271   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:02.116809   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:02.617098   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:03.117166   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.323461   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:01.324078   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:03.824174   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:00.942384   62043 main.go:141] libmachine: (no-preload-317739) Waiting to get IP...
	I0704 00:11:00.943186   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:00.943675   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:00.943756   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:00.943653   64017 retry.go:31] will retry after 249.292607ms: waiting for machine to come up
	I0704 00:11:01.194377   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:01.194895   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:01.194954   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:01.194870   64017 retry.go:31] will retry after 262.613081ms: waiting for machine to come up
	I0704 00:11:01.459428   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:01.460003   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:01.460038   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:01.459944   64017 retry.go:31] will retry after 478.141622ms: waiting for machine to come up
	I0704 00:11:01.939357   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:01.939939   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:01.939974   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:01.939898   64017 retry.go:31] will retry after 536.153389ms: waiting for machine to come up
	I0704 00:11:02.477947   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:02.478481   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:02.478506   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:02.478420   64017 retry.go:31] will retry after 673.23866ms: waiting for machine to come up
	I0704 00:11:03.153142   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:03.153668   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:03.153700   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:03.153615   64017 retry.go:31] will retry after 826.785177ms: waiting for machine to come up
	I0704 00:11:03.981781   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:03.982279   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:03.982313   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:03.982215   64017 retry.go:31] will retry after 834.05017ms: waiting for machine to come up
	I0704 00:11:04.817689   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:04.818294   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:04.818323   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:04.818249   64017 retry.go:31] will retry after 1.153846982s: waiting for machine to come up
	I0704 00:11:02.979209   62905 crio.go:462] duration metric: took 1.685660087s to copy over tarball
	I0704 00:11:02.979307   62905 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:11:05.406788   62905 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.427439702s)
	I0704 00:11:05.406816   62905 crio.go:469] duration metric: took 2.427578287s to extract the tarball
	I0704 00:11:05.406823   62905 ssh_runner.go:146] rm: /preloaded.tar.lz4
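Extraction of the preload tarball is a plain tar invocation with lz4 decompression and extended-attribute preservation, followed by removing the tarball. A local Go sketch of the same step; the paths are the ones from the log, and running it for real needs root plus an lz4 binary on PATH.

	package main
	
	import (
		"log"
		"os"
		"os/exec"
	)
	
	func main() {
		tarball := "/preloaded.tar.lz4" // copied over SSH earlier in the log
		// Same flags as the logged command: keep xattrs (incl. security.capability),
		// decompress with lz4, extract under /var.
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
		// The tarball is deleted once the images are unpacked (needs root on the node).
		if err := os.Remove(tarball); err != nil {
			log.Fatal(err)
		}
	}
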
	I0704 00:11:05.448710   62905 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:11:05.498336   62905 crio.go:514] all images are preloaded for cri-o runtime.
	I0704 00:11:05.498367   62905 cache_images.go:84] Images are preloaded, skipping loading
	I0704 00:11:05.498375   62905 kubeadm.go:928] updating node { 192.168.50.164 8444 v1.30.2 crio true true} ...
	I0704 00:11:05.498487   62905 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-995404 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:11:05.498549   62905 ssh_runner.go:195] Run: crio config
	I0704 00:11:05.552676   62905 cni.go:84] Creating CNI manager for ""
	I0704 00:11:05.552706   62905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:05.552717   62905 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:11:05.552738   62905 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.164 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-995404 NodeName:default-k8s-diff-port-995404 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:11:05.552895   62905 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.164
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-995404"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
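
The YAML above is rendered from the option set logged at kubeadm.go:181 (node name, advertise address, bind port, pod subnet, and so on). A trimmed-down sketch of that rendering with text/template; the struct fields and the template itself are invented for illustration, and minikube's real template carries many more knobs.

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Invented parameter struct for the sketch; values taken from the log.
	type kubeadmParams struct {
		NodeName         string
		AdvertiseAddress string
		BindPort         int
		PodSubnet        string
		ServiceSubnet    string
	}
	
	const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`
	
	func main() {
		t := template.Must(template.New("kubeadm").Parse(initTmpl))
		p := kubeadmParams{
			NodeName:         "default-k8s-diff-port-995404",
			AdvertiseAddress: "192.168.50.164",
			BindPort:         8444,
			PodSubnet:        "10.244.0.0/16",
			ServiceSubnet:    "10.96.0.0/12",
		}
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}
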
	
	I0704 00:11:05.552966   62905 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:11:05.564067   62905 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:11:05.564149   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:11:05.574991   62905 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0704 00:11:05.597644   62905 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:11:05.619456   62905 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0704 00:11:05.640655   62905 ssh_runner.go:195] Run: grep 192.168.50.164	control-plane.minikube.internal$ /etc/hosts
	I0704 00:11:05.644975   62905 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:05.659570   62905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:05.800862   62905 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:11:05.821044   62905 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404 for IP: 192.168.50.164
	I0704 00:11:05.821068   62905 certs.go:194] generating shared ca certs ...
	I0704 00:11:05.821087   62905 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:05.821258   62905 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:11:05.821312   62905 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:11:05.821324   62905 certs.go:256] generating profile certs ...
	I0704 00:11:05.821424   62905 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/client.key
	I0704 00:11:05.821496   62905 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/apiserver.key.4c35c707
	I0704 00:11:05.821547   62905 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/proxy-client.key
	I0704 00:11:05.821689   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:11:05.821729   62905 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:11:05.821741   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:11:05.821773   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:11:05.821800   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:11:05.821831   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:11:05.821893   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:11:05.822753   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:11:05.867477   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:11:05.914405   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:11:05.952321   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:11:05.989578   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0704 00:11:06.031270   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:11:06.067171   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:11:06.096850   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0704 00:11:06.127959   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:11:06.156780   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:11:06.187472   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:11:06.216078   62905 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:11:06.239490   62905 ssh_runner.go:195] Run: openssl version
	I0704 00:11:06.246358   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:11:06.259420   62905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:11:06.266320   62905 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:11:06.266394   62905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:11:06.273098   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:11:06.285864   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:11:06.298505   62905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:06.303642   62905 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:06.303734   62905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:06.310459   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:11:06.325238   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:11:06.342534   62905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:11:06.349585   62905 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:11:06.349659   62905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:11:06.358043   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
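Each CA bundle installed above follows the same pattern: place the PEM under /usr/share/ca-certificates, ask openssl for its subject hash, and link /etc/ssl/certs/<hash>.0 to it so OpenSSL's hashed-directory lookup finds it. A local sketch of the hash-and-link step; the file names here are hypothetical, and the log performs the equivalent over SSH with sudo.

	package main
	
	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkBySubjectHash mirrors `openssl x509 -hash -noout -in cert` followed by
	// `ln -fs cert dir/<hash>.0`.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		os.Remove(link) // -f semantics: replace an existing link
		return os.Symlink(certPath, link)
	}
	
	func main() {
		// Hypothetical local paths for the sketch.
		if err := linkBySubjectHash("165742.pem", "."); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked")
	}
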
	I0704 00:11:06.374741   62905 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:11:06.380246   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:11:06.387593   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:11:06.394954   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:11:06.402600   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:11:06.409731   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:11:06.416688   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
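(Aside, not part of the test output: each `openssl x509 -checkend 86400` call above simply asks whether the certificate will still be valid in 24 hours. The same check can be done natively, as in this small sketch using crypto/x509; the path is a placeholder taken from the log.)

// checkend.go -- illustrative sketch: report whether a certificate remains
// valid for at least the given window, mirroring `openssl x509 -checkend`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func checkend(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Valid only if "now + window" is still before the certificate's expiry.
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkend("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}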
	I0704 00:11:06.423435   62905 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:11:06.423559   62905 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:11:06.423620   62905 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:06.470763   62905 cri.go:89] found id: ""
	I0704 00:11:06.470846   62905 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:11:06.482587   62905 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:11:06.482611   62905 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:11:06.482617   62905 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:11:06.482667   62905 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:11:06.497553   62905 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:11:06.498625   62905 kubeconfig.go:125] found "default-k8s-diff-port-995404" server: "https://192.168.50.164:8444"
	I0704 00:11:06.500884   62905 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:11:06.514955   62905 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.164
	I0704 00:11:06.514990   62905 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:11:06.515004   62905 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:11:06.515063   62905 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:06.560079   62905 cri.go:89] found id: ""
	I0704 00:11:06.560153   62905 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:11:06.579839   62905 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:11:06.591817   62905 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:11:06.591845   62905 kubeadm.go:156] found existing configuration files:
	
	I0704 00:11:06.591939   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0704 00:11:06.602820   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:11:06.602891   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:11:06.615114   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0704 00:11:06.626812   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:11:06.626906   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:11:06.638990   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0704 00:11:06.650344   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:11:06.650412   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:11:06.662736   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0704 00:11:06.673392   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:11:06.673468   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
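(Aside, not part of the test output: the grep/rm sequence above checks each kubeconfig-style file under /etc/kubernetes for the expected control-plane URL and removes any file that is missing it, so kubeadm regenerates a clean copy. The sketch below shows that idea only; it is not minikube's code, and the endpoint and file list are simply the values visible in the log.)

// clean_stale_config.go -- illustrative sketch of the stale-config cleanup step.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanStale(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: drop it so `kubeadm init phase kubeconfig`
			// writes a fresh one.
			_ = os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}
}

func main() {
	cleanStale("https://control-plane.minikube.internal:8444", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}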
	I0704 00:11:06.684908   62905 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:11:06.696008   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:06.827071   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:03.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:04.117137   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:04.616945   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:05.117085   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:05.616894   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.116767   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.616746   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:07.116615   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:07.616302   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:08.116699   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.324083   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:08.832523   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:05.974211   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:05.974953   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:05.974981   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:05.974853   64017 retry.go:31] will retry after 1.513213206s: waiting for machine to come up
	I0704 00:11:07.489878   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:07.490415   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:07.490447   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:07.490366   64017 retry.go:31] will retry after 1.861027199s: waiting for machine to come up
	I0704 00:11:09.353265   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:09.353877   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:09.353909   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:09.353788   64017 retry.go:31] will retry after 2.788986438s: waiting for machine to come up
	I0704 00:11:07.860520   62905 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.033413742s)
	I0704 00:11:07.860555   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:08.112931   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:08.199561   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:08.297827   62905 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:11:08.297919   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:08.798666   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.299001   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.326939   62905 api_server.go:72] duration metric: took 1.029121669s to wait for apiserver process to appear ...
	I0704 00:11:09.326980   62905 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:11:09.327006   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:09.327687   62905 api_server.go:269] stopped: https://192.168.50.164:8444/healthz: Get "https://192.168.50.164:8444/healthz": dial tcp 192.168.50.164:8444: connect: connection refused
	I0704 00:11:09.827140   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:12.356043   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:11:12.356074   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:11:12.356090   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:12.431816   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:12.431868   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:08.617011   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.116544   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.617105   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:10.117154   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:10.616678   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:11.117137   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:11.617077   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.116897   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.617090   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:13.116877   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.827129   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:12.833217   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:12.833244   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:13.327458   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:13.335182   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:13.335216   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:13.827833   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:13.833899   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 200:
	ok
	I0704 00:11:13.845708   62905 api_server.go:141] control plane version: v1.30.2
	I0704 00:11:13.845742   62905 api_server.go:131] duration metric: took 4.518754781s to wait for apiserver health ...
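(Aside, not part of the test output: the preceding block shows the usual restart pattern -- poll /healthz, tolerate the transient 403 from the anonymous user and the 500s while post-start hooks finish, and stop once the endpoint returns 200 "ok". A minimal polling sketch is below; it is illustrative only, TLS verification is skipped purely to keep it short, and the URL is the one from the log.)

// wait_healthz.go -- illustrative sketch: poll the apiserver healthz endpoint
// until it returns 200 or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
			// 403/500 are expected while bootstrap hooks are still running.
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.50.164:8444/healthz", 4*time.Minute))
}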
	I0704 00:11:13.845754   62905 cni.go:84] Creating CNI manager for ""
	I0704 00:11:13.845763   62905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:13.847527   62905 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:11:11.322070   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:13.325898   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:13.848990   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:11:13.866061   62905 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0704 00:11:13.895651   62905 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:11:13.907155   62905 system_pods.go:59] 8 kube-system pods found
	I0704 00:11:13.907202   62905 system_pods.go:61] "coredns-7db6d8ff4d-jmq4s" [f9725f92-7635-4111-bf63-66dbef0155b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0704 00:11:13.907214   62905 system_pods.go:61] "etcd-default-k8s-diff-port-995404" [d5065c53-cda8-4c79-9d88-de10341356a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0704 00:11:13.907225   62905 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-995404" [81395c0d-d19c-4d3d-8935-c35a9507abdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0704 00:11:13.907236   62905 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-995404" [36828d21-843b-409c-a0b3-60293cb50c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0704 00:11:13.907245   62905 system_pods.go:61] "kube-proxy-pplqq" [3b74a8c2-1e91-449d-9be9-8891459dccbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0704 00:11:13.907255   62905 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-995404" [e87cc576-7a8d-43dd-9778-b13d751976be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0704 00:11:13.907267   62905 system_pods.go:61] "metrics-server-569cc877fc-v8qw2" [d6a67fb7-5004-4c93-9023-fc470f786ae9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:11:13.907278   62905 system_pods.go:61] "storage-provisioner" [3adc3ff6-282f-4f53-879f-c73d71c76b74] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0704 00:11:13.907290   62905 system_pods.go:74] duration metric: took 11.616438ms to wait for pod list to return data ...
	I0704 00:11:13.907304   62905 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:11:13.911071   62905 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:11:13.911108   62905 node_conditions.go:123] node cpu capacity is 2
	I0704 00:11:13.911121   62905 node_conditions.go:105] duration metric: took 3.808665ms to run NodePressure ...
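(Aside, not part of the test output: the NodePressure check above reads node capacity -- ephemeral storage and CPU -- from the node status. A client-go sketch of the same read is below; it is illustrative only, and the kubeconfig path is the in-VM one from the log, which would differ when run from the host.)

// node_capacity.go -- illustrative sketch: list node CPU and ephemeral-storage capacity.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}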
	I0704 00:11:13.911142   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:14.227778   62905 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0704 00:11:14.232972   62905 kubeadm.go:733] kubelet initialised
	I0704 00:11:14.232999   62905 kubeadm.go:734] duration metric: took 5.196343ms waiting for restarted kubelet to initialise ...
	I0704 00:11:14.233008   62905 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:11:14.239587   62905 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.248503   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.248527   62905 pod_ready.go:81] duration metric: took 8.915991ms for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.248536   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.248546   62905 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.252808   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.252833   62905 pod_ready.go:81] duration metric: took 4.278735ms for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.252844   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.252850   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.257839   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.257865   62905 pod_ready.go:81] duration metric: took 5.008527ms for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.257874   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.257881   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.300453   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.300496   62905 pod_ready.go:81] duration metric: took 42.606835ms for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.300514   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.300532   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.699049   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-proxy-pplqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.699081   62905 pod_ready.go:81] duration metric: took 398.532074ms for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.699091   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-proxy-pplqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.699098   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:15.099751   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.099781   62905 pod_ready.go:81] duration metric: took 400.673785ms for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:15.099794   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.099802   62905 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:15.499381   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.499415   62905 pod_ready.go:81] duration metric: took 399.604282ms for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:15.499430   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.499440   62905 pod_ready.go:38] duration metric: took 1.266419771s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:11:15.499472   62905 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:11:15.512486   62905 ops.go:34] apiserver oom_adj: -16
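(Aside, not part of the test output: the oom_adj check above reads /proc/<pid>/oom_adj for the newest kube-apiserver process to confirm the kernel's OOM killer will strongly avoid it (-16 here). A small sketch of the same check follows; it is illustrative only.)

// oom_adj.go -- illustrative sketch of `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func readOOMAdj(process string) (string, error) {
	// `pgrep -n` returns the newest matching PID, like the log's pgrep usage.
	out, err := exec.Command("pgrep", "-n", process).Output()
	if err != nil {
		return "", fmt.Errorf("process %q not found: %w", process, err)
	}
	pid := strings.TrimSpace(string(out))
	val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(val)), nil
}

func main() {
	adj, err := readOOMAdj("kube-apiserver")
	fmt.Println(adj, err)
}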
	I0704 00:11:15.512519   62905 kubeadm.go:591] duration metric: took 9.029896614s to restartPrimaryControlPlane
	I0704 00:11:15.512530   62905 kubeadm.go:393] duration metric: took 9.089103352s to StartCluster
	I0704 00:11:15.512545   62905 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:15.512620   62905 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:11:15.514491   62905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:15.514770   62905 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:11:15.514886   62905 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:11:15.514995   62905 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-995404"
	I0704 00:11:15.515051   62905 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-995404"
	I0704 00:11:15.515054   62905 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	W0704 00:11:15.515058   62905 addons.go:243] addon storage-provisioner should already be in state true
	I0704 00:11:15.515045   62905 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-995404"
	I0704 00:11:15.515098   62905 host.go:66] Checking if "default-k8s-diff-port-995404" exists ...
	I0704 00:11:15.515108   62905 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-995404"
	I0704 00:11:15.515100   62905 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-995404"
	I0704 00:11:15.515176   62905 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-995404"
	W0704 00:11:15.515196   62905 addons.go:243] addon metrics-server should already be in state true
	I0704 00:11:15.515258   62905 host.go:66] Checking if "default-k8s-diff-port-995404" exists ...
	I0704 00:11:15.515473   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.515517   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.515554   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.515521   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.515731   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.515773   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.517021   62905 out.go:177] * Verifying Kubernetes components...
	I0704 00:11:15.518682   62905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:15.532184   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0704 00:11:15.532716   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.533287   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.533318   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.533688   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.533710   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36907
	I0704 00:11:15.533894   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.534143   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.534747   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.534774   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.535162   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.535835   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.535895   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.536774   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37887
	I0704 00:11:15.537162   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.537690   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.537702   62905 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-995404"
	W0704 00:11:15.537715   62905 addons.go:243] addon default-storageclass should already be in state true
	I0704 00:11:15.537719   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.537743   62905 host.go:66] Checking if "default-k8s-diff-port-995404" exists ...
	I0704 00:11:15.538134   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.538147   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.538211   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.538756   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.538789   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.554800   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44851
	I0704 00:11:15.554820   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33429
	I0704 00:11:15.555279   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.555417   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.555988   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.556006   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.556255   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.556276   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.556445   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.556628   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.556637   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.556819   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.558057   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38911
	I0704 00:11:15.558381   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.558768   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:11:15.558842   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:11:15.558932   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.558950   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.559179   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.559587   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.559610   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.561573   62905 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:15.561578   62905 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0704 00:11:12.146246   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:12.146817   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:12.146844   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:12.146774   64017 retry.go:31] will retry after 2.705005802s: waiting for machine to come up
	I0704 00:11:14.853545   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:14.854045   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:14.854070   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:14.854001   64017 retry.go:31] will retry after 3.923203683s: waiting for machine to come up
	I0704 00:11:15.563208   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0704 00:11:15.563233   62905 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0704 00:11:15.563259   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:11:15.563282   62905 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:11:15.563297   62905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:11:15.563312   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:11:15.567358   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.567365   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.567758   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:15.567789   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.567823   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:15.567841   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.568374   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:11:15.568472   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:11:15.568596   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:11:15.568652   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:11:15.568744   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:11:15.568833   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:11:15.568853   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:11:15.568955   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:11:15.578317   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40115
	I0704 00:11:15.578737   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.579322   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.579343   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.579673   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.579864   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.582114   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:11:15.582330   62905 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:11:15.582346   62905 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:11:15.582363   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:11:15.585542   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.585917   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:15.585964   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.586130   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:11:15.586317   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:11:15.586503   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:11:15.586677   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:11:15.713704   62905 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:11:15.734147   62905 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-995404" to be "Ready" ...
	I0704 00:11:15.837690   62905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:11:15.858615   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0704 00:11:15.858645   62905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0704 00:11:15.883792   62905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:11:15.904371   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0704 00:11:15.904394   62905 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0704 00:11:15.947164   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:11:15.947205   62905 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0704 00:11:15.976721   62905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:11:16.926851   62905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.089126041s)
	I0704 00:11:16.926885   62905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.043064078s)
	I0704 00:11:16.926909   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.926920   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.926909   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.926989   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.927261   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.927280   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.927290   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.927299   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.927338   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.927382   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.927406   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.927415   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.927423   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.927989   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.928013   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.928022   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.928040   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.928118   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.928187   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.935023   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.935043   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.935367   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.935387   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.963483   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.963508   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.963834   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.963857   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.963866   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.963898   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.964130   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.964181   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.964198   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.964220   62905 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-995404"
	I0704 00:11:16.966338   62905 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0704 00:11:16.967695   62905 addons.go:510] duration metric: took 1.45282727s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0704 00:11:13.616762   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:14.116987   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:14.616559   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.117027   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.617171   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:16.117120   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:16.616978   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:17.116571   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:17.616537   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:18.117113   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.822595   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:18.323016   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:18.782030   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.782543   62043 main.go:141] libmachine: (no-preload-317739) Found IP for machine: 192.168.61.109
	I0704 00:11:18.782568   62043 main.go:141] libmachine: (no-preload-317739) Reserving static IP address...
	I0704 00:11:18.782585   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has current primary IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.782953   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "no-preload-317739", mac: "52:54:00:2a:87:12", ip: "192.168.61.109"} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.782982   62043 main.go:141] libmachine: (no-preload-317739) DBG | skip adding static IP to network mk-no-preload-317739 - found existing host DHCP lease matching {name: "no-preload-317739", mac: "52:54:00:2a:87:12", ip: "192.168.61.109"}
	I0704 00:11:18.782996   62043 main.go:141] libmachine: (no-preload-317739) Reserved static IP address: 192.168.61.109
	I0704 00:11:18.783014   62043 main.go:141] libmachine: (no-preload-317739) Waiting for SSH to be available...
	I0704 00:11:18.783031   62043 main.go:141] libmachine: (no-preload-317739) DBG | Getting to WaitForSSH function...
	I0704 00:11:18.785230   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.785559   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.785593   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.785687   62043 main.go:141] libmachine: (no-preload-317739) DBG | Using SSH client type: external
	I0704 00:11:18.785742   62043 main.go:141] libmachine: (no-preload-317739) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa (-rw-------)
	I0704 00:11:18.785770   62043 main.go:141] libmachine: (no-preload-317739) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:11:18.785801   62043 main.go:141] libmachine: (no-preload-317739) DBG | About to run SSH command:
	I0704 00:11:18.785811   62043 main.go:141] libmachine: (no-preload-317739) DBG | exit 0
	I0704 00:11:18.908065   62043 main.go:141] libmachine: (no-preload-317739) DBG | SSH cmd err, output: <nil>: 
	I0704 00:11:18.908449   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetConfigRaw
	I0704 00:11:18.909142   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:18.911622   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.912075   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.912125   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.912371   62043 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/config.json ...
	I0704 00:11:18.912581   62043 machine.go:94] provisionDockerMachine start ...
	I0704 00:11:18.912599   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:18.912796   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:18.915233   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.915675   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.915709   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.915971   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:18.916175   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:18.916345   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:18.916488   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:18.916689   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:18.916853   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:18.916864   62043 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:11:19.024629   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:11:19.024661   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:11:19.024913   62043 buildroot.go:166] provisioning hostname "no-preload-317739"
	I0704 00:11:19.024929   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:11:19.025143   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.028262   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.028629   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.028653   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.028838   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.029042   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.029233   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.029381   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.029528   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:19.029696   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:19.029708   62043 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-317739 && echo "no-preload-317739" | sudo tee /etc/hostname
	I0704 00:11:19.148642   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-317739
	
	I0704 00:11:19.148679   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.151295   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.151766   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.151788   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.152030   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.152247   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.152438   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.152556   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.152733   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:19.152937   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:19.152953   62043 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-317739' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-317739/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-317739' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:11:19.267475   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:11:19.267510   62043 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:11:19.267541   62043 buildroot.go:174] setting up certificates
	I0704 00:11:19.267553   62043 provision.go:84] configureAuth start
	I0704 00:11:19.267566   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:11:19.267936   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:19.270884   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.271381   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.271409   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.271619   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.274267   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.274641   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.274665   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.274887   62043 provision.go:143] copyHostCerts
	I0704 00:11:19.274950   62043 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:11:19.274962   62043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:11:19.275030   62043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:11:19.275236   62043 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:11:19.275250   62043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:11:19.275284   62043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:11:19.275360   62043 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:11:19.275367   62043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:11:19.275387   62043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:11:19.275440   62043 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.no-preload-317739 san=[127.0.0.1 192.168.61.109 localhost minikube no-preload-317739]
	I0704 00:11:19.642077   62043 provision.go:177] copyRemoteCerts
	I0704 00:11:19.642133   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:11:19.642154   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.645168   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.645553   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.645582   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.645803   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.646005   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.646189   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.646338   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:19.731637   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:11:19.758538   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0704 00:11:19.783554   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0704 00:11:19.809538   62043 provision.go:87] duration metric: took 541.971127ms to configureAuth
	I0704 00:11:19.809571   62043 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:11:19.809800   62043 config.go:182] Loaded profile config "no-preload-317739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:11:19.809877   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.813528   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.814000   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.814042   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.814213   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.814451   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.814641   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.814831   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.815078   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:19.815287   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:19.815328   62043 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:11:20.098956   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:11:20.098984   62043 machine.go:97] duration metric: took 1.186389847s to provisionDockerMachine
	I0704 00:11:20.098999   62043 start.go:293] postStartSetup for "no-preload-317739" (driver="kvm2")
	I0704 00:11:20.099011   62043 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:11:20.099037   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.099367   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:11:20.099397   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.102274   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.102624   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.102650   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.102870   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.103084   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.103254   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.103394   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:20.187063   62043 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:11:20.192127   62043 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:11:20.192159   62043 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:11:20.192253   62043 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:11:20.192344   62043 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:11:20.192451   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:11:20.202990   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:11:20.231649   62043 start.go:296] duration metric: took 132.636585ms for postStartSetup
	I0704 00:11:20.231689   62043 fix.go:56] duration metric: took 20.60266165s for fixHost
	I0704 00:11:20.231708   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.234708   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.235099   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.235129   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.235376   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.235606   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.235813   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.236042   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.236254   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:20.236447   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:20.236460   62043 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:11:20.340846   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051880.311820466
	
	I0704 00:11:20.340874   62043 fix.go:216] guest clock: 1720051880.311820466
	I0704 00:11:20.340883   62043 fix.go:229] Guest: 2024-07-04 00:11:20.311820466 +0000 UTC Remote: 2024-07-04 00:11:20.23169294 +0000 UTC m=+359.429189168 (delta=80.127526ms)
	I0704 00:11:20.340914   62043 fix.go:200] guest clock delta is within tolerance: 80.127526ms
	I0704 00:11:20.340938   62043 start.go:83] releasing machines lock for "no-preload-317739", held for 20.711925187s
	I0704 00:11:20.340963   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.341225   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:20.343787   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.344146   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.344188   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.344360   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.344810   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.344988   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.345061   62043 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:11:20.345094   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.345221   62043 ssh_runner.go:195] Run: cat /version.json
	I0704 00:11:20.345247   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.347703   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.347924   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.348121   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.348150   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.348307   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.348396   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.348423   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.348487   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.348562   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.348645   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.348706   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.348764   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:20.348864   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.348994   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:20.425023   62043 ssh_runner.go:195] Run: systemctl --version
	I0704 00:11:20.456031   62043 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:11:20.601693   62043 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:11:20.609524   62043 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:11:20.609617   62043 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:11:20.628076   62043 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:11:20.628105   62043 start.go:494] detecting cgroup driver to use...
	I0704 00:11:20.628180   62043 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:11:20.646749   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:11:20.663882   62043 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:11:20.663954   62043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:11:20.679371   62043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:11:20.697131   62043 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:11:20.820892   62043 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:11:20.978815   62043 docker.go:233] disabling docker service ...
	I0704 00:11:20.978893   62043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:11:21.003649   62043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:11:21.018708   62043 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:11:21.183699   62043 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:11:21.356015   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:11:21.371775   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:11:21.397901   62043 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:11:21.397977   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.410088   62043 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:11:21.410175   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.422267   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.433879   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.446464   62043 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:11:21.459090   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.474867   62043 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.497013   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.508678   62043 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:11:21.520003   62043 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:11:21.520074   62043 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:11:21.535778   62043 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:11:21.546698   62043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:21.707980   62043 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:11:21.855519   62043 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:11:21.855578   62043 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:11:21.861422   62043 start.go:562] Will wait 60s for crictl version
	I0704 00:11:21.861487   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:21.865898   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:11:21.909151   62043 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:11:21.909231   62043 ssh_runner.go:195] Run: crio --version
	I0704 00:11:21.940532   62043 ssh_runner.go:195] Run: crio --version
	I0704 00:11:21.971921   62043 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:11:17.738168   62905 node_ready.go:53] node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:19.738513   62905 node_ready.go:53] node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:22.238523   62905 node_ready.go:53] node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:18.617104   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:19.116325   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:19.616537   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.116518   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.616709   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:21.117177   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:21.617150   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:22.116980   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:22.616530   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:23.116838   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.824014   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:23.322845   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:21.973345   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:21.976425   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:21.976913   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:21.976941   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:21.977325   62043 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0704 00:11:21.982313   62043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:21.996098   62043 kubeadm.go:877] updating cluster {Name:no-preload-317739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.109 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:11:21.996252   62043 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:11:21.996296   62043 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:11:22.032178   62043 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:11:22.032210   62043 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.2 registry.k8s.io/kube-controller-manager:v1.30.2 registry.k8s.io/kube-scheduler:v1.30.2 registry.k8s.io/kube-proxy:v1.30.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0704 00:11:22.032271   62043 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:22.032305   62043 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.032319   62043 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.032373   62043 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0704 00:11:22.032399   62043 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.032400   62043 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.032375   62043 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.032429   62043 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.033814   62043 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0704 00:11:22.033826   62043 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.033847   62043 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.033812   62043 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.033815   62043 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.033912   62043 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:22.034052   62043 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.034138   62043 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.199984   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.209671   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.236796   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.240953   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.244893   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.260957   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.277666   62043 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.2" does not exist at hash "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe" in container runtime
	I0704 00:11:22.277712   62043 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.277764   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.311908   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0704 00:11:22.314095   62043 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0704 00:11:22.314137   62043 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.314190   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.400926   62043 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0704 00:11:22.400964   62043 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.401011   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.401043   62043 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.2" does not exist at hash "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772" in container runtime
	I0704 00:11:22.401080   62043 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.401121   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.401193   62043 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.2" does not exist at hash "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974" in container runtime
	I0704 00:11:22.401219   62043 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.401255   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.423931   62043 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.2" does not exist at hash "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940" in container runtime
	I0704 00:11:22.423977   62043 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.424024   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.424028   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.525952   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.525991   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.525961   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.526054   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.526136   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.526195   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2
	I0704 00:11:22.526285   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0704 00:11:22.649104   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0704 00:11:22.649109   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2
	I0704 00:11:22.649215   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2
	I0704 00:11:22.649248   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.2
	I0704 00:11:22.649268   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0704 00:11:22.649283   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0704 00:11:22.649217   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0704 00:11:22.649319   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0704 00:11:22.649349   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.2 (exists)
	I0704 00:11:22.649362   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0704 00:11:22.649386   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0704 00:11:22.649414   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2
	I0704 00:11:22.649486   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0704 00:11:22.654629   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.2 (exists)
	I0704 00:11:22.661840   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.2 (exists)
	I0704 00:11:22.919526   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:25.779714   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2: (3.130310457s)
	I0704 00:11:25.779744   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2 from cache
	I0704 00:11:25.779765   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.2
	I0704 00:11:25.779776   62043 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (3.130431638s)
	I0704 00:11:25.779796   62043 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.2: (3.13049417s)
	I0704 00:11:25.779816   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0704 00:11:25.779817   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2
	I0704 00:11:25.779827   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.2 (exists)
	I0704 00:11:25.779856   62043 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (3.130541061s)
	I0704 00:11:25.779869   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0704 00:11:25.779908   62043 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.860354689s)
	I0704 00:11:25.779936   62043 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0704 00:11:25.779958   62043 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:25.779991   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:23.248630   62905 node_ready.go:49] node "default-k8s-diff-port-995404" has status "Ready":"True"
	I0704 00:11:23.248671   62905 node_ready.go:38] duration metric: took 7.514485634s for node "default-k8s-diff-port-995404" to be "Ready" ...
	I0704 00:11:23.248683   62905 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:11:23.257650   62905 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.272673   62905 pod_ready.go:92] pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:23.272706   62905 pod_ready.go:81] duration metric: took 15.025018ms for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.272730   62905 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.277707   62905 pod_ready.go:92] pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:23.277738   62905 pod_ready.go:81] duration metric: took 4.999575ms for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.277758   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.282447   62905 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:23.282471   62905 pod_ready.go:81] duration metric: took 4.705643ms for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.282481   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.790312   62905 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:24.790337   62905 pod_ready.go:81] duration metric: took 1.507850095s for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.790346   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.837961   62905 pod_ready.go:92] pod "kube-proxy-pplqq" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:24.837985   62905 pod_ready.go:81] duration metric: took 47.632749ms for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.837994   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:25.238771   62905 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:25.238800   62905 pod_ready.go:81] duration metric: took 400.798382ms for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:25.238814   62905 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:27.246820   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
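Editor's note: the pod_ready.go waits above poll each system-critical pod until its Ready condition is True. A minimal client-go sketch of that polling pattern follows; the helper name, interval, and package are illustrative and not minikube's actual code.

// Illustrative sketch: poll a pod until its Ready condition is True.
package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // pod not visible yet; keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}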
	I0704 00:11:23.616811   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:24.117212   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:24.616915   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.117183   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.616495   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:26.117078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:26.617000   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:27.117057   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:27.616823   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:28.116508   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.326734   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:27.823765   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:27.940196   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2: (2.160353743s)
	I0704 00:11:27.940226   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2 from cache
	I0704 00:11:27.940234   62043 ssh_runner.go:235] Completed: which crictl: (2.160222414s)
	I0704 00:11:27.940320   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:27.940253   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0704 00:11:27.940393   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0704 00:11:27.979809   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0704 00:11:27.979954   62043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0704 00:11:29.403572   62043 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.423593257s)
	I0704 00:11:29.403607   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0704 00:11:29.403699   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2: (1.46328757s)
	I0704 00:11:29.403725   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2 from cache
	I0704 00:11:29.403761   62043 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0704 00:11:29.403822   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0704 00:11:29.247499   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:31.750339   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:28.616737   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:29.117100   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:29.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.117145   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:31.116945   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:31.616330   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:32.117101   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:32.616616   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:33.116964   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.322707   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:32.323955   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:33.202513   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.798664869s)
	I0704 00:11:33.202547   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0704 00:11:33.202573   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0704 00:11:33.202627   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0704 00:11:35.468074   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2: (2.26542461s)
	I0704 00:11:35.468099   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2 from cache
	I0704 00:11:35.468118   62043 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0704 00:11:35.468165   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0704 00:11:34.246217   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:36.246836   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:33.617132   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.117094   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.616914   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:35.117109   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:35.617095   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:36.117232   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:36.617221   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:37.117109   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:37.617116   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:38.116462   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.324255   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:36.823008   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:38.823183   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:37.443636   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.975448204s)
	I0704 00:11:37.443672   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0704 00:11:37.443706   62043 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0704 00:11:37.443759   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0704 00:11:38.405813   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0704 00:11:38.405859   62043 cache_images.go:123] Successfully loaded all cached images
	I0704 00:11:38.405868   62043 cache_images.go:92] duration metric: took 16.373643393s to LoadCachedImages
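Editor's note: each cached image tarball staged under /var/lib/minikube/images is pushed into CRI-O's storage with `sudo podman load -i <tar>`, as the cache_images/crio lines above show. A hedged local sketch of that one step (the helper name is made up; the path matches the log):

// Sketch: load a cached image tarball into CRI-O's image store via podman.
package main

import (
	"fmt"
	"os/exec"
)

func loadCachedImage(tarPath string) error {
	cmd := exec.Command("sudo", "podman", "load", "-i", tarPath)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarPath, err, out)
	}
	return nil
}

func main() {
	// e.g. the kube-proxy tarball transferred earlier in the log
	if err := loadCachedImage("/var/lib/minikube/images/kube-proxy_v1.30.2"); err != nil {
		fmt.Println(err)
	}
}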
	I0704 00:11:38.405886   62043 kubeadm.go:928] updating node { 192.168.61.109 8443 v1.30.2 crio true true} ...
	I0704 00:11:38.406011   62043 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-317739 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:11:38.406077   62043 ssh_runner.go:195] Run: crio config
	I0704 00:11:38.452523   62043 cni.go:84] Creating CNI manager for ""
	I0704 00:11:38.452552   62043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:38.452564   62043 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:11:38.452585   62043 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.109 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-317739 NodeName:no-preload-317739 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:11:38.452729   62043 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-317739"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:11:38.452788   62043 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:11:38.463737   62043 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:11:38.463815   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:11:38.473969   62043 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0704 00:11:38.492719   62043 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:11:38.510951   62043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0704 00:11:38.530396   62043 ssh_runner.go:195] Run: grep 192.168.61.109	control-plane.minikube.internal$ /etc/hosts
	I0704 00:11:38.534736   62043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:38.548662   62043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:38.668693   62043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:11:38.686552   62043 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739 for IP: 192.168.61.109
	I0704 00:11:38.686580   62043 certs.go:194] generating shared ca certs ...
	I0704 00:11:38.686601   62043 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:38.686762   62043 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:11:38.686815   62043 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:11:38.686830   62043 certs.go:256] generating profile certs ...
	I0704 00:11:38.686955   62043 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/client.key
	I0704 00:11:38.687015   62043 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/apiserver.key.fbaaa8e5
	I0704 00:11:38.687048   62043 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/proxy-client.key
	I0704 00:11:38.687185   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:11:38.687241   62043 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:11:38.687253   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:11:38.687283   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:11:38.687310   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:11:38.687336   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:11:38.687384   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:11:38.688258   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:11:38.731211   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:11:38.769339   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:11:38.803861   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:11:38.856375   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0704 00:11:38.903970   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0704 00:11:38.933988   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:11:38.962742   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0704 00:11:38.990067   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:11:39.017654   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:11:39.044418   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:11:39.073061   62043 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:11:39.091979   62043 ssh_runner.go:195] Run: openssl version
	I0704 00:11:39.098299   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:11:39.110043   62043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:39.115156   62043 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:39.115229   62043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:39.122107   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:11:39.134113   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:11:39.145947   62043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:11:39.151296   62043 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:11:39.151367   62043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:11:39.158116   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:11:39.170555   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:11:39.182771   62043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:11:39.187922   62043 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:11:39.187980   62043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:11:39.194397   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:11:39.206665   62043 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:11:39.212352   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:11:39.219422   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:11:39.226488   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:11:39.233503   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:11:39.241906   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:11:39.249915   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
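Editor's note: the `-checkend 86400` calls above ask openssl whether each certificate stays valid for at least another 24 hours; exit status 0 means it does, non-zero means it expires within the window. A small sketch of the same check (helper name is illustrative):

// Sketch: report whether a certificate expires within the next 24h,
// mirroring the `openssl x509 -noout -checkend 86400` calls in the log.
package main

import (
	"fmt"
	"os/exec"
)

func expiresWithinADay(certPath string) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
	err := cmd.Run()
	if err == nil {
		return false, nil // exit 0: still valid for at least 86400s
	}
	if _, ok := err.(*exec.ExitError); ok {
		return true, nil // non-zero exit: certificate expires within the window
	}
	return false, err // openssl could not be executed at all
}

func main() {
	expiring, err := expiresWithinADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	fmt.Println(expiring, err)
}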
	I0704 00:11:39.256813   62043 kubeadm.go:391] StartCluster: {Name:no-preload-317739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.109 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:11:39.256922   62043 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:11:39.256977   62043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:39.303203   62043 cri.go:89] found id: ""
	I0704 00:11:39.303281   62043 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:11:39.315407   62043 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:11:39.315446   62043 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:11:39.315454   62043 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:11:39.315508   62043 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:11:39.327630   62043 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:11:39.328741   62043 kubeconfig.go:125] found "no-preload-317739" server: "https://192.168.61.109:8443"
	I0704 00:11:39.330937   62043 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:11:39.341998   62043 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.109
	I0704 00:11:39.342043   62043 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:11:39.342054   62043 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:11:39.342111   62043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:39.388325   62043 cri.go:89] found id: ""
	I0704 00:11:39.388388   62043 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:11:39.408800   62043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:11:39.419600   62043 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:11:39.419627   62043 kubeadm.go:156] found existing configuration files:
	
	I0704 00:11:39.419679   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:11:39.429630   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:11:39.429685   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:11:39.440630   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:11:39.451260   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:11:39.451331   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:11:39.462847   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:11:39.473571   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:11:39.473636   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:11:39.484558   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:11:39.494914   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:11:39.494983   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:11:39.505423   62043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:11:39.517115   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:39.634364   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:40.407653   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:40.607831   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:40.692358   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
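Editor's note: during restartPrimaryControlPlane the kubeadm init phases are re-run one at a time (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied /var/tmp/minikube/kubeadm.yaml. A hedged sketch of that sequencing; the PATH prefix, config path, and phase list are copied from the log, the rest is illustrative:

// Sketch: re-run the kubeadm init phases shown above, in order.
package main

import (
	"fmt"
	"os/exec"
)

func runPhases() error {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		script := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(runPhases())
}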
	I0704 00:11:38.746247   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:41.244978   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:38.616739   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:39.117077   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:39.616185   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:40.117134   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:40.616879   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.116543   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.616267   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:42.117061   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:42.617080   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:43.117099   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.323333   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:43.823117   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:40.848560   62043 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:11:40.848652   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.349180   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.849767   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.870137   62043 api_server.go:72] duration metric: took 1.021586191s to wait for apiserver process to appear ...
	I0704 00:11:41.870167   62043 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:11:41.870195   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:41.870657   62043 api_server.go:269] stopped: https://192.168.61.109:8443/healthz: Get "https://192.168.61.109:8443/healthz": dial tcp 192.168.61.109:8443: connect: connection refused
	I0704 00:11:42.371347   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:44.502396   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:11:44.502439   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:11:44.502477   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:44.536593   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:11:44.536636   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:11:44.870429   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:44.877522   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:44.877559   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:45.371097   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:45.375932   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:45.375970   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:45.870776   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:45.880030   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 200:
	ok
	I0704 00:11:45.895702   62043 api_server.go:141] control plane version: v1.30.2
	I0704 00:11:45.895729   62043 api_server.go:131] duration metric: took 4.025556366s to wait for apiserver health ...
	I0704 00:11:45.895737   62043 cni.go:84] Creating CNI manager for ""
	I0704 00:11:45.895743   62043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:45.897406   62043 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
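Editor's note: the healthz wait above keeps requesting https://192.168.61.109:8443/healthz until the apiserver answers 200 "ok", tolerating the intermediate connection-refused, 403, and 500 responses. A minimal sketch of that poll; TLS verification is skipped here only to keep the example short, whereas the real check trusts the cluster CA:

// Sketch: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.61.109:8443/healthz", 4*time.Minute))
}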
	I0704 00:11:43.245949   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:45.747887   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:43.616868   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:44.117083   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:44.617057   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:45.116941   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:45.617066   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:46.117210   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:46.617116   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:47.116404   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:47.616609   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:48.116518   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:48.116611   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:48.159432   62670 cri.go:89] found id: ""
	I0704 00:11:48.159464   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.159477   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:48.159486   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:48.159553   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:48.199101   62670 cri.go:89] found id: ""
	I0704 00:11:48.199136   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.199144   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:48.199152   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:48.199208   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:48.238058   62670 cri.go:89] found id: ""
	I0704 00:11:48.238079   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.238087   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:48.238092   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:48.238145   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:46.322861   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:48.824946   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:45.898725   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:11:45.923585   62043 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0704 00:11:45.943430   62043 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:11:45.958774   62043 system_pods.go:59] 8 kube-system pods found
	I0704 00:11:45.958804   62043 system_pods.go:61] "coredns-7db6d8ff4d-pvtv9" [f03f871e-3b09-4fbb-96e5-3e71712dd2fb] Running
	I0704 00:11:45.958811   62043 system_pods.go:61] "etcd-no-preload-317739" [ad364ac9-924e-4e56-90c4-12cbf42c3e83] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0704 00:11:45.958824   62043 system_pods.go:61] "kube-apiserver-no-preload-317739" [2d503950-29dc-47b3-905a-afa85655ca7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0704 00:11:45.958832   62043 system_pods.go:61] "kube-controller-manager-no-preload-317739" [a9cbe158-bf00-478c-8d70-7347e37d68a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0704 00:11:45.958837   62043 system_pods.go:61] "kube-proxy-ffmrg" [c710ce9d-c513-46b1-bcf8-1582d1974861] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0704 00:11:45.958841   62043 system_pods.go:61] "kube-scheduler-no-preload-317739" [07a488b3-7beb-4919-ad57-3f0b55a73bb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0704 00:11:45.958846   62043 system_pods.go:61] "metrics-server-569cc877fc-qn22n" [378b139e-97d6-4dfa-9b56-99dda111ab31] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:11:45.958857   62043 system_pods.go:61] "storage-provisioner" [66ecf6fc-5070-4374-a733-479b9b3cdc0d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0704 00:11:45.958866   62043 system_pods.go:74] duration metric: took 15.413948ms to wait for pod list to return data ...
	I0704 00:11:45.958881   62043 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:11:45.965318   62043 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:11:45.965346   62043 node_conditions.go:123] node cpu capacity is 2
	I0704 00:11:45.965355   62043 node_conditions.go:105] duration metric: took 6.466225ms to run NodePressure ...
	I0704 00:11:45.965371   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:46.324716   62043 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0704 00:11:46.329924   62043 kubeadm.go:733] kubelet initialised
	I0704 00:11:46.329951   62043 kubeadm.go:734] duration metric: took 5.207276ms waiting for restarted kubelet to initialise ...
	I0704 00:11:46.329963   62043 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:11:46.336531   62043 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.341733   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.341758   62043 pod_ready.go:81] duration metric: took 5.197122ms for pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.341769   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.341778   62043 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.348317   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "etcd-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.348341   62043 pod_ready.go:81] duration metric: took 6.552656ms for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.348349   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "etcd-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.348355   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.353840   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "kube-apiserver-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.353864   62043 pod_ready.go:81] duration metric: took 5.503642ms for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.353873   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "kube-apiserver-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.353878   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.362159   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.362205   62043 pod_ready.go:81] duration metric: took 8.315884ms for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.362218   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.362226   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ffmrg" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:47.148496   62043 pod_ready.go:92] pod "kube-proxy-ffmrg" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:47.148533   62043 pod_ready.go:81] duration metric: took 786.291174ms for pod "kube-proxy-ffmrg" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:47.148544   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:49.154946   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:48.246804   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:50.249339   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:48.279472   62670 cri.go:89] found id: ""
	I0704 00:11:48.279510   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.279521   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:48.279529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:48.279598   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:48.316814   62670 cri.go:89] found id: ""
	I0704 00:11:48.316833   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.316843   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:48.316851   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:48.316907   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:48.358196   62670 cri.go:89] found id: ""
	I0704 00:11:48.358230   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.358247   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:48.358252   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:48.358310   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:48.404992   62670 cri.go:89] found id: ""
	I0704 00:11:48.405012   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.405019   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:48.405024   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:48.405092   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:48.444358   62670 cri.go:89] found id: ""
	I0704 00:11:48.444385   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.444393   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:48.444401   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:48.444414   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:48.502426   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:48.502462   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:48.517885   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:48.517915   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:48.654987   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:48.655007   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:48.655022   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:48.719857   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:48.719908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:51.265451   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:51.279847   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:51.279951   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:51.317907   62670 cri.go:89] found id: ""
	I0704 00:11:51.317942   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.317954   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:51.317963   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:51.318036   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:51.358329   62670 cri.go:89] found id: ""
	I0704 00:11:51.358361   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.358370   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:51.358375   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:51.358440   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:51.396389   62670 cri.go:89] found id: ""
	I0704 00:11:51.396418   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.396426   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:51.396433   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:51.396479   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:51.433921   62670 cri.go:89] found id: ""
	I0704 00:11:51.433954   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.433964   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:51.433972   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:51.434030   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:51.472956   62670 cri.go:89] found id: ""
	I0704 00:11:51.472986   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.472997   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:51.473003   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:51.473064   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:51.511241   62670 cri.go:89] found id: ""
	I0704 00:11:51.511269   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.511277   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:51.511283   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:51.511330   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:51.550622   62670 cri.go:89] found id: ""
	I0704 00:11:51.550647   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.550658   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:51.550665   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:51.550717   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:51.595101   62670 cri.go:89] found id: ""
	I0704 00:11:51.595129   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.595141   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:51.595152   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:51.595167   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:51.662852   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:51.662893   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:51.712755   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:51.712800   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:51.774138   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:51.774181   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:51.789895   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:51.789925   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:51.866376   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:51.325312   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:53.821791   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:51.156502   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:53.158089   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:55.656131   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:52.747469   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:55.248313   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:54.367005   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:54.382875   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:54.382938   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:54.419672   62670 cri.go:89] found id: ""
	I0704 00:11:54.419702   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.419713   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:54.419720   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:54.419790   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:54.464134   62670 cri.go:89] found id: ""
	I0704 00:11:54.464161   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.464170   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:54.464175   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:54.464233   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:54.502825   62670 cri.go:89] found id: ""
	I0704 00:11:54.502848   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.502855   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:54.502861   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:54.502913   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:54.542172   62670 cri.go:89] found id: ""
	I0704 00:11:54.542199   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.542207   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:54.542212   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:54.542275   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:54.580488   62670 cri.go:89] found id: ""
	I0704 00:11:54.580517   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.580527   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:54.580534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:54.580600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:54.616925   62670 cri.go:89] found id: ""
	I0704 00:11:54.616950   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.616959   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:54.616965   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:54.617011   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:54.654388   62670 cri.go:89] found id: ""
	I0704 00:11:54.654416   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.654426   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:54.654434   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:54.654492   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:54.697867   62670 cri.go:89] found id: ""
	I0704 00:11:54.697895   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.697905   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:54.697916   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:54.697948   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:54.753899   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:54.753933   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:54.768684   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:54.768708   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:54.843026   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:54.843052   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:54.843069   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:54.920335   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:54.920388   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:57.463384   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:57.479721   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:57.479809   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:57.521845   62670 cri.go:89] found id: ""
	I0704 00:11:57.521931   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.521944   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:57.521952   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:57.522017   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:57.559595   62670 cri.go:89] found id: ""
	I0704 00:11:57.559626   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.559635   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:57.559642   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:57.559704   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:57.600881   62670 cri.go:89] found id: ""
	I0704 00:11:57.600906   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.600917   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:57.600923   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:57.600984   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:57.646031   62670 cri.go:89] found id: ""
	I0704 00:11:57.646059   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.646068   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:57.646073   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:57.646141   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:57.692031   62670 cri.go:89] found id: ""
	I0704 00:11:57.692057   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.692065   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:57.692071   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:57.692118   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:57.730220   62670 cri.go:89] found id: ""
	I0704 00:11:57.730252   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.730263   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:57.730271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:57.730335   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:57.771323   62670 cri.go:89] found id: ""
	I0704 00:11:57.771350   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.771361   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:57.771369   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:57.771441   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:57.808590   62670 cri.go:89] found id: ""
	I0704 00:11:57.808617   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.808625   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:57.808633   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:57.808644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:57.825034   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:57.825063   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:57.906713   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:57.906734   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:57.906746   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:57.988497   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:57.988533   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:58.056774   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:58.056805   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:55.825329   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:58.322936   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:57.657693   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:58.655007   62043 pod_ready.go:92] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:58.655031   62043 pod_ready.go:81] duration metric: took 11.506481518s for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:58.655040   62043 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace to be "Ready" ...
	I0704 00:12:00.662830   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:57.749330   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:00.244482   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:02.245230   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:00.609663   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:00.623785   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:00.623851   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:00.669164   62670 cri.go:89] found id: ""
	I0704 00:12:00.669187   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.669194   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:00.669200   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:00.669253   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:00.710018   62670 cri.go:89] found id: ""
	I0704 00:12:00.710044   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.710052   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:00.710057   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:00.710107   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:00.747778   62670 cri.go:89] found id: ""
	I0704 00:12:00.747803   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.747810   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:00.747815   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:00.747900   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:00.787312   62670 cri.go:89] found id: ""
	I0704 00:12:00.787339   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.787347   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:00.787352   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:00.787399   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:00.828018   62670 cri.go:89] found id: ""
	I0704 00:12:00.828049   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.828061   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:00.828070   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:00.828135   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:00.864695   62670 cri.go:89] found id: ""
	I0704 00:12:00.864723   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.864734   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:00.864742   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:00.864800   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:00.907804   62670 cri.go:89] found id: ""
	I0704 00:12:00.907833   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.907843   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:00.907850   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:00.907928   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:00.951505   62670 cri.go:89] found id: ""
	I0704 00:12:00.951536   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.951547   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:00.951557   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:00.951573   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:00.997067   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:00.997115   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:01.049321   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:01.049356   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:01.066878   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:01.066908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:01.152888   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:01.152919   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:01.152935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:00.823441   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:03.322789   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:03.161704   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:05.662715   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:04.247328   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:06.746227   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:03.737731   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:03.753151   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:03.753244   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:03.816045   62670 cri.go:89] found id: ""
	I0704 00:12:03.816076   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.816087   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:03.816095   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:03.816154   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:03.857041   62670 cri.go:89] found id: ""
	I0704 00:12:03.857070   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.857081   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:03.857088   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:03.857152   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:03.896734   62670 cri.go:89] found id: ""
	I0704 00:12:03.896763   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.896774   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:03.896781   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:03.896836   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:03.936142   62670 cri.go:89] found id: ""
	I0704 00:12:03.936168   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.936178   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:03.936183   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:03.936258   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:03.974599   62670 cri.go:89] found id: ""
	I0704 00:12:03.974623   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.974631   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:03.974636   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:03.974686   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:04.012822   62670 cri.go:89] found id: ""
	I0704 00:12:04.012851   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.012859   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:04.012865   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:04.012999   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:04.051360   62670 cri.go:89] found id: ""
	I0704 00:12:04.051393   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.051411   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:04.051420   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:04.051485   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:04.090587   62670 cri.go:89] found id: ""
	I0704 00:12:04.090616   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.090627   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:04.090638   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:04.090654   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:04.167427   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:04.167450   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:04.167465   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:04.250550   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:04.250594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:04.299970   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:04.300003   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:04.352960   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:04.352994   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:06.871729   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:06.884948   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:06.885027   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:06.920910   62670 cri.go:89] found id: ""
	I0704 00:12:06.920939   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.920950   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:06.920957   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:06.921024   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:06.958701   62670 cri.go:89] found id: ""
	I0704 00:12:06.958731   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.958742   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:06.958750   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:06.958808   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:06.997468   62670 cri.go:89] found id: ""
	I0704 00:12:06.997499   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.997509   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:06.997515   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:06.997564   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:07.033767   62670 cri.go:89] found id: ""
	I0704 00:12:07.033795   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.033806   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:07.033814   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:07.033896   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:07.074189   62670 cri.go:89] found id: ""
	I0704 00:12:07.074218   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.074229   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:07.074241   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:07.074307   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:07.110517   62670 cri.go:89] found id: ""
	I0704 00:12:07.110544   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.110554   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:07.110562   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:07.110615   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:07.146600   62670 cri.go:89] found id: ""
	I0704 00:12:07.146627   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.146635   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:07.146641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:07.146690   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:07.180799   62670 cri.go:89] found id: ""
	I0704 00:12:07.180826   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.180834   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:07.180843   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:07.180859   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:07.222473   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:07.222503   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:07.281453   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:07.281498   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:07.296335   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:07.296364   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:07.375751   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:07.375782   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:07.375805   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:05.323723   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:07.822320   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:07.663501   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:10.163774   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:09.247753   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:11.746082   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:09.954585   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:09.970379   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:09.970470   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:10.011987   62670 cri.go:89] found id: ""
	I0704 00:12:10.012017   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.012028   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:10.012035   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:10.012102   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:10.054940   62670 cri.go:89] found id: ""
	I0704 00:12:10.054971   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.054982   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:10.054989   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:10.055051   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:10.096048   62670 cri.go:89] found id: ""
	I0704 00:12:10.096079   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.096087   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:10.096093   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:10.096143   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:10.141795   62670 cri.go:89] found id: ""
	I0704 00:12:10.141818   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.141826   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:10.141831   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:10.141892   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:10.188257   62670 cri.go:89] found id: ""
	I0704 00:12:10.188283   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.188295   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:10.188302   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:10.188369   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:10.249134   62670 cri.go:89] found id: ""
	I0704 00:12:10.249157   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.249167   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:10.249174   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:10.249233   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:10.309586   62670 cri.go:89] found id: ""
	I0704 00:12:10.309611   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.309622   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:10.309632   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:10.309689   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:10.351027   62670 cri.go:89] found id: ""
	I0704 00:12:10.351054   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.351065   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:10.351074   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:10.351086   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:10.404371   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:10.404411   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:10.419379   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:10.419410   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:10.502977   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:10.503001   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:10.503017   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:10.582149   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:10.582185   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:13.122828   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:13.138522   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:13.138591   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:13.181603   62670 cri.go:89] found id: ""
	I0704 00:12:13.181634   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.181645   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:13.181653   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:13.181711   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:13.219066   62670 cri.go:89] found id: ""
	I0704 00:12:13.219090   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.219098   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:13.219103   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:13.219159   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:09.822778   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:12.322555   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:12.165249   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:14.663051   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:14.248889   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:16.746104   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:13.259570   62670 cri.go:89] found id: ""
	I0704 00:12:13.259591   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.259599   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:13.259604   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:13.259658   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:13.301577   62670 cri.go:89] found id: ""
	I0704 00:12:13.301605   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.301617   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:13.301625   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:13.301689   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:13.339546   62670 cri.go:89] found id: ""
	I0704 00:12:13.339570   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.339584   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:13.339592   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:13.339649   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:13.378631   62670 cri.go:89] found id: ""
	I0704 00:12:13.378654   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.378665   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:13.378672   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:13.378733   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:13.416818   62670 cri.go:89] found id: ""
	I0704 00:12:13.416843   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.416851   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:13.416856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:13.416908   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:13.452538   62670 cri.go:89] found id: ""
	I0704 00:12:13.452562   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.452570   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:13.452579   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:13.452590   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:13.505556   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:13.505594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:13.522506   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:13.522542   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:13.604513   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:13.604536   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:13.604553   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:13.681501   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:13.681536   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:16.222955   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:16.241979   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:16.242086   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:16.299662   62670 cri.go:89] found id: ""
	I0704 00:12:16.299690   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.299702   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:16.299710   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:16.299772   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:16.342898   62670 cri.go:89] found id: ""
	I0704 00:12:16.342934   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.342944   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:16.342952   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:16.343014   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:16.382387   62670 cri.go:89] found id: ""
	I0704 00:12:16.382408   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.382416   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:16.382422   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:16.382482   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:16.421830   62670 cri.go:89] found id: ""
	I0704 00:12:16.421852   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.421861   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:16.421874   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:16.421934   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:16.459248   62670 cri.go:89] found id: ""
	I0704 00:12:16.459272   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.459282   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:16.459289   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:16.459347   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:16.494675   62670 cri.go:89] found id: ""
	I0704 00:12:16.494704   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.494714   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:16.494725   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:16.494789   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:16.534319   62670 cri.go:89] found id: ""
	I0704 00:12:16.534344   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.534352   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:16.534358   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:16.534407   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:16.571422   62670 cri.go:89] found id: ""
	I0704 00:12:16.571455   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.571467   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:16.571478   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:16.571493   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:16.651019   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:16.651040   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:16.651058   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:16.726538   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:16.726574   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:16.771114   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:16.771145   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:16.824495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:16.824532   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:14.323436   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:16.822647   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:18.823509   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:16.666213   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:19.162586   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:18.746307   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:20.747743   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:19.340941   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:19.355501   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:19.355580   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:19.396845   62670 cri.go:89] found id: ""
	I0704 00:12:19.396872   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.396882   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:19.396902   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:19.396962   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:19.440805   62670 cri.go:89] found id: ""
	I0704 00:12:19.440835   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.440845   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:19.440852   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:19.440913   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:19.477781   62670 cri.go:89] found id: ""
	I0704 00:12:19.477809   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.477820   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:19.477827   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:19.477890   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:19.513042   62670 cri.go:89] found id: ""
	I0704 00:12:19.513067   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.513077   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:19.513084   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:19.513142   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:19.547775   62670 cri.go:89] found id: ""
	I0704 00:12:19.547804   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.547812   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:19.547818   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:19.547867   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:19.586103   62670 cri.go:89] found id: ""
	I0704 00:12:19.586131   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.586142   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:19.586149   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:19.586219   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:19.625529   62670 cri.go:89] found id: ""
	I0704 00:12:19.625556   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.625567   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:19.625574   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:19.625644   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:19.663835   62670 cri.go:89] found id: ""
	I0704 00:12:19.663860   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.663870   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:19.663903   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:19.663919   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:19.719204   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:19.719245   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:19.733871   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:19.733909   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:19.817212   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:19.817240   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:19.817260   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:19.894555   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:19.894595   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:22.438204   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:22.451438   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:22.451507   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:22.489196   62670 cri.go:89] found id: ""
	I0704 00:12:22.489219   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.489226   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:22.489232   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:22.489278   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:22.523870   62670 cri.go:89] found id: ""
	I0704 00:12:22.523917   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.523929   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:22.523936   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:22.523992   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:22.564799   62670 cri.go:89] found id: ""
	I0704 00:12:22.564827   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.564839   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:22.564846   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:22.564905   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:22.603993   62670 cri.go:89] found id: ""
	I0704 00:12:22.604019   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.604027   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:22.604033   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:22.604086   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:22.639749   62670 cri.go:89] found id: ""
	I0704 00:12:22.639780   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.639791   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:22.639799   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:22.639855   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:22.678173   62670 cri.go:89] found id: ""
	I0704 00:12:22.678206   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.678214   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:22.678227   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:22.678279   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:22.718934   62670 cri.go:89] found id: ""
	I0704 00:12:22.718962   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.718971   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:22.718977   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:22.719029   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:22.756334   62670 cri.go:89] found id: ""
	I0704 00:12:22.756362   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.756373   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:22.756383   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:22.756397   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:22.835079   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:22.835113   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:22.877138   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:22.877170   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:22.930427   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:22.930466   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:22.945810   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:22.945838   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:23.021251   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:21.323951   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:23.822002   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:21.165297   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:23.661688   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:23.245394   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:25.748364   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:25.522380   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:25.536705   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:25.536776   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:25.575126   62670 cri.go:89] found id: ""
	I0704 00:12:25.575154   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.575162   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:25.575168   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:25.575223   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:25.612447   62670 cri.go:89] found id: ""
	I0704 00:12:25.612480   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.612488   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:25.612494   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:25.612542   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:25.651652   62670 cri.go:89] found id: ""
	I0704 00:12:25.651677   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.651688   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:25.651696   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:25.651751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:25.690007   62670 cri.go:89] found id: ""
	I0704 00:12:25.690034   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.690042   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:25.690049   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:25.690105   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:25.725041   62670 cri.go:89] found id: ""
	I0704 00:12:25.725093   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.725106   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:25.725114   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:25.725196   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:25.766324   62670 cri.go:89] found id: ""
	I0704 00:12:25.766350   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.766361   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:25.766369   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:25.766430   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:25.803515   62670 cri.go:89] found id: ""
	I0704 00:12:25.803540   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.803548   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:25.803553   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:25.803613   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:25.845016   62670 cri.go:89] found id: ""
	I0704 00:12:25.845046   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.845057   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:25.845067   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:25.845089   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:25.898536   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:25.898570   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:25.913300   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:25.913330   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:25.987372   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:25.987390   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:25.987402   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:26.073931   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:26.073982   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:25.824395   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.324952   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:26.162199   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.162671   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:30.662302   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.246148   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:30.247149   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.621179   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:28.634247   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:28.634321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:28.672433   62670 cri.go:89] found id: ""
	I0704 00:12:28.672458   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.672467   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:28.672473   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:28.672522   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:28.712000   62670 cri.go:89] found id: ""
	I0704 00:12:28.712036   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.712049   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:28.712059   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:28.712126   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:28.751170   62670 cri.go:89] found id: ""
	I0704 00:12:28.751202   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.751213   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:28.751222   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:28.751283   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:28.788015   62670 cri.go:89] found id: ""
	I0704 00:12:28.788050   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.788062   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:28.788071   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:28.788141   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:28.826467   62670 cri.go:89] found id: ""
	I0704 00:12:28.826501   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.826511   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:28.826518   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:28.826578   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:28.864375   62670 cri.go:89] found id: ""
	I0704 00:12:28.864397   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.864403   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:28.864408   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:28.864461   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:28.900137   62670 cri.go:89] found id: ""
	I0704 00:12:28.900160   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.900167   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:28.900173   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:28.900220   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:28.934865   62670 cri.go:89] found id: ""
	I0704 00:12:28.934886   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.934894   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:28.934902   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:28.934914   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:28.984100   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:28.984136   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:29.000311   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:29.000340   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:29.083272   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:29.083304   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:29.083318   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:29.164613   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:29.164644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:31.711402   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:31.725076   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:31.725134   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:31.763088   62670 cri.go:89] found id: ""
	I0704 00:12:31.763111   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.763120   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:31.763127   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:31.763197   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:31.800920   62670 cri.go:89] found id: ""
	I0704 00:12:31.800942   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.800952   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:31.800958   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:31.801001   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:31.840841   62670 cri.go:89] found id: ""
	I0704 00:12:31.840872   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.840889   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:31.840897   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:31.840956   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:31.883757   62670 cri.go:89] found id: ""
	I0704 00:12:31.883784   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.883792   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:31.883797   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:31.883855   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:31.922234   62670 cri.go:89] found id: ""
	I0704 00:12:31.922261   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.922270   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:31.922275   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:31.922323   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:31.959691   62670 cri.go:89] found id: ""
	I0704 00:12:31.959717   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.959725   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:31.959731   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:31.959789   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:31.997069   62670 cri.go:89] found id: ""
	I0704 00:12:31.997098   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.997106   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:31.997112   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:31.997182   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:32.032437   62670 cri.go:89] found id: ""
	I0704 00:12:32.032475   62670 logs.go:276] 0 containers: []
	W0704 00:12:32.032484   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:32.032495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:32.032510   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:32.046791   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:32.046823   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:32.118482   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:32.118506   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:32.118519   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:32.206600   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:32.206638   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:32.249940   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:32.249967   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:30.823529   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:33.322802   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:33.161603   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:35.162213   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:32.746670   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:34.746760   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:37.245283   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:34.808364   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:34.822973   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:34.823039   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:34.859617   62670 cri.go:89] found id: ""
	I0704 00:12:34.859640   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.859649   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:34.859654   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:34.859703   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:34.899724   62670 cri.go:89] found id: ""
	I0704 00:12:34.899752   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.899762   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:34.899768   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:34.899830   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:34.939063   62670 cri.go:89] found id: ""
	I0704 00:12:34.939090   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.939098   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:34.939104   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:34.939185   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:34.979062   62670 cri.go:89] found id: ""
	I0704 00:12:34.979090   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.979101   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:34.979108   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:34.979168   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:35.019580   62670 cri.go:89] found id: ""
	I0704 00:12:35.019613   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.019621   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:35.019626   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:35.019674   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:35.064364   62670 cri.go:89] found id: ""
	I0704 00:12:35.064391   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.064399   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:35.064404   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:35.064463   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:35.105004   62670 cri.go:89] found id: ""
	I0704 00:12:35.105032   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.105040   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:35.105046   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:35.105101   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:35.143656   62670 cri.go:89] found id: ""
	I0704 00:12:35.143681   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.143689   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:35.143698   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:35.143709   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:35.203016   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:35.203050   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:35.218808   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:35.218840   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:35.298247   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:35.298269   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:35.298284   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:35.376425   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:35.376463   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:37.918592   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:37.932291   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:37.932370   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:37.967657   62670 cri.go:89] found id: ""
	I0704 00:12:37.967680   62670 logs.go:276] 0 containers: []
	W0704 00:12:37.967688   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:37.967694   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:37.967740   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:38.005522   62670 cri.go:89] found id: ""
	I0704 00:12:38.005557   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.005569   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:38.005576   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:38.005634   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:38.043475   62670 cri.go:89] found id: ""
	I0704 00:12:38.043505   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.043516   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:38.043524   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:38.043589   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:38.080520   62670 cri.go:89] found id: ""
	I0704 00:12:38.080548   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.080557   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:38.080563   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:38.080612   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:38.116292   62670 cri.go:89] found id: ""
	I0704 00:12:38.116322   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.116332   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:38.116338   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:38.116404   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:38.158430   62670 cri.go:89] found id: ""
	I0704 00:12:38.158468   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.158480   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:38.158489   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:38.158567   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:38.198119   62670 cri.go:89] found id: ""
	I0704 00:12:38.198150   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.198162   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:38.198172   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:38.198253   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:38.235757   62670 cri.go:89] found id: ""
	I0704 00:12:38.235784   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.235792   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:38.235800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:38.235811   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:12:35.324339   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:37.325301   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:37.162347   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:39.162620   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:39.246064   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:41.745179   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	W0704 00:12:38.329002   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:38.329026   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:38.329041   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:38.414451   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:38.414492   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:38.461058   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:38.461089   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:38.518574   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:38.518609   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:41.051653   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:41.066287   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:41.066364   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:41.106709   62670 cri.go:89] found id: ""
	I0704 00:12:41.106733   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.106747   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:41.106753   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:41.106815   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:41.144371   62670 cri.go:89] found id: ""
	I0704 00:12:41.144399   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.144410   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:41.144417   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:41.144491   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:41.183690   62670 cri.go:89] found id: ""
	I0704 00:12:41.183717   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.183727   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:41.183734   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:41.183818   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:41.219744   62670 cri.go:89] found id: ""
	I0704 00:12:41.219767   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.219777   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:41.219790   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:41.219850   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:41.259070   62670 cri.go:89] found id: ""
	I0704 00:12:41.259091   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.259098   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:41.259103   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:41.259162   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:41.297956   62670 cri.go:89] found id: ""
	I0704 00:12:41.297987   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.297995   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:41.298001   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:41.298061   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:41.335521   62670 cri.go:89] found id: ""
	I0704 00:12:41.335599   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.335616   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:41.335624   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:41.335688   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:41.374777   62670 cri.go:89] found id: ""
	I0704 00:12:41.374817   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.374838   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:41.374848   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:41.374868   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:41.426282   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:41.426324   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:41.441309   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:41.441342   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:41.518350   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:41.518373   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:41.518395   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:41.596426   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:41.596467   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:39.824742   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:42.323920   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:41.162829   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:43.662181   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:45.662641   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:43.745586   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:45.747024   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:44.139291   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:44.152300   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:44.152370   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:44.194350   62670 cri.go:89] found id: ""
	I0704 00:12:44.194380   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.194394   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:44.194401   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:44.194470   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:44.229630   62670 cri.go:89] found id: ""
	I0704 00:12:44.229657   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.229666   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:44.229671   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:44.229724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:44.271235   62670 cri.go:89] found id: ""
	I0704 00:12:44.271260   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.271269   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:44.271276   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:44.271342   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:44.336464   62670 cri.go:89] found id: ""
	I0704 00:12:44.336499   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.336509   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:44.336523   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:44.336579   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:44.379482   62670 cri.go:89] found id: ""
	I0704 00:12:44.379513   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.379524   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:44.379530   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:44.379594   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:44.417234   62670 cri.go:89] found id: ""
	I0704 00:12:44.417267   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.417278   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:44.417285   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:44.417345   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:44.454222   62670 cri.go:89] found id: ""
	I0704 00:12:44.454249   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.454259   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:44.454266   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:44.454328   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:44.491999   62670 cri.go:89] found id: ""
	I0704 00:12:44.492028   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.492039   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:44.492050   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:44.492065   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:44.543261   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:44.543298   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:44.558348   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:44.558378   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:44.640786   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:44.640805   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:44.640820   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:44.727870   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:44.727945   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:47.274461   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:47.288930   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:47.288995   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:47.329153   62670 cri.go:89] found id: ""
	I0704 00:12:47.329178   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.329189   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:47.329195   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:47.329262   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:47.366786   62670 cri.go:89] found id: ""
	I0704 00:12:47.366814   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.366825   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:47.366832   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:47.366900   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:47.404048   62670 cri.go:89] found id: ""
	I0704 00:12:47.404089   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.404098   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:47.404106   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:47.404170   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:47.440298   62670 cri.go:89] found id: ""
	I0704 00:12:47.440329   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.440341   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:47.440348   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:47.440408   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:47.478297   62670 cri.go:89] found id: ""
	I0704 00:12:47.478329   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.478340   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:47.478347   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:47.478406   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:47.514114   62670 cri.go:89] found id: ""
	I0704 00:12:47.514143   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.514152   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:47.514158   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:47.514221   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:47.558404   62670 cri.go:89] found id: ""
	I0704 00:12:47.558437   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.558449   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:47.558456   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:47.558519   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:47.602782   62670 cri.go:89] found id: ""
	I0704 00:12:47.602824   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.602834   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:47.602845   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:47.602860   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:47.655514   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:47.655556   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:47.672807   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:47.672844   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:47.763562   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:47.763583   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:47.763596   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:47.852498   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:47.852542   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:44.822923   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:46.824707   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:48.162671   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:50.664606   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:48.247464   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:50.747846   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:50.400046   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:50.413559   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:50.413621   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:50.450898   62670 cri.go:89] found id: ""
	I0704 00:12:50.450927   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.450938   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:50.450948   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:50.451002   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:50.487786   62670 cri.go:89] found id: ""
	I0704 00:12:50.487822   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.487832   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:50.487838   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:50.487923   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:50.525298   62670 cri.go:89] found id: ""
	I0704 00:12:50.525324   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.525334   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:50.525343   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:50.525409   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:50.563742   62670 cri.go:89] found id: ""
	I0704 00:12:50.563767   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.563775   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:50.563782   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:50.563839   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:50.600977   62670 cri.go:89] found id: ""
	I0704 00:12:50.601011   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.601023   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:50.601031   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:50.601105   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:50.637489   62670 cri.go:89] found id: ""
	I0704 00:12:50.637517   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.637527   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:50.637534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:50.637594   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:50.684342   62670 cri.go:89] found id: ""
	I0704 00:12:50.684371   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.684381   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:50.684389   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:50.684572   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:50.743111   62670 cri.go:89] found id: ""
	I0704 00:12:50.743143   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.743153   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:50.743163   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:50.743177   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:50.806436   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:50.806482   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:50.823559   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:50.823594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:50.892600   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:50.892629   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:50.892642   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:50.969817   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:50.969851   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:49.323144   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:51.822264   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.824409   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.161649   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:55.163049   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.245597   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:55.746766   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.512548   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:53.525835   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:53.525903   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:53.563303   62670 cri.go:89] found id: ""
	I0704 00:12:53.563335   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.563349   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:53.563356   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:53.563410   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:53.602687   62670 cri.go:89] found id: ""
	I0704 00:12:53.602720   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.602731   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:53.602739   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:53.602797   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:53.638109   62670 cri.go:89] found id: ""
	I0704 00:12:53.638141   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.638150   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:53.638158   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:53.638220   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:53.678073   62670 cri.go:89] found id: ""
	I0704 00:12:53.678096   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.678106   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:53.678114   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:53.678172   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:53.713995   62670 cri.go:89] found id: ""
	I0704 00:12:53.714028   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.714041   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:53.714049   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:53.714108   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:53.751761   62670 cri.go:89] found id: ""
	I0704 00:12:53.751783   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.751790   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:53.751796   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:53.751856   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:53.792662   62670 cri.go:89] found id: ""
	I0704 00:12:53.792692   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.792703   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:53.792710   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:53.792769   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:53.833970   62670 cri.go:89] found id: ""
	I0704 00:12:53.833999   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.834010   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:53.834021   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:53.834040   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:53.918330   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:53.918363   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:53.918380   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:53.999491   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:53.999524   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:54.042415   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:54.042451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:54.096427   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:54.096466   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:56.611252   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:56.624364   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:56.624427   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:56.662953   62670 cri.go:89] found id: ""
	I0704 00:12:56.662971   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.662978   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:56.662983   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:56.663035   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:56.700093   62670 cri.go:89] found id: ""
	I0704 00:12:56.700125   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.700136   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:56.700144   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:56.700209   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:56.737358   62670 cri.go:89] found id: ""
	I0704 00:12:56.737395   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.737405   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:56.737412   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:56.737479   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:56.772625   62670 cri.go:89] found id: ""
	I0704 00:12:56.772652   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.772663   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:56.772671   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:56.772731   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:56.810693   62670 cri.go:89] found id: ""
	I0704 00:12:56.810722   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.810731   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:56.810736   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:56.810787   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:56.851646   62670 cri.go:89] found id: ""
	I0704 00:12:56.851671   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.851678   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:56.851684   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:56.851733   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:56.894196   62670 cri.go:89] found id: ""
	I0704 00:12:56.894230   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.894240   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:56.894246   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:56.894302   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:56.935029   62670 cri.go:89] found id: ""
	I0704 00:12:56.935054   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.935062   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:56.935072   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:56.935088   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:57.017630   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:57.017658   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:57.017675   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:57.103861   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:57.103916   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:57.147466   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:57.147497   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:57.199798   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:57.199836   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:56.325738   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:58.822885   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:57.166306   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:59.663207   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:58.245373   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:00.246495   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:59.716709   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:59.731778   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:59.731849   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:59.770210   62670 cri.go:89] found id: ""
	I0704 00:12:59.770241   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.770249   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:59.770259   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:59.770319   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:59.816446   62670 cri.go:89] found id: ""
	I0704 00:12:59.816473   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.816483   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:59.816490   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:59.816570   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:59.854879   62670 cri.go:89] found id: ""
	I0704 00:12:59.854910   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.854921   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:59.854928   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:59.854978   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:59.891370   62670 cri.go:89] found id: ""
	I0704 00:12:59.891394   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.891401   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:59.891407   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:59.891467   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:59.926067   62670 cri.go:89] found id: ""
	I0704 00:12:59.926089   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.926096   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:59.926102   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:59.926158   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:59.961646   62670 cri.go:89] found id: ""
	I0704 00:12:59.961674   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.961685   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:59.961692   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:59.961770   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:59.998290   62670 cri.go:89] found id: ""
	I0704 00:12:59.998322   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.998333   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:59.998342   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:59.998408   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:00.035410   62670 cri.go:89] found id: ""
	I0704 00:13:00.035438   62670 logs.go:276] 0 containers: []
	W0704 00:13:00.035446   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:00.035455   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:00.035471   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:00.090614   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:00.090655   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:00.105228   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:00.105265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:00.188082   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:00.188121   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:00.188139   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:00.275656   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:00.275702   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:02.823447   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:02.837684   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:02.837745   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:02.875275   62670 cri.go:89] found id: ""
	I0704 00:13:02.875314   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.875324   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:02.875339   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:02.875399   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:02.910681   62670 cri.go:89] found id: ""
	I0704 00:13:02.910715   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.910727   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:02.910735   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:02.910797   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:02.948937   62670 cri.go:89] found id: ""
	I0704 00:13:02.948963   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.948972   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:02.948979   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:02.949039   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:02.984232   62670 cri.go:89] found id: ""
	I0704 00:13:02.984259   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.984267   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:02.984271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:02.984321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:03.021493   62670 cri.go:89] found id: ""
	I0704 00:13:03.021517   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.021525   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:03.021534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:03.021583   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:03.058829   62670 cri.go:89] found id: ""
	I0704 00:13:03.058860   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.058870   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:03.058877   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:03.058944   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:03.104195   62670 cri.go:89] found id: ""
	I0704 00:13:03.104225   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.104234   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:03.104242   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:03.104303   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:03.140913   62670 cri.go:89] found id: ""
	I0704 00:13:03.140941   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.140951   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:03.140961   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:03.140976   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:03.194901   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:03.194945   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:03.209366   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:03.209395   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:13:01.322711   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:03.323610   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:02.161800   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:04.162195   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:02.746479   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:05.245132   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:07.245877   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	W0704 00:13:03.292892   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:03.292916   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:03.292934   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:03.369764   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:03.369800   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:05.917514   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:05.931529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:05.931592   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:05.976164   62670 cri.go:89] found id: ""
	I0704 00:13:05.976186   62670 logs.go:276] 0 containers: []
	W0704 00:13:05.976193   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:05.976199   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:05.976258   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:06.013568   62670 cri.go:89] found id: ""
	I0704 00:13:06.013593   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.013602   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:06.013609   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:06.013678   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:06.050848   62670 cri.go:89] found id: ""
	I0704 00:13:06.050886   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.050894   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:06.050900   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:06.050958   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:06.090919   62670 cri.go:89] found id: ""
	I0704 00:13:06.090945   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.090956   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:06.090967   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:06.091016   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:06.129210   62670 cri.go:89] found id: ""
	I0704 00:13:06.129237   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.129246   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:06.129252   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:06.129304   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:06.166777   62670 cri.go:89] found id: ""
	I0704 00:13:06.166801   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.166809   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:06.166817   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:06.166878   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:06.204900   62670 cri.go:89] found id: ""
	I0704 00:13:06.204929   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.204940   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:06.204947   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:06.205008   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:06.244196   62670 cri.go:89] found id: ""
	I0704 00:13:06.244274   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.244291   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:06.244301   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:06.244317   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:06.258834   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:06.258873   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:06.339126   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:06.339151   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:06.339165   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:06.416220   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:06.416265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:06.458188   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:06.458221   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:05.824313   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:08.323361   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:06.162328   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:08.666333   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:09.248287   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:11.746215   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:09.014816   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:09.028957   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:09.029021   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:09.072427   62670 cri.go:89] found id: ""
	I0704 00:13:09.072455   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.072465   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:09.072472   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:09.072529   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:09.109630   62670 cri.go:89] found id: ""
	I0704 00:13:09.109660   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.109669   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:09.109675   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:09.109724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:09.152873   62670 cri.go:89] found id: ""
	I0704 00:13:09.152901   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.152911   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:09.152918   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:09.152976   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:09.189390   62670 cri.go:89] found id: ""
	I0704 00:13:09.189421   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.189431   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:09.189446   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:09.189515   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:09.227335   62670 cri.go:89] found id: ""
	I0704 00:13:09.227364   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.227375   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:09.227382   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:09.227444   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:09.269157   62670 cri.go:89] found id: ""
	I0704 00:13:09.269189   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.269201   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:09.269208   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:09.269259   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:09.317222   62670 cri.go:89] found id: ""
	I0704 00:13:09.317249   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.317257   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:09.317263   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:09.317324   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:09.355578   62670 cri.go:89] found id: ""
	I0704 00:13:09.355610   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.355618   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:09.355626   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:09.355637   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:09.396279   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:09.396316   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:09.451358   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:09.451398   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:09.466565   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:09.466599   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:09.545001   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:09.545043   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:09.545066   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:12.124211   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:12.139131   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:12.139229   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:12.178690   62670 cri.go:89] found id: ""
	I0704 00:13:12.178719   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.178726   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:12.178732   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:12.178783   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:12.215470   62670 cri.go:89] found id: ""
	I0704 00:13:12.215511   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.215524   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:12.215533   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:12.215620   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:12.256615   62670 cri.go:89] found id: ""
	I0704 00:13:12.256667   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.256682   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:12.256688   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:12.256740   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:12.298606   62670 cri.go:89] found id: ""
	I0704 00:13:12.298631   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.298643   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:12.298650   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:12.298730   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:12.338152   62670 cri.go:89] found id: ""
	I0704 00:13:12.338180   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.338192   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:12.338199   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:12.338260   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:12.377003   62670 cri.go:89] found id: ""
	I0704 00:13:12.377029   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.377040   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:12.377046   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:12.377095   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:12.412239   62670 cri.go:89] found id: ""
	I0704 00:13:12.412268   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.412278   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:12.412285   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:12.412361   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:12.451054   62670 cri.go:89] found id: ""
	I0704 00:13:12.451079   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.451086   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:12.451094   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:12.451111   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:12.506178   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:12.506216   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:12.520563   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:12.520594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:12.594417   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:12.594439   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:12.594455   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:12.671131   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:12.671179   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:10.323629   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:12.823056   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:11.161399   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:13.162943   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:15.661962   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:13.749962   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:16.247931   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:15.225840   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:15.239346   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:15.239420   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:15.276618   62670 cri.go:89] found id: ""
	I0704 00:13:15.276649   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.276661   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:15.276668   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:15.276751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:15.312585   62670 cri.go:89] found id: ""
	I0704 00:13:15.312615   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.312625   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:15.312632   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:15.312693   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:15.351354   62670 cri.go:89] found id: ""
	I0704 00:13:15.351382   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.351392   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:15.351399   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:15.351457   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:15.388660   62670 cri.go:89] found id: ""
	I0704 00:13:15.388690   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.388701   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:15.388708   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:15.388769   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:15.427524   62670 cri.go:89] found id: ""
	I0704 00:13:15.427553   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.427564   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:15.427572   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:15.427636   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:15.463703   62670 cri.go:89] found id: ""
	I0704 00:13:15.463737   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.463752   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:15.463761   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:15.463825   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:15.498640   62670 cri.go:89] found id: ""
	I0704 00:13:15.498664   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.498672   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:15.498676   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:15.498727   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:15.534655   62670 cri.go:89] found id: ""
	I0704 00:13:15.534679   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.534690   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:15.534700   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:15.534715   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:15.586051   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:15.586083   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:15.600930   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:15.600958   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:15.670393   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:15.670420   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:15.670435   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:15.749644   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:15.749678   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:15.324591   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:17.822616   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:17.662630   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:20.162230   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:18.746045   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:20.746946   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:18.298689   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:18.312408   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:18.312475   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:18.353509   62670 cri.go:89] found id: ""
	I0704 00:13:18.353538   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.353549   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:18.353557   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:18.353642   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:18.394463   62670 cri.go:89] found id: ""
	I0704 00:13:18.394486   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.394493   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:18.394498   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:18.394550   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:18.433254   62670 cri.go:89] found id: ""
	I0704 00:13:18.433288   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.433297   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:18.433303   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:18.433350   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:18.473369   62670 cri.go:89] found id: ""
	I0704 00:13:18.473395   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.473404   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:18.473414   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:18.473464   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:18.513401   62670 cri.go:89] found id: ""
	I0704 00:13:18.513436   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.513444   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:18.513450   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:18.513499   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:18.552462   62670 cri.go:89] found id: ""
	I0704 00:13:18.552493   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.552502   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:18.552511   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:18.552569   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:18.591368   62670 cri.go:89] found id: ""
	I0704 00:13:18.591389   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.591398   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:18.591406   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:18.591471   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:18.630381   62670 cri.go:89] found id: ""
	I0704 00:13:18.630413   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.630424   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:18.630435   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:18.630451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:18.684868   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:18.684902   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:18.700897   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:18.700921   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:18.794507   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:18.794524   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:18.794535   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:18.879415   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:18.879457   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:21.429432   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:21.443906   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:21.443978   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:21.482487   62670 cri.go:89] found id: ""
	I0704 00:13:21.482516   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.482528   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:21.482535   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:21.482583   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:21.519170   62670 cri.go:89] found id: ""
	I0704 00:13:21.519206   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.519214   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:21.519219   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:21.519265   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:21.558340   62670 cri.go:89] found id: ""
	I0704 00:13:21.558367   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.558390   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:21.558397   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:21.558465   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:21.595347   62670 cri.go:89] found id: ""
	I0704 00:13:21.595372   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.595382   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:21.595390   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:21.595464   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:21.634524   62670 cri.go:89] found id: ""
	I0704 00:13:21.634547   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.634555   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:21.634560   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:21.634622   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:21.672529   62670 cri.go:89] found id: ""
	I0704 00:13:21.672558   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.672566   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:21.672571   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:21.672617   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:21.711114   62670 cri.go:89] found id: ""
	I0704 00:13:21.711145   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.711156   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:21.711163   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:21.711248   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:21.747087   62670 cri.go:89] found id: ""
	I0704 00:13:21.747126   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.747135   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:21.747145   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:21.747162   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:21.832897   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:21.832919   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:21.832935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:21.915969   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:21.916008   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:21.957922   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:21.957950   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:22.009881   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:22.009925   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:19.823109   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:22.322313   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:22.163190   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:24.664612   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:22.747918   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:25.245707   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:24.526106   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:24.548431   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:24.548493   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:24.582887   62670 cri.go:89] found id: ""
	I0704 00:13:24.582925   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.582935   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:24.582940   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:24.582992   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:24.621339   62670 cri.go:89] found id: ""
	I0704 00:13:24.621365   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.621375   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:24.621380   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:24.621433   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:24.658124   62670 cri.go:89] found id: ""
	I0704 00:13:24.658152   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.658163   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:24.658170   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:24.658239   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:24.697509   62670 cri.go:89] found id: ""
	I0704 00:13:24.697539   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.697546   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:24.697552   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:24.697599   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:24.734523   62670 cri.go:89] found id: ""
	I0704 00:13:24.734547   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.734554   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:24.734560   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:24.734608   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:24.773351   62670 cri.go:89] found id: ""
	I0704 00:13:24.773375   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.773383   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:24.773389   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:24.773439   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:24.810855   62670 cri.go:89] found id: ""
	I0704 00:13:24.810888   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.810898   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:24.810905   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:24.810962   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:24.849989   62670 cri.go:89] found id: ""
	I0704 00:13:24.850017   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.850027   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:24.850039   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:24.850053   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:24.904308   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:24.904344   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:24.920143   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:24.920234   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:24.995138   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:24.995163   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:24.995177   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:25.070407   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:25.070449   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:27.611749   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:27.625292   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:27.625349   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:27.663239   62670 cri.go:89] found id: ""
	I0704 00:13:27.663263   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.663274   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:27.663281   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:27.663337   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:27.704354   62670 cri.go:89] found id: ""
	I0704 00:13:27.704378   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.704392   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:27.704399   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:27.704473   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:27.742585   62670 cri.go:89] found id: ""
	I0704 00:13:27.742619   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.742630   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:27.742637   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:27.742695   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:27.791650   62670 cri.go:89] found id: ""
	I0704 00:13:27.791678   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.791686   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:27.791691   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:27.791751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:27.832724   62670 cri.go:89] found id: ""
	I0704 00:13:27.832757   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.832770   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:27.832778   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:27.832865   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:27.875054   62670 cri.go:89] found id: ""
	I0704 00:13:27.875081   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.875089   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:27.875095   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:27.875142   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:27.909819   62670 cri.go:89] found id: ""
	I0704 00:13:27.909844   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.909851   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:27.909856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:27.909903   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:27.944882   62670 cri.go:89] found id: ""
	I0704 00:13:27.944907   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.944916   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:27.944923   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:27.944936   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:28.004233   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:28.004271   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:28.020800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:28.020834   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:28.096186   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:28.096213   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:28.096231   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:28.178611   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:28.178648   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:24.322656   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:26.323972   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:28.821944   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:27.161806   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:29.661580   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:27.748284   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:30.246840   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:30.729354   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:30.744298   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:30.744361   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:30.783053   62670 cri.go:89] found id: ""
	I0704 00:13:30.783081   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.783089   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:30.783095   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:30.783151   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:30.820728   62670 cri.go:89] found id: ""
	I0704 00:13:30.820756   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.820765   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:30.820770   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:30.820834   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:30.858188   62670 cri.go:89] found id: ""
	I0704 00:13:30.858221   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.858234   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:30.858242   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:30.858307   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:30.899024   62670 cri.go:89] found id: ""
	I0704 00:13:30.899049   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.899056   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:30.899062   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:30.899109   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:30.942431   62670 cri.go:89] found id: ""
	I0704 00:13:30.942461   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.942471   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:30.942479   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:30.942534   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:30.995371   62670 cri.go:89] found id: ""
	I0704 00:13:30.995402   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.995417   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:30.995425   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:30.995486   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:31.043485   62670 cri.go:89] found id: ""
	I0704 00:13:31.043516   62670 logs.go:276] 0 containers: []
	W0704 00:13:31.043524   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:31.043529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:31.043576   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:31.082408   62670 cri.go:89] found id: ""
	I0704 00:13:31.082440   62670 logs.go:276] 0 containers: []
	W0704 00:13:31.082451   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:31.082463   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:31.082477   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:31.096800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:31.096824   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:31.169116   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:31.169142   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:31.169168   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:31.250199   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:31.250230   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:31.293706   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:31.293737   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:30.822968   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:33.322607   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:31.661811   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:33.661872   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:35.662906   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:32.746786   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:35.246989   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:33.845361   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:33.859495   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:33.859586   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:33.900578   62670 cri.go:89] found id: ""
	I0704 00:13:33.900608   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.900616   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:33.900621   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:33.900668   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:33.934659   62670 cri.go:89] found id: ""
	I0704 00:13:33.934681   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.934688   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:33.934699   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:33.934745   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:33.977141   62670 cri.go:89] found id: ""
	I0704 00:13:33.977166   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.977174   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:33.977179   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:33.977230   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:34.013515   62670 cri.go:89] found id: ""
	I0704 00:13:34.013540   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.013548   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:34.013553   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:34.013600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:34.059663   62670 cri.go:89] found id: ""
	I0704 00:13:34.059690   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.059698   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:34.059703   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:34.059765   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:34.094002   62670 cri.go:89] found id: ""
	I0704 00:13:34.094030   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.094038   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:34.094044   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:34.094090   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:34.130278   62670 cri.go:89] found id: ""
	I0704 00:13:34.130310   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.130322   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:34.130330   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:34.130401   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:34.173531   62670 cri.go:89] found id: ""
	I0704 00:13:34.173557   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.173563   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:34.173570   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:34.173582   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:34.229273   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:34.229334   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:34.247043   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:34.247073   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:34.322892   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:34.322920   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:34.322935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:34.409230   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:34.409271   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:36.950627   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:36.969997   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:36.970063   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:37.027934   62670 cri.go:89] found id: ""
	I0704 00:13:37.027964   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.027975   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:37.027982   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:37.028069   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:37.067668   62670 cri.go:89] found id: ""
	I0704 00:13:37.067696   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.067706   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:37.067713   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:37.067774   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:37.104762   62670 cri.go:89] found id: ""
	I0704 00:13:37.104798   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.104806   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:37.104812   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:37.104882   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:37.143887   62670 cri.go:89] found id: ""
	I0704 00:13:37.143913   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.143921   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:37.143936   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:37.143999   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:37.182605   62670 cri.go:89] found id: ""
	I0704 00:13:37.182629   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.182636   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:37.182641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:37.182697   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:37.219884   62670 cri.go:89] found id: ""
	I0704 00:13:37.219914   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.219924   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:37.219931   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:37.219996   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:37.259122   62670 cri.go:89] found id: ""
	I0704 00:13:37.259146   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.259154   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:37.259159   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:37.259205   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:37.296218   62670 cri.go:89] found id: ""
	I0704 00:13:37.296255   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.296262   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:37.296270   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:37.296282   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:37.349495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:37.349540   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:37.364224   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:37.364255   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:37.437604   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:37.437627   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:37.437644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:37.524096   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:37.524150   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:35.823323   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:38.323653   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:38.164076   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:40.662318   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:37.745470   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:39.746119   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:41.747887   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:40.067394   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:40.081728   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:40.081787   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:40.119102   62670 cri.go:89] found id: ""
	I0704 00:13:40.119129   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.119137   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:40.119142   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:40.119195   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:40.161432   62670 cri.go:89] found id: ""
	I0704 00:13:40.161468   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.161477   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:40.161483   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:40.161542   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:40.196487   62670 cri.go:89] found id: ""
	I0704 00:13:40.196526   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.196534   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:40.196540   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:40.196591   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:40.232218   62670 cri.go:89] found id: ""
	I0704 00:13:40.232245   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.232253   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:40.232259   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:40.232306   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:40.272962   62670 cri.go:89] found id: ""
	I0704 00:13:40.272995   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.273007   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:40.273016   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:40.273079   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:40.311622   62670 cri.go:89] found id: ""
	I0704 00:13:40.311651   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.311662   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:40.311671   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:40.311737   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:40.353486   62670 cri.go:89] found id: ""
	I0704 00:13:40.353516   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.353524   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:40.353529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:40.353576   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:40.391269   62670 cri.go:89] found id: ""
	I0704 00:13:40.391299   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.391308   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:40.391318   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:40.391330   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:40.445011   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:40.445048   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:40.458982   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:40.459010   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:40.533102   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:40.533127   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:40.533140   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:40.618189   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:40.618228   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:43.162352   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:43.177336   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:43.177419   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:43.221099   62670 cri.go:89] found id: ""
	I0704 00:13:43.221127   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.221137   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:43.221144   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:43.221211   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:40.324554   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:42.822608   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:42.662723   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:45.162037   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:44.245991   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:46.746635   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:43.268528   62670 cri.go:89] found id: ""
	I0704 00:13:43.268557   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.268568   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:43.268575   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:43.268638   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:43.304343   62670 cri.go:89] found id: ""
	I0704 00:13:43.304373   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.304384   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:43.304391   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:43.304459   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:43.346128   62670 cri.go:89] found id: ""
	I0704 00:13:43.346163   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.346179   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:43.346187   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:43.346251   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:43.392622   62670 cri.go:89] found id: ""
	I0704 00:13:43.392652   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.392662   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:43.392673   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:43.392764   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:43.438725   62670 cri.go:89] found id: ""
	I0704 00:13:43.438751   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.438760   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:43.438766   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:43.438812   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:43.480356   62670 cri.go:89] found id: ""
	I0704 00:13:43.480378   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.480386   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:43.480391   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:43.480441   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:43.516551   62670 cri.go:89] found id: ""
	I0704 00:13:43.516576   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.516583   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:43.516591   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:43.516606   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:43.567568   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:43.567604   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:43.583140   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:43.583173   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:43.658841   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:43.658870   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:43.658885   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:43.737379   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:43.737419   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:46.281048   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:46.295088   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:46.295158   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:46.333107   62670 cri.go:89] found id: ""
	I0704 00:13:46.333135   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.333168   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:46.333177   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:46.333263   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:46.376375   62670 cri.go:89] found id: ""
	I0704 00:13:46.376405   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.376415   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:46.376423   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:46.376486   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:46.410809   62670 cri.go:89] found id: ""
	I0704 00:13:46.410838   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.410848   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:46.410855   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:46.410911   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:46.453114   62670 cri.go:89] found id: ""
	I0704 00:13:46.453143   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.453156   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:46.453164   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:46.453229   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:46.491218   62670 cri.go:89] found id: ""
	I0704 00:13:46.491246   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.491255   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:46.491261   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:46.491320   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:46.528669   62670 cri.go:89] found id: ""
	I0704 00:13:46.528695   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.528706   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:46.528713   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:46.528777   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:46.564289   62670 cri.go:89] found id: ""
	I0704 00:13:46.564317   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.564327   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:46.564333   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:46.564384   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:46.600821   62670 cri.go:89] found id: ""
	I0704 00:13:46.600854   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.600864   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:46.600875   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:46.600888   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:46.653816   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:46.653850   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:46.668899   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:46.668927   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:46.751414   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:46.751434   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:46.751455   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:46.831455   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:46.831489   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:44.823478   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:47.323726   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:47.663375   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:50.162358   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:49.245272   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:51.745945   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:49.378856   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:49.393930   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:49.393988   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:49.435332   62670 cri.go:89] found id: ""
	I0704 00:13:49.435355   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.435362   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:49.435368   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:49.435415   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:49.476780   62670 cri.go:89] found id: ""
	I0704 00:13:49.476807   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.476815   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:49.476820   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:49.476868   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:49.519347   62670 cri.go:89] found id: ""
	I0704 00:13:49.519379   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.519389   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:49.519396   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:49.519522   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:49.557125   62670 cri.go:89] found id: ""
	I0704 00:13:49.557150   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.557159   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:49.557166   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:49.557225   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:49.592843   62670 cri.go:89] found id: ""
	I0704 00:13:49.592883   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.592894   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:49.592901   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:49.592966   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:49.629542   62670 cri.go:89] found id: ""
	I0704 00:13:49.629565   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.629572   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:49.629578   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:49.629630   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:49.667805   62670 cri.go:89] found id: ""
	I0704 00:13:49.667833   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.667844   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:49.667851   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:49.667928   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:49.704446   62670 cri.go:89] found id: ""
	I0704 00:13:49.704472   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.704480   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:49.704494   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:49.704506   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:49.718379   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:49.718403   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:49.791293   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:49.791314   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:49.791329   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:49.870370   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:49.870408   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:49.910508   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:49.910545   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:52.463614   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:52.478642   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:52.478714   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:52.519490   62670 cri.go:89] found id: ""
	I0704 00:13:52.519519   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.519529   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:52.519535   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:52.519686   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:52.561591   62670 cri.go:89] found id: ""
	I0704 00:13:52.561622   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.561632   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:52.561639   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:52.561713   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:52.599169   62670 cri.go:89] found id: ""
	I0704 00:13:52.599196   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.599206   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:52.599212   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:52.599270   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:52.636778   62670 cri.go:89] found id: ""
	I0704 00:13:52.636811   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.636821   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:52.636828   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:52.636893   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:52.675929   62670 cri.go:89] found id: ""
	I0704 00:13:52.675965   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.675977   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:52.675985   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:52.676081   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:52.713425   62670 cri.go:89] found id: ""
	I0704 00:13:52.713455   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.713466   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:52.713474   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:52.713541   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:52.750242   62670 cri.go:89] found id: ""
	I0704 00:13:52.750267   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.750278   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:52.750286   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:52.750342   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:52.793247   62670 cri.go:89] found id: ""
	I0704 00:13:52.793277   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.793288   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:52.793298   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:52.793315   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:52.807818   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:52.807970   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:52.886856   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:52.886883   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:52.886903   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:52.973510   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:52.973551   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:53.021185   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:53.021213   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:49.825304   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:52.322850   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:52.662484   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:54.662645   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:54.246942   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:56.745800   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:55.576364   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:55.590796   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:55.590858   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:55.628753   62670 cri.go:89] found id: ""
	I0704 00:13:55.628783   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.628793   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:55.628809   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:55.628870   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:55.667344   62670 cri.go:89] found id: ""
	I0704 00:13:55.667398   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.667411   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:55.667426   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:55.667496   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:55.705826   62670 cri.go:89] found id: ""
	I0704 00:13:55.705859   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.705870   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:55.705878   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:55.705942   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:55.743204   62670 cri.go:89] found id: ""
	I0704 00:13:55.743231   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.743238   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:55.743244   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:55.743304   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:55.784945   62670 cri.go:89] found id: ""
	I0704 00:13:55.784978   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.784987   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:55.784993   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:55.785044   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:55.825266   62670 cri.go:89] found id: ""
	I0704 00:13:55.825293   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.825304   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:55.825322   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:55.825385   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:55.862235   62670 cri.go:89] found id: ""
	I0704 00:13:55.862269   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.862276   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:55.862282   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:55.862337   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:55.901698   62670 cri.go:89] found id: ""
	I0704 00:13:55.901726   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.901736   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:55.901747   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:55.901762   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:55.955322   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:55.955361   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:55.973650   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:55.973689   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:56.049600   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:56.049624   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:56.049640   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:56.133690   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:56.133731   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:54.323716   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:56.324427   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:58.823837   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:56.663246   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:59.161652   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:59.249339   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:01.747759   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:58.678014   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:58.692780   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:58.692846   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:58.730628   62670 cri.go:89] found id: ""
	I0704 00:13:58.730654   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.730664   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:58.730671   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:58.730732   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:58.772761   62670 cri.go:89] found id: ""
	I0704 00:13:58.772789   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.772800   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:58.772806   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:58.772871   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:58.809591   62670 cri.go:89] found id: ""
	I0704 00:13:58.809623   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.809637   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:58.809644   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:58.809708   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:58.848596   62670 cri.go:89] found id: ""
	I0704 00:13:58.848627   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.848638   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:58.848646   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:58.848705   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:58.888285   62670 cri.go:89] found id: ""
	I0704 00:13:58.888311   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.888318   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:58.888323   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:58.888373   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:58.924042   62670 cri.go:89] found id: ""
	I0704 00:13:58.924065   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.924073   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:58.924079   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:58.924132   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:58.963473   62670 cri.go:89] found id: ""
	I0704 00:13:58.963500   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.963510   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:58.963516   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:58.963581   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:58.998757   62670 cri.go:89] found id: ""
	I0704 00:13:58.998788   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.998798   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:58.998808   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:58.998822   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:59.013844   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:59.013871   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:59.085847   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:59.085869   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:59.085882   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:59.174056   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:59.174087   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:59.219984   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:59.220011   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:01.774436   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:01.790044   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:01.790103   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:01.830337   62670 cri.go:89] found id: ""
	I0704 00:14:01.830366   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.830376   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:01.830383   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:01.830452   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:01.866704   62670 cri.go:89] found id: ""
	I0704 00:14:01.866731   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.866740   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:01.866746   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:01.866796   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:01.906702   62670 cri.go:89] found id: ""
	I0704 00:14:01.906737   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.906748   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:01.906756   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:01.906812   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:01.943348   62670 cri.go:89] found id: ""
	I0704 00:14:01.943381   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.943392   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:01.943400   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:01.943461   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:01.984096   62670 cri.go:89] found id: ""
	I0704 00:14:01.984123   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.984131   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:01.984136   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:01.984182   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:02.021618   62670 cri.go:89] found id: ""
	I0704 00:14:02.021649   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.021659   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:02.021666   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:02.021726   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:02.058976   62670 cri.go:89] found id: ""
	I0704 00:14:02.059000   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.059008   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:02.059013   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:02.059064   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:02.097222   62670 cri.go:89] found id: ""
	I0704 00:14:02.097251   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.097258   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:02.097278   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:02.097302   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:02.183349   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:02.183391   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:02.226898   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:02.226928   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:02.286978   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:02.287016   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:02.301361   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:02.301393   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:02.375663   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:01.322516   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:03.822514   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:01.662003   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:03.665021   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:04.245713   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:06.246308   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:04.876515   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:04.891254   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:04.891324   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:04.931465   62670 cri.go:89] found id: ""
	I0704 00:14:04.931488   62670 logs.go:276] 0 containers: []
	W0704 00:14:04.931496   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:04.931501   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:04.931549   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:04.969027   62670 cri.go:89] found id: ""
	I0704 00:14:04.969055   62670 logs.go:276] 0 containers: []
	W0704 00:14:04.969063   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:04.969068   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:04.969122   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:05.006380   62670 cri.go:89] found id: ""
	I0704 00:14:05.006407   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.006423   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:05.006430   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:05.006494   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:05.043050   62670 cri.go:89] found id: ""
	I0704 00:14:05.043090   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.043105   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:05.043113   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:05.043195   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:05.081549   62670 cri.go:89] found id: ""
	I0704 00:14:05.081575   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.081583   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:05.081588   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:05.081664   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:05.126673   62670 cri.go:89] found id: ""
	I0704 00:14:05.126693   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.126700   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:05.126706   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:05.126751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:05.166832   62670 cri.go:89] found id: ""
	I0704 00:14:05.166856   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.166864   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:05.166872   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:05.166920   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:05.205906   62670 cri.go:89] found id: ""
	I0704 00:14:05.205934   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.205946   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:05.205957   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:05.205973   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:05.260955   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:05.260998   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:05.295937   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:05.295965   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:05.383161   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:05.383188   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:05.383202   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:05.465055   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:05.465100   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:08.007745   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:08.021065   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:08.021134   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:08.061808   62670 cri.go:89] found id: ""
	I0704 00:14:08.061838   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.061848   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:08.061854   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:08.061914   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:08.100542   62670 cri.go:89] found id: ""
	I0704 00:14:08.100573   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.100584   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:08.100592   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:08.100657   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:08.137335   62670 cri.go:89] found id: ""
	I0704 00:14:08.137369   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.137379   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:08.137385   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:08.137455   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:08.177087   62670 cri.go:89] found id: ""
	I0704 00:14:08.177116   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.177124   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:08.177129   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:08.177191   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:08.212652   62670 cri.go:89] found id: ""
	I0704 00:14:08.212686   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.212695   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:08.212701   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:08.212751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:08.247717   62670 cri.go:89] found id: ""
	I0704 00:14:08.247737   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.247745   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:08.247750   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:08.247805   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:05.824730   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.323006   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:06.160967   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.162407   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:10.163649   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.247565   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:10.745585   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.285525   62670 cri.go:89] found id: ""
	I0704 00:14:08.285556   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.285568   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:08.285576   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:08.285637   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:08.325978   62670 cri.go:89] found id: ""
	I0704 00:14:08.326007   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.326017   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:08.326027   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:08.326042   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:08.382407   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:08.382440   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:08.397945   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:08.397979   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:08.468650   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:08.468676   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:08.468691   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:08.543581   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:08.543615   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:11.085683   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:11.102003   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:11.102093   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:11.142561   62670 cri.go:89] found id: ""
	I0704 00:14:11.142589   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.142597   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:11.142602   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:11.142671   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:11.180087   62670 cri.go:89] found id: ""
	I0704 00:14:11.180110   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.180118   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:11.180123   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:11.180202   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:11.220123   62670 cri.go:89] found id: ""
	I0704 00:14:11.220147   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.220173   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:11.220182   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:11.220239   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:11.260418   62670 cri.go:89] found id: ""
	I0704 00:14:11.260445   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.260455   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:11.260462   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:11.260521   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:11.297923   62670 cri.go:89] found id: ""
	I0704 00:14:11.297976   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.297989   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:11.297999   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:11.298083   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:11.335903   62670 cri.go:89] found id: ""
	I0704 00:14:11.335934   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.335945   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:11.335954   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:11.336020   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:11.371965   62670 cri.go:89] found id: ""
	I0704 00:14:11.371997   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.372007   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:11.372013   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:11.372075   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:11.409129   62670 cri.go:89] found id: ""
	I0704 00:14:11.409159   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.409170   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:11.409181   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:11.409194   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:11.464994   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:11.465032   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:11.480084   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:11.480112   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:11.564533   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:11.564560   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:11.564574   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:11.645033   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:11.645068   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:10.323124   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:12.323251   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:12.663774   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:15.161542   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:12.746307   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:15.246158   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:14.195211   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:14.209606   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:14.209660   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:14.252041   62670 cri.go:89] found id: ""
	I0704 00:14:14.252066   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.252081   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:14.252089   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:14.252149   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:14.290619   62670 cri.go:89] found id: ""
	I0704 00:14:14.290647   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.290655   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:14.290660   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:14.290717   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:14.328731   62670 cri.go:89] found id: ""
	I0704 00:14:14.328762   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.328773   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:14.328780   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:14.328842   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:14.370794   62670 cri.go:89] found id: ""
	I0704 00:14:14.370825   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.370835   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:14.370842   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:14.370904   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:14.406474   62670 cri.go:89] found id: ""
	I0704 00:14:14.406505   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.406516   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:14.406523   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:14.406582   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:14.442515   62670 cri.go:89] found id: ""
	I0704 00:14:14.442547   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.442558   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:14.442566   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:14.442624   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:14.480798   62670 cri.go:89] found id: ""
	I0704 00:14:14.480827   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.480838   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:14.480844   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:14.480904   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:14.518187   62670 cri.go:89] found id: ""
	I0704 00:14:14.518210   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.518217   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:14.518225   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:14.518236   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:14.572028   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:14.572060   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:14.586614   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:14.586641   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:14.659339   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:14.659362   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:14.659375   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:14.743802   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:14.743839   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:17.288666   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:17.304531   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:17.304600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:17.348705   62670 cri.go:89] found id: ""
	I0704 00:14:17.348730   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.348738   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:17.348749   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:17.348798   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:17.387821   62670 cri.go:89] found id: ""
	I0704 00:14:17.387844   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.387852   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:17.387858   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:17.387934   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:17.425442   62670 cri.go:89] found id: ""
	I0704 00:14:17.425470   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.425480   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:17.425487   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:17.425545   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:17.471216   62670 cri.go:89] found id: ""
	I0704 00:14:17.471243   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.471255   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:17.471262   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:17.471321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:17.520905   62670 cri.go:89] found id: ""
	I0704 00:14:17.520935   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.520942   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:17.520947   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:17.520997   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:17.577627   62670 cri.go:89] found id: ""
	I0704 00:14:17.577648   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.577655   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:17.577661   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:17.577715   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:17.619018   62670 cri.go:89] found id: ""
	I0704 00:14:17.619046   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.619054   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:17.619061   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:17.619124   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:17.664993   62670 cri.go:89] found id: ""
	I0704 00:14:17.665020   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.665029   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:17.665037   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:17.665049   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:17.743823   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:17.743845   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:17.743857   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:17.821339   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:17.821371   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:17.866189   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:17.866226   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:17.919854   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:17.919903   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:14.823677   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:16.825187   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:17.662772   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:20.161988   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:17.748067   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:20.245022   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:22.246620   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:20.435448   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:20.450556   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:20.450617   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:20.491980   62670 cri.go:89] found id: ""
	I0704 00:14:20.492010   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.492018   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:20.492023   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:20.492071   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:20.532791   62670 cri.go:89] found id: ""
	I0704 00:14:20.532820   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.532829   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:20.532836   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:20.532892   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:20.569604   62670 cri.go:89] found id: ""
	I0704 00:14:20.569628   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.569635   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:20.569641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:20.569688   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:20.610852   62670 cri.go:89] found id: ""
	I0704 00:14:20.610879   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.610887   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:20.610893   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:20.610950   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:20.648891   62670 cri.go:89] found id: ""
	I0704 00:14:20.648912   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.648920   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:20.648925   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:20.648984   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:20.690273   62670 cri.go:89] found id: ""
	I0704 00:14:20.690304   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.690315   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:20.690323   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:20.690381   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:20.725365   62670 cri.go:89] found id: ""
	I0704 00:14:20.725390   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.725398   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:20.725403   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:20.725478   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:20.768530   62670 cri.go:89] found id: ""
	I0704 00:14:20.768559   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.768569   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:20.768579   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:20.768593   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:20.822896   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:20.822932   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:20.838881   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:20.838912   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:20.921516   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:20.921546   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:20.921560   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:20.999517   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:20.999553   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:19.324790   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:21.822737   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:23.823039   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:22.162348   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:24.162631   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:24.745842   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:27.245280   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:23.545947   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:23.560315   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:23.560397   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:23.602540   62670 cri.go:89] found id: ""
	I0704 00:14:23.602583   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.602596   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:23.602604   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:23.602664   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:23.639529   62670 cri.go:89] found id: ""
	I0704 00:14:23.639560   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.639571   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:23.639579   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:23.639644   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:23.687334   62670 cri.go:89] found id: ""
	I0704 00:14:23.687363   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.687374   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:23.687381   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:23.687450   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:23.728388   62670 cri.go:89] found id: ""
	I0704 00:14:23.728419   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.728427   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:23.728434   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:23.728484   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:23.769903   62670 cri.go:89] found id: ""
	I0704 00:14:23.769933   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.769944   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:23.769956   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:23.770016   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:23.810485   62670 cri.go:89] found id: ""
	I0704 00:14:23.810518   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.810529   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:23.810536   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:23.810621   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:23.854534   62670 cri.go:89] found id: ""
	I0704 00:14:23.854571   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.854582   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:23.854589   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:23.854647   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:23.892229   62670 cri.go:89] found id: ""
	I0704 00:14:23.892257   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.892266   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:23.892278   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:23.892291   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:23.944758   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:23.944793   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:23.959115   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:23.959152   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:24.035480   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:24.035501   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:24.035513   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:24.113401   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:24.113447   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:26.655506   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:26.669883   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:26.669964   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:26.705899   62670 cri.go:89] found id: ""
	I0704 00:14:26.705926   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.705934   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:26.705940   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:26.705997   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:26.742991   62670 cri.go:89] found id: ""
	I0704 00:14:26.743016   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.743025   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:26.743031   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:26.743090   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:26.781650   62670 cri.go:89] found id: ""
	I0704 00:14:26.781678   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.781693   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:26.781700   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:26.781760   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:26.816879   62670 cri.go:89] found id: ""
	I0704 00:14:26.816902   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.816909   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:26.816914   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:26.816957   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:26.854271   62670 cri.go:89] found id: ""
	I0704 00:14:26.854301   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.854316   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:26.854324   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:26.854384   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:26.892771   62670 cri.go:89] found id: ""
	I0704 00:14:26.892802   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.892813   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:26.892821   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:26.892880   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:26.931820   62670 cri.go:89] found id: ""
	I0704 00:14:26.931849   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.931859   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:26.931865   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:26.931947   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:26.967633   62670 cri.go:89] found id: ""
	I0704 00:14:26.967659   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.967669   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:26.967679   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:26.967700   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:26.983916   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:26.983951   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:27.063412   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:27.063436   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:27.063451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:27.147005   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:27.147044   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:27.189732   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:27.189759   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:25.824267   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:27.826810   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:26.662688   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:28.663384   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:29.248447   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:31.745919   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:29.747294   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:29.762194   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:29.762272   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:29.799103   62670 cri.go:89] found id: ""
	I0704 00:14:29.799132   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.799142   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:29.799149   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:29.799215   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:29.843373   62670 cri.go:89] found id: ""
	I0704 00:14:29.843399   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.843407   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:29.843412   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:29.843474   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:29.880622   62670 cri.go:89] found id: ""
	I0704 00:14:29.880650   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.880660   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:29.880667   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:29.880724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:29.917560   62670 cri.go:89] found id: ""
	I0704 00:14:29.917590   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.917599   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:29.917605   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:29.917656   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:29.954983   62670 cri.go:89] found id: ""
	I0704 00:14:29.955006   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.955013   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:29.955018   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:29.955068   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:29.991784   62670 cri.go:89] found id: ""
	I0704 00:14:29.991811   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.991819   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:29.991824   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:29.991870   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:30.031174   62670 cri.go:89] found id: ""
	I0704 00:14:30.031203   62670 logs.go:276] 0 containers: []
	W0704 00:14:30.031210   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:30.031218   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:30.031268   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:30.069502   62670 cri.go:89] found id: ""
	I0704 00:14:30.069533   62670 logs.go:276] 0 containers: []
	W0704 00:14:30.069542   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:30.069552   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:30.069567   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:30.111185   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:30.111213   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:30.167419   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:30.167456   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:30.181876   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:30.181908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:30.255378   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:30.255407   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:30.255426   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:32.837655   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:32.853085   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:32.853150   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:32.898490   62670 cri.go:89] found id: ""
	I0704 00:14:32.898520   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.898531   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:32.898540   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:32.898626   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:32.946293   62670 cri.go:89] found id: ""
	I0704 00:14:32.946326   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.946336   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:32.946343   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:32.946402   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:32.983499   62670 cri.go:89] found id: ""
	I0704 00:14:32.983529   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.983540   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:32.983548   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:32.983610   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:33.022340   62670 cri.go:89] found id: ""
	I0704 00:14:33.022362   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.022370   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:33.022375   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:33.022420   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:33.066921   62670 cri.go:89] found id: ""
	I0704 00:14:33.066946   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.066956   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:33.066963   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:33.067024   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:33.116317   62670 cri.go:89] found id: ""
	I0704 00:14:33.116340   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.116348   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:33.116354   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:33.116416   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:33.153301   62670 cri.go:89] found id: ""
	I0704 00:14:33.153332   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.153343   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:33.153350   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:33.153411   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:33.190851   62670 cri.go:89] found id: ""
	I0704 00:14:33.190884   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.190896   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:33.190905   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:33.190917   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:33.248253   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:33.248288   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:30.323119   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:32.823348   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:31.161811   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:33.662270   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:34.246812   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:36.246992   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:33.263593   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:33.263620   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:33.339975   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:33.340000   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:33.340018   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:33.423768   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:33.423814   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:35.969547   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:35.984139   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:35.984219   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:36.028221   62670 cri.go:89] found id: ""
	I0704 00:14:36.028251   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.028263   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:36.028270   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:36.028330   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:36.067331   62670 cri.go:89] found id: ""
	I0704 00:14:36.067362   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.067370   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:36.067375   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:36.067437   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:36.105498   62670 cri.go:89] found id: ""
	I0704 00:14:36.105531   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.105543   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:36.105552   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:36.105618   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:36.144536   62670 cri.go:89] found id: ""
	I0704 00:14:36.144565   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.144576   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:36.144584   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:36.144652   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:36.184010   62670 cri.go:89] found id: ""
	I0704 00:14:36.184035   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.184048   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:36.184053   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:36.184099   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:36.221730   62670 cri.go:89] found id: ""
	I0704 00:14:36.221781   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.221790   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:36.221795   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:36.221843   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:36.261907   62670 cri.go:89] found id: ""
	I0704 00:14:36.261940   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.261952   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:36.261959   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:36.262009   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:36.296878   62670 cri.go:89] found id: ""
	I0704 00:14:36.296899   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.296906   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:36.296915   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:36.296926   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:36.350226   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:36.350265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:36.364632   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:36.364663   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:36.446351   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:36.446382   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:36.446400   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:36.535752   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:36.535802   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:35.322895   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:37.323357   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:36.166275   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:38.662345   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:38.745454   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:41.247351   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:39.079686   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:39.094225   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:39.094291   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:39.139521   62670 cri.go:89] found id: ""
	I0704 00:14:39.139551   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.139563   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:39.139572   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:39.139637   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:39.182411   62670 cri.go:89] found id: ""
	I0704 00:14:39.182439   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.182447   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:39.182453   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:39.182505   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:39.224135   62670 cri.go:89] found id: ""
	I0704 00:14:39.224158   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.224170   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:39.224175   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:39.224237   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:39.264800   62670 cri.go:89] found id: ""
	I0704 00:14:39.264829   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.264839   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:39.264847   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:39.264910   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:39.309072   62670 cri.go:89] found id: ""
	I0704 00:14:39.309102   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.309113   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:39.309121   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:39.309181   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:39.349790   62670 cri.go:89] found id: ""
	I0704 00:14:39.349818   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.349828   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:39.349835   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:39.349895   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:39.387062   62670 cri.go:89] found id: ""
	I0704 00:14:39.387093   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.387105   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:39.387112   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:39.387164   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:39.427503   62670 cri.go:89] found id: ""
	I0704 00:14:39.427530   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.427538   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:39.427546   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:39.427558   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:39.442049   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:39.442076   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:39.525799   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:39.525824   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:39.525840   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:39.602646   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:39.602679   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:39.645739   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:39.645772   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:42.201986   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:42.216166   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:42.216236   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:42.253124   62670 cri.go:89] found id: ""
	I0704 00:14:42.253152   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.253167   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:42.253174   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:42.253231   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:42.293398   62670 cri.go:89] found id: ""
	I0704 00:14:42.293422   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.293430   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:42.293436   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:42.293488   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:42.334382   62670 cri.go:89] found id: ""
	I0704 00:14:42.334412   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.334423   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:42.334430   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:42.334488   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:42.374792   62670 cri.go:89] found id: ""
	I0704 00:14:42.374820   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.374832   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:42.374838   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:42.374889   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:42.416220   62670 cri.go:89] found id: ""
	I0704 00:14:42.416251   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.416263   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:42.416271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:42.416331   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:42.462923   62670 cri.go:89] found id: ""
	I0704 00:14:42.462955   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.462966   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:42.462974   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:42.463043   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:42.503410   62670 cri.go:89] found id: ""
	I0704 00:14:42.503442   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.503452   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:42.503460   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:42.503528   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:42.542599   62670 cri.go:89] found id: ""
	I0704 00:14:42.542623   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.542632   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:42.542639   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:42.542652   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:42.622303   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:42.622328   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:42.622347   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:42.703629   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:42.703666   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:42.747762   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:42.747793   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:42.803506   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:42.803549   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:39.826275   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:42.323764   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:41.163336   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:43.662061   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:45.664452   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:43.745575   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:46.250310   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:45.320238   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:45.334630   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:45.334692   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:45.376760   62670 cri.go:89] found id: ""
	I0704 00:14:45.376785   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.376793   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:45.376797   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:45.376882   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:45.414165   62670 cri.go:89] found id: ""
	I0704 00:14:45.414197   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.414208   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:45.414216   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:45.414278   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:45.451469   62670 cri.go:89] found id: ""
	I0704 00:14:45.451496   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.451504   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:45.451509   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:45.451558   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:45.487994   62670 cri.go:89] found id: ""
	I0704 00:14:45.488025   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.488037   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:45.488051   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:45.488110   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:45.529430   62670 cri.go:89] found id: ""
	I0704 00:14:45.529455   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.529463   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:45.529469   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:45.529520   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:45.571848   62670 cri.go:89] found id: ""
	I0704 00:14:45.571897   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.571909   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:45.571921   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:45.571994   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:45.607804   62670 cri.go:89] found id: ""
	I0704 00:14:45.607828   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.607835   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:45.607840   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:45.607908   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:45.644183   62670 cri.go:89] found id: ""
	I0704 00:14:45.644211   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.644219   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:45.644227   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:45.644240   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:45.727677   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:45.727717   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:45.767528   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:45.767554   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:45.835243   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:45.835285   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:45.849921   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:45.849957   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:45.928404   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:44.823177   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:46.821947   62327 pod_ready.go:81] duration metric: took 4m0.006234793s for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	E0704 00:14:46.821973   62327 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0704 00:14:46.821981   62327 pod_ready.go:38] duration metric: took 4m4.549820824s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:14:46.821996   62327 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:14:46.822029   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:46.822072   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:46.884166   62327 cri.go:89] found id: "2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:46.884208   62327 cri.go:89] found id: ""
	I0704 00:14:46.884217   62327 logs.go:276] 1 containers: [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d]
	I0704 00:14:46.884293   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:46.889964   62327 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:46.890048   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:46.929569   62327 cri.go:89] found id: "e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:46.929601   62327 cri.go:89] found id: ""
	I0704 00:14:46.929609   62327 logs.go:276] 1 containers: [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864]
	I0704 00:14:46.929653   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:46.934896   62327 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:46.934969   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:46.975093   62327 cri.go:89] found id: "ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:46.975116   62327 cri.go:89] found id: ""
	I0704 00:14:46.975125   62327 logs.go:276] 1 containers: [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3]
	I0704 00:14:46.975180   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:46.979604   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:46.979663   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:47.018423   62327 cri.go:89] found id: "bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:47.018442   62327 cri.go:89] found id: ""
	I0704 00:14:47.018449   62327 logs.go:276] 1 containers: [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0]
	I0704 00:14:47.018514   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.022963   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:47.023028   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:47.067573   62327 cri.go:89] found id: "0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:47.067599   62327 cri.go:89] found id: ""
	I0704 00:14:47.067608   62327 logs.go:276] 1 containers: [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78]
	I0704 00:14:47.067657   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.072342   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:47.072426   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:47.111485   62327 cri.go:89] found id: "49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:47.111514   62327 cri.go:89] found id: ""
	I0704 00:14:47.111524   62327 logs.go:276] 1 containers: [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d]
	I0704 00:14:47.111581   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.116173   62327 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:47.116256   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:47.166673   62327 cri.go:89] found id: ""
	I0704 00:14:47.166703   62327 logs.go:276] 0 containers: []
	W0704 00:14:47.166711   62327 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:47.166717   62327 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:14:47.166771   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:14:47.209591   62327 cri.go:89] found id: "5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:47.209626   62327 cri.go:89] found id: "0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:47.209632   62327 cri.go:89] found id: ""
	I0704 00:14:47.209642   62327 logs.go:276] 2 containers: [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085]
	I0704 00:14:47.209699   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.214409   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.218745   62327 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:47.218768   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:47.762248   62327 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:47.762293   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:47.819035   62327 logs.go:123] Gathering logs for kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] ...
	I0704 00:14:47.819077   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:47.874456   62327 logs.go:123] Gathering logs for coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] ...
	I0704 00:14:47.874499   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:47.931685   62327 logs.go:123] Gathering logs for kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] ...
	I0704 00:14:47.931714   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:47.969812   62327 logs.go:123] Gathering logs for kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] ...
	I0704 00:14:47.969842   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:48.023510   62327 logs.go:123] Gathering logs for storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] ...
	I0704 00:14:48.023547   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:48.067970   62327 logs.go:123] Gathering logs for container status ...
	I0704 00:14:48.068001   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:48.121578   62327 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:48.121609   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:48.139510   62327 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:48.139535   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:14:48.264544   62327 logs.go:123] Gathering logs for etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] ...
	I0704 00:14:48.264570   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:48.329270   62327 logs.go:123] Gathering logs for kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] ...
	I0704 00:14:48.329311   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:48.371067   62327 logs.go:123] Gathering logs for storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] ...
	I0704 00:14:48.371097   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:48.162755   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:50.661630   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:48.428750   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:48.442617   62670 kubeadm.go:591] duration metric: took 4m1.823242959s to restartPrimaryControlPlane
	W0704 00:14:48.442701   62670 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0704 00:14:48.442735   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:14:51.574916   62670 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.132142314s)
	I0704 00:14:51.575001   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:14:51.593744   62670 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:14:51.607429   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:14:51.620071   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:14:51.620097   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:14:51.620151   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:14:51.633472   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:14:51.633547   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:14:51.647551   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:14:51.658795   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:14:51.658871   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:14:51.671580   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:14:51.682217   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:14:51.682291   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:14:51.693874   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:14:51.705614   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:14:51.705697   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
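The config check above falls back to a simple per-file pattern: grep each kubeconfig for the expected control-plane endpoint and delete the file when the check fails (here the files are missing entirely, so each grep exits with status 2 and the rm is effectively a no-op). A minimal bash sketch of that pattern, assuming the endpoint and paths shown in the log:

    #!/bin/bash
    # Stale kubeconfig cleanup mirroring the checks logged above; the endpoint and
    # file list are the ones this run uses.
    endpoint="https://control-plane.minikube.internal:8443"
    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      path="/etc/kubernetes/${conf}"
      # grep exits non-zero when the endpoint is absent or the file does not exist;
      # either way the (possibly stale) file is removed so kubeadm can rewrite it.
      if ! sudo grep -q "${endpoint}" "${path}"; then
        sudo rm -f "${path}"
      fi
    done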
	I0704 00:14:51.720386   62670 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:14:51.810530   62670 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0704 00:14:51.810597   62670 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:14:51.968629   62670 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:14:51.968735   62670 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:14:51.968851   62670 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:14:52.188159   62670 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:14:48.745609   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:50.746068   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:52.190231   62670 out.go:204]   - Generating certificates and keys ...
	I0704 00:14:52.192011   62670 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:14:52.192101   62670 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:14:52.192206   62670 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:14:52.192311   62670 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:14:52.192412   62670 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:14:52.192488   62670 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:14:52.192573   62670 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:14:52.192648   62670 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:14:52.192747   62670 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:14:52.193086   62670 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:14:52.193249   62670 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:14:52.193335   62670 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:14:52.325727   62670 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:14:52.485153   62670 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:14:52.676389   62670 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:14:52.990595   62670 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:14:53.007051   62670 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:14:53.008346   62670 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:14:53.008434   62670 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:14:53.160272   62670 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:14:53.162449   62670 out.go:204]   - Booting up control plane ...
	I0704 00:14:53.162586   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:14:53.177983   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:14:53.179996   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:14:53.180911   62670 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:14:53.183085   62670 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0704 00:14:50.909242   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:50.926516   62327 api_server.go:72] duration metric: took 4m15.870455521s to wait for apiserver process to appear ...
	I0704 00:14:50.926548   62327 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:14:50.926594   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:50.926650   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:50.969608   62327 cri.go:89] found id: "2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:50.969636   62327 cri.go:89] found id: ""
	I0704 00:14:50.969646   62327 logs.go:276] 1 containers: [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d]
	I0704 00:14:50.969711   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:50.974011   62327 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:50.974081   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:51.016808   62327 cri.go:89] found id: "e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:51.016842   62327 cri.go:89] found id: ""
	I0704 00:14:51.016858   62327 logs.go:276] 1 containers: [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864]
	I0704 00:14:51.016916   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.021297   62327 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:51.021371   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:51.061674   62327 cri.go:89] found id: "ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:51.061699   62327 cri.go:89] found id: ""
	I0704 00:14:51.061707   62327 logs.go:276] 1 containers: [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3]
	I0704 00:14:51.061761   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.066197   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:51.066256   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:51.108727   62327 cri.go:89] found id: "bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:51.108750   62327 cri.go:89] found id: ""
	I0704 00:14:51.108759   62327 logs.go:276] 1 containers: [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0]
	I0704 00:14:51.108805   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.113366   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:51.113425   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:51.156701   62327 cri.go:89] found id: "0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:51.156728   62327 cri.go:89] found id: ""
	I0704 00:14:51.156738   62327 logs.go:276] 1 containers: [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78]
	I0704 00:14:51.156803   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.162817   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:51.162891   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:51.208586   62327 cri.go:89] found id: "49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:51.208609   62327 cri.go:89] found id: ""
	I0704 00:14:51.208618   62327 logs.go:276] 1 containers: [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d]
	I0704 00:14:51.208678   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.213344   62327 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:51.213418   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:51.258697   62327 cri.go:89] found id: ""
	I0704 00:14:51.258721   62327 logs.go:276] 0 containers: []
	W0704 00:14:51.258728   62327 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:51.258733   62327 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:14:51.258783   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:14:51.301317   62327 cri.go:89] found id: "5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:51.301341   62327 cri.go:89] found id: "0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:51.301347   62327 cri.go:89] found id: ""
	I0704 00:14:51.301355   62327 logs.go:276] 2 containers: [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085]
	I0704 00:14:51.301460   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.306678   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.310993   62327 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:51.311014   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:14:51.433280   62327 logs.go:123] Gathering logs for kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] ...
	I0704 00:14:51.433313   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:51.498289   62327 logs.go:123] Gathering logs for coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] ...
	I0704 00:14:51.498325   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:51.538414   62327 logs.go:123] Gathering logs for kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] ...
	I0704 00:14:51.538449   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:51.580194   62327 logs.go:123] Gathering logs for kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] ...
	I0704 00:14:51.580232   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:51.650010   62327 logs.go:123] Gathering logs for container status ...
	I0704 00:14:51.650055   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:51.710727   62327 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:51.710768   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:51.785923   62327 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:51.785963   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:51.803951   62327 logs.go:123] Gathering logs for storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] ...
	I0704 00:14:51.803982   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:51.873020   62327 logs.go:123] Gathering logs for storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] ...
	I0704 00:14:51.873058   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:51.916694   62327 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:51.916725   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:52.378056   62327 logs.go:123] Gathering logs for etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] ...
	I0704 00:14:52.378103   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:52.436795   62327 logs.go:123] Gathering logs for kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] ...
	I0704 00:14:52.436835   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
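Each "Gathering logs for ..." step above reduces to the same two crictl calls: resolve the container ID for a component, then tail its recent log output. A condensed bash sketch of that loop, assuming crictl can reach the CRI-O runtime as it does in this run:

    #!/bin/bash
    # Tail recent logs for the core components, as the gathering steps above do.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager storage-provisioner; do
      for id in $(sudo crictl ps -a --quiet --name="${name}"); do
        echo "=== ${name} (${id}) ==="
        sudo crictl logs --tail 400 "${id}"
      done
    done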
	I0704 00:14:52.662586   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:55.162992   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:52.746973   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:55.248126   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:54.977972   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:14:54.982697   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0704 00:14:54.983848   62327 api_server.go:141] control plane version: v1.30.2
	I0704 00:14:54.983868   62327 api_server.go:131] duration metric: took 4.057311938s to wait for apiserver health ...
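The healthz wait above is an HTTPS GET against the apiserver's /healthz endpoint, repeated until it returns 200. A rough manual equivalent with curl, using the endpoint from the log and skipping TLS verification as a quick probe only:

    #!/bin/bash
    # Poll the apiserver health endpoint until it answers 200, as the wait above does.
    endpoint="https://192.168.39.213:8443/healthz"
    for _ in $(seq 1 30); do
      if [ "$(curl -sk -o /dev/null -w '%{http_code}' "${endpoint}")" = "200" ]; then
        echo "apiserver healthy"; exit 0
      fi
      sleep 2
    done
    echo "apiserver not healthy after timeout" >&2; exit 1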
	I0704 00:14:54.983887   62327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:14:54.983920   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:54.983972   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:55.022812   62327 cri.go:89] found id: "2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:55.022839   62327 cri.go:89] found id: ""
	I0704 00:14:55.022849   62327 logs.go:276] 1 containers: [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d]
	I0704 00:14:55.022906   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.027419   62327 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:55.027508   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:55.070889   62327 cri.go:89] found id: "e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:55.070914   62327 cri.go:89] found id: ""
	I0704 00:14:55.070924   62327 logs.go:276] 1 containers: [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864]
	I0704 00:14:55.070979   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.075970   62327 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:55.076036   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:55.121555   62327 cri.go:89] found id: "ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:55.121575   62327 cri.go:89] found id: ""
	I0704 00:14:55.121583   62327 logs.go:276] 1 containers: [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3]
	I0704 00:14:55.121627   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.126320   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:55.126378   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:55.168032   62327 cri.go:89] found id: "bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:55.168062   62327 cri.go:89] found id: ""
	I0704 00:14:55.168070   62327 logs.go:276] 1 containers: [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0]
	I0704 00:14:55.168134   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.172992   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:55.173069   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:55.215593   62327 cri.go:89] found id: "0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:55.215614   62327 cri.go:89] found id: ""
	I0704 00:14:55.215621   62327 logs.go:276] 1 containers: [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78]
	I0704 00:14:55.215668   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.220129   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:55.220203   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:55.266429   62327 cri.go:89] found id: "49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:55.266458   62327 cri.go:89] found id: ""
	I0704 00:14:55.266467   62327 logs.go:276] 1 containers: [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d]
	I0704 00:14:55.266525   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.275640   62327 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:55.275706   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:55.316569   62327 cri.go:89] found id: ""
	I0704 00:14:55.316603   62327 logs.go:276] 0 containers: []
	W0704 00:14:55.316615   62327 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:55.316622   62327 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:14:55.316682   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:14:55.354222   62327 cri.go:89] found id: "5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:55.354248   62327 cri.go:89] found id: "0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:55.354252   62327 cri.go:89] found id: ""
	I0704 00:14:55.354259   62327 logs.go:276] 2 containers: [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085]
	I0704 00:14:55.354305   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.359060   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.363522   62327 logs.go:123] Gathering logs for storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] ...
	I0704 00:14:55.363545   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:55.402950   62327 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:55.402975   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:55.826071   62327 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:55.826108   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:55.882804   62327 logs.go:123] Gathering logs for storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] ...
	I0704 00:14:55.882836   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:55.924690   62327 logs.go:123] Gathering logs for kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] ...
	I0704 00:14:55.924726   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:55.981466   62327 logs.go:123] Gathering logs for etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] ...
	I0704 00:14:55.981500   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:56.043846   62327 logs.go:123] Gathering logs for coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] ...
	I0704 00:14:56.043914   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:56.085096   62327 logs.go:123] Gathering logs for kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] ...
	I0704 00:14:56.085122   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:56.127568   62327 logs.go:123] Gathering logs for kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] ...
	I0704 00:14:56.127601   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:56.169457   62327 logs.go:123] Gathering logs for kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] ...
	I0704 00:14:56.169492   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:56.224005   62327 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:56.224039   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:56.240031   62327 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:56.240059   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:14:56.366718   62327 logs.go:123] Gathering logs for container status ...
	I0704 00:14:56.366759   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:58.924300   62327 system_pods.go:59] 8 kube-system pods found
	I0704 00:14:58.924332   62327 system_pods.go:61] "coredns-7db6d8ff4d-2bn7d" [e6d756a8-df4e-414b-b44c-32fb728c6feb] Running
	I0704 00:14:58.924339   62327 system_pods.go:61] "etcd-embed-certs-687975" [72f47af0-57ca-4529-b6e4-92f543f8ada9] Running
	I0704 00:14:58.924344   62327 system_pods.go:61] "kube-apiserver-embed-certs-687975" [e58beb1b-1984-4a9f-beb1-597cca55a0c0] Running
	I0704 00:14:58.924351   62327 system_pods.go:61] "kube-controller-manager-embed-certs-687975" [bdae6f32-b454-4a66-a3d1-a22c2a64073d] Running
	I0704 00:14:58.924355   62327 system_pods.go:61] "kube-proxy-9phtm" [6b5a4c0e-632d-4c1c-bfa7-f53448618efb] Running
	I0704 00:14:58.924360   62327 system_pods.go:61] "kube-scheduler-embed-certs-687975" [72640f73-25f5-47ec-8da0-396fc31fa653] Running
	I0704 00:14:58.924369   62327 system_pods.go:61] "metrics-server-569cc877fc-jpmsg" [e2561edc-d580-461c-acae-218e6b7a2f67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:14:58.924376   62327 system_pods.go:61] "storage-provisioner" [1ac6edec-3e4e-42bd-8848-1388594611e1] Running
	I0704 00:14:58.924384   62327 system_pods.go:74] duration metric: took 3.940490235s to wait for pod list to return data ...
	I0704 00:14:58.924392   62327 default_sa.go:34] waiting for default service account to be created ...
	I0704 00:14:58.926911   62327 default_sa.go:45] found service account: "default"
	I0704 00:14:58.926930   62327 default_sa.go:55] duration metric: took 2.52887ms for default service account to be created ...
	I0704 00:14:58.926938   62327 system_pods.go:116] waiting for k8s-apps to be running ...
	I0704 00:14:58.933142   62327 system_pods.go:86] 8 kube-system pods found
	I0704 00:14:58.933173   62327 system_pods.go:89] "coredns-7db6d8ff4d-2bn7d" [e6d756a8-df4e-414b-b44c-32fb728c6feb] Running
	I0704 00:14:58.933181   62327 system_pods.go:89] "etcd-embed-certs-687975" [72f47af0-57ca-4529-b6e4-92f543f8ada9] Running
	I0704 00:14:58.933188   62327 system_pods.go:89] "kube-apiserver-embed-certs-687975" [e58beb1b-1984-4a9f-beb1-597cca55a0c0] Running
	I0704 00:14:58.933200   62327 system_pods.go:89] "kube-controller-manager-embed-certs-687975" [bdae6f32-b454-4a66-a3d1-a22c2a64073d] Running
	I0704 00:14:58.933207   62327 system_pods.go:89] "kube-proxy-9phtm" [6b5a4c0e-632d-4c1c-bfa7-f53448618efb] Running
	I0704 00:14:58.933213   62327 system_pods.go:89] "kube-scheduler-embed-certs-687975" [72640f73-25f5-47ec-8da0-396fc31fa653] Running
	I0704 00:14:58.933225   62327 system_pods.go:89] "metrics-server-569cc877fc-jpmsg" [e2561edc-d580-461c-acae-218e6b7a2f67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:14:58.933234   62327 system_pods.go:89] "storage-provisioner" [1ac6edec-3e4e-42bd-8848-1388594611e1] Running
	I0704 00:14:58.933245   62327 system_pods.go:126] duration metric: took 6.300951ms to wait for k8s-apps to be running ...
	I0704 00:14:58.933257   62327 system_svc.go:44] waiting for kubelet service to be running ....
	I0704 00:14:58.933302   62327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:14:58.948861   62327 system_svc.go:56] duration metric: took 15.596446ms WaitForService to wait for kubelet
	I0704 00:14:58.948885   62327 kubeadm.go:576] duration metric: took 4m23.892830394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:14:58.948905   62327 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:14:58.951958   62327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:14:58.951981   62327 node_conditions.go:123] node cpu capacity is 2
	I0704 00:14:58.951991   62327 node_conditions.go:105] duration metric: took 3.081821ms to run NodePressure ...
	I0704 00:14:58.952003   62327 start.go:240] waiting for startup goroutines ...
	I0704 00:14:58.952012   62327 start.go:245] waiting for cluster config update ...
	I0704 00:14:58.952026   62327 start.go:254] writing updated cluster config ...
	I0704 00:14:58.952305   62327 ssh_runner.go:195] Run: rm -f paused
	I0704 00:14:59.001106   62327 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0704 00:14:59.003224   62327 out.go:177] * Done! kubectl is now configured to use "embed-certs-687975" cluster and "default" namespace by default
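For reference, the final readiness checks logged above roughly correspond to these kubectl commands against the same cluster (names taken from the log; assumes kubectl is already pointed at embed-certs-687975):

    # Rough kubectl equivalents of the readiness checks above.
    kubectl get pods -n kube-system -o wide                      # the eight kube-system pods
    kubectl get serviceaccount default                           # default service account exists
    kubectl describe nodes | grep -E 'cpu:|ephemeral-storage:'   # capacity figures behind the NodePressure check
    kubectl version                                              # client vs. server minor-version skew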
	I0704 00:14:57.163117   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:59.662680   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:57.746248   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:00.247122   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:02.161384   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:04.162095   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:02.745649   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:04.745980   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:07.245583   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:06.662618   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:08.665863   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:09.246591   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:11.745135   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:11.162596   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:13.163740   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:15.662576   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:13.745872   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:15.746141   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:18.161591   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:20.162965   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:18.245285   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:20.247546   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:22.662152   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:24.662781   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:22.745066   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:24.746068   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:25.247225   62905 pod_ready.go:81] duration metric: took 4m0.008398676s for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	E0704 00:15:25.247253   62905 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0704 00:15:25.247263   62905 pod_ready.go:38] duration metric: took 4m1.998567833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
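The wait gives up because the metrics-server pod never reports Ready. A quick manual follow-up, assuming kubectl points at the default-k8s-diff-port-995404 cluster and using the pod name from the log:

    # Inspect why metrics-server never becomes Ready; pod name taken from the log.
    kubectl -n kube-system get pods -o wide | grep metrics-server
    kubectl -n kube-system describe pod metrics-server-569cc877fc-v8qw2 | tail -n 25
    kubectl -n kube-system logs metrics-server-569cc877fc-v8qw2 --tail=50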
	I0704 00:15:25.247295   62905 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:15:25.247337   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:15:25.247393   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:15:25.305703   62905 cri.go:89] found id: "f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:25.305731   62905 cri.go:89] found id: ""
	I0704 00:15:25.305741   62905 logs.go:276] 1 containers: [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658]
	I0704 00:15:25.305811   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.311662   62905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:15:25.311740   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:15:25.359066   62905 cri.go:89] found id: "5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:25.359091   62905 cri.go:89] found id: ""
	I0704 00:15:25.359100   62905 logs.go:276] 1 containers: [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9]
	I0704 00:15:25.359157   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.364430   62905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:15:25.364512   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:15:25.411897   62905 cri.go:89] found id: "7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:25.411923   62905 cri.go:89] found id: ""
	I0704 00:15:25.411935   62905 logs.go:276] 1 containers: [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a]
	I0704 00:15:25.411991   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.416560   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:15:25.416629   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:15:25.457817   62905 cri.go:89] found id: "06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:25.457844   62905 cri.go:89] found id: ""
	I0704 00:15:25.457853   62905 logs.go:276] 1 containers: [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8]
	I0704 00:15:25.457904   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.462323   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:15:25.462392   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:15:25.502180   62905 cri.go:89] found id: "54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:25.502204   62905 cri.go:89] found id: ""
	I0704 00:15:25.502212   62905 logs.go:276] 1 containers: [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d]
	I0704 00:15:25.502256   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.506759   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:15:25.506817   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:15:25.546268   62905 cri.go:89] found id: "13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:25.546292   62905 cri.go:89] found id: ""
	I0704 00:15:25.546306   62905 logs.go:276] 1 containers: [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e]
	I0704 00:15:25.546365   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.550998   62905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:15:25.551076   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:15:25.588722   62905 cri.go:89] found id: ""
	I0704 00:15:25.588752   62905 logs.go:276] 0 containers: []
	W0704 00:15:25.588762   62905 logs.go:278] No container was found matching "kindnet"
	I0704 00:15:25.588771   62905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:15:25.588832   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:15:25.628294   62905 cri.go:89] found id: "916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:25.628328   62905 cri.go:89] found id: "ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:25.628333   62905 cri.go:89] found id: ""
	I0704 00:15:25.628339   62905 logs.go:276] 2 containers: [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2]
	I0704 00:15:25.628406   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.633517   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.639383   62905 logs.go:123] Gathering logs for kubelet ...
	I0704 00:15:25.639409   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:15:25.701468   62905 logs.go:123] Gathering logs for dmesg ...
	I0704 00:15:25.701507   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:15:25.717059   62905 logs.go:123] Gathering logs for coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] ...
	I0704 00:15:25.717089   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:25.757597   62905 logs.go:123] Gathering logs for kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] ...
	I0704 00:15:25.757624   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:25.798648   62905 logs.go:123] Gathering logs for storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] ...
	I0704 00:15:25.798679   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:25.843607   62905 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:15:25.843644   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:15:26.352356   62905 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:15:26.352403   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:15:26.510039   62905 logs.go:123] Gathering logs for kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] ...
	I0704 00:15:26.510073   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:26.563036   62905 logs.go:123] Gathering logs for etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] ...
	I0704 00:15:26.563102   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:26.606221   62905 logs.go:123] Gathering logs for kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] ...
	I0704 00:15:26.606251   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:26.650488   62905 logs.go:123] Gathering logs for kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] ...
	I0704 00:15:26.650531   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:26.704905   62905 logs.go:123] Gathering logs for storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] ...
	I0704 00:15:26.704937   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:26.743843   62905 logs.go:123] Gathering logs for container status ...
	I0704 00:15:26.743907   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:15:26.664421   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:29.160718   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:29.289651   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:15:29.313028   62905 api_server.go:72] duration metric: took 4m13.798223752s to wait for apiserver process to appear ...
	I0704 00:15:29.313062   62905 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:15:29.313101   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:15:29.313178   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:15:29.359867   62905 cri.go:89] found id: "f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:29.359900   62905 cri.go:89] found id: ""
	I0704 00:15:29.359910   62905 logs.go:276] 1 containers: [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658]
	I0704 00:15:29.359965   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.364602   62905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:15:29.364661   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:15:29.406662   62905 cri.go:89] found id: "5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:29.406690   62905 cri.go:89] found id: ""
	I0704 00:15:29.406697   62905 logs.go:276] 1 containers: [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9]
	I0704 00:15:29.406744   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.413217   62905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:15:29.413305   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:15:29.450066   62905 cri.go:89] found id: "7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:29.450093   62905 cri.go:89] found id: ""
	I0704 00:15:29.450102   62905 logs.go:276] 1 containers: [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a]
	I0704 00:15:29.450163   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.454966   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:15:29.455025   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:15:29.496445   62905 cri.go:89] found id: "06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:29.496465   62905 cri.go:89] found id: ""
	I0704 00:15:29.496471   62905 logs.go:276] 1 containers: [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8]
	I0704 00:15:29.496515   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.501125   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:15:29.501198   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:15:29.543841   62905 cri.go:89] found id: "54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:29.543864   62905 cri.go:89] found id: ""
	I0704 00:15:29.543884   62905 logs.go:276] 1 containers: [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d]
	I0704 00:15:29.543940   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.548613   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:15:29.548673   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:15:29.588709   62905 cri.go:89] found id: "13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:29.588729   62905 cri.go:89] found id: ""
	I0704 00:15:29.588735   62905 logs.go:276] 1 containers: [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e]
	I0704 00:15:29.588780   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.593039   62905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:15:29.593098   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:15:29.631751   62905 cri.go:89] found id: ""
	I0704 00:15:29.631775   62905 logs.go:276] 0 containers: []
	W0704 00:15:29.631782   62905 logs.go:278] No container was found matching "kindnet"
	I0704 00:15:29.631787   62905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:15:29.631841   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:15:29.674894   62905 cri.go:89] found id: "916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:29.674918   62905 cri.go:89] found id: "ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:29.674922   62905 cri.go:89] found id: ""
	I0704 00:15:29.674929   62905 logs.go:276] 2 containers: [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2]
	I0704 00:15:29.674983   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.679600   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.683770   62905 logs.go:123] Gathering logs for kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] ...
	I0704 00:15:29.683788   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:29.731148   62905 logs.go:123] Gathering logs for etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] ...
	I0704 00:15:29.731182   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:29.772172   62905 logs.go:123] Gathering logs for storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] ...
	I0704 00:15:29.772204   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:29.816299   62905 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:15:29.816332   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:15:30.222578   62905 logs.go:123] Gathering logs for kubelet ...
	I0704 00:15:30.222622   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:15:30.284120   62905 logs.go:123] Gathering logs for dmesg ...
	I0704 00:15:30.284169   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:15:30.300219   62905 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:15:30.300260   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:15:30.423779   62905 logs.go:123] Gathering logs for kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] ...
	I0704 00:15:30.423851   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:30.480952   62905 logs.go:123] Gathering logs for storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] ...
	I0704 00:15:30.480993   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:30.526318   62905 logs.go:123] Gathering logs for container status ...
	I0704 00:15:30.526352   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:15:30.574984   62905 logs.go:123] Gathering logs for coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] ...
	I0704 00:15:30.575012   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:30.618244   62905 logs.go:123] Gathering logs for kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] ...
	I0704 00:15:30.618275   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:30.657625   62905 logs.go:123] Gathering logs for kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] ...
	I0704 00:15:30.657649   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:33.184160   62670 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0704 00:15:33.184894   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:33.185105   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
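The kubelet-check failure above means the kubelet's own health endpoint on localhost:10248 is not answering yet. A few commands to reproduce the probe and see why, assuming shell access to the node (e.g. via minikube ssh):

    # Reproduce kubeadm's kubelet health probe and look at why it fails.
    sudo systemctl status kubelet --no-pager        # is the unit active at all?
    curl -sSL http://localhost:10248/healthz; echo  # the probe kubeadm performs
    sudo journalctl -u kubelet -n 100 --no-pager    # recent kubelet errors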
	I0704 00:15:31.162060   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:33.162393   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:35.164111   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:33.197007   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:15:33.201786   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 200:
	ok
	I0704 00:15:33.202719   62905 api_server.go:141] control plane version: v1.30.2
	I0704 00:15:33.202738   62905 api_server.go:131] duration metric: took 3.889668496s to wait for apiserver health ...
	I0704 00:15:33.202745   62905 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:15:33.202772   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:15:33.202825   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:15:33.246224   62905 cri.go:89] found id: "f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:33.246259   62905 cri.go:89] found id: ""
	I0704 00:15:33.246272   62905 logs.go:276] 1 containers: [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658]
	I0704 00:15:33.246343   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.256081   62905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:15:33.256160   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:15:33.296808   62905 cri.go:89] found id: "5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:33.296835   62905 cri.go:89] found id: ""
	I0704 00:15:33.296845   62905 logs.go:276] 1 containers: [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9]
	I0704 00:15:33.296902   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.301658   62905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:15:33.301729   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:15:33.353348   62905 cri.go:89] found id: "7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:33.353370   62905 cri.go:89] found id: ""
	I0704 00:15:33.353377   62905 logs.go:276] 1 containers: [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a]
	I0704 00:15:33.353428   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.358334   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:15:33.358413   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:15:33.402593   62905 cri.go:89] found id: "06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:33.402621   62905 cri.go:89] found id: ""
	I0704 00:15:33.402630   62905 logs.go:276] 1 containers: [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8]
	I0704 00:15:33.402696   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.407413   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:15:33.407482   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:15:33.461567   62905 cri.go:89] found id: "54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:33.461591   62905 cri.go:89] found id: ""
	I0704 00:15:33.461599   62905 logs.go:276] 1 containers: [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d]
	I0704 00:15:33.461663   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.467039   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:15:33.467115   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:15:33.510115   62905 cri.go:89] found id: "13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:33.510146   62905 cri.go:89] found id: ""
	I0704 00:15:33.510155   62905 logs.go:276] 1 containers: [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e]
	I0704 00:15:33.510215   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.515217   62905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:15:33.515281   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:15:33.554690   62905 cri.go:89] found id: ""
	I0704 00:15:33.554719   62905 logs.go:276] 0 containers: []
	W0704 00:15:33.554729   62905 logs.go:278] No container was found matching "kindnet"
	I0704 00:15:33.554737   62905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:15:33.554790   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:15:33.601911   62905 cri.go:89] found id: "916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:33.601937   62905 cri.go:89] found id: "ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:33.601944   62905 cri.go:89] found id: ""
	I0704 00:15:33.601952   62905 logs.go:276] 2 containers: [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2]
	I0704 00:15:33.602016   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.606884   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.611328   62905 logs.go:123] Gathering logs for etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] ...
	I0704 00:15:33.611356   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:33.657445   62905 logs.go:123] Gathering logs for coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] ...
	I0704 00:15:33.657484   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:33.698153   62905 logs.go:123] Gathering logs for kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] ...
	I0704 00:15:33.698185   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:33.740393   62905 logs.go:123] Gathering logs for kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] ...
	I0704 00:15:33.740425   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:33.781017   62905 logs.go:123] Gathering logs for kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] ...
	I0704 00:15:33.781048   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:33.844822   62905 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:15:33.844857   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:15:33.966652   62905 logs.go:123] Gathering logs for kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] ...
	I0704 00:15:33.966689   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:34.022085   62905 logs.go:123] Gathering logs for storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] ...
	I0704 00:15:34.022123   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:34.063492   62905 logs.go:123] Gathering logs for storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] ...
	I0704 00:15:34.063515   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:34.102349   62905 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:15:34.102379   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:15:34.472244   62905 logs.go:123] Gathering logs for container status ...
	I0704 00:15:34.472282   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:15:34.525394   62905 logs.go:123] Gathering logs for kubelet ...
	I0704 00:15:34.525427   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:15:34.581994   62905 logs.go:123] Gathering logs for dmesg ...
	I0704 00:15:34.582040   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
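The log-collection pass above pulls per-container logs with crictl and unit/kernel logs with journalctl and dmesg. A rough manual equivalent on the node, assuming CRI-O and the kubelet run under systemd as they do in this job, is:

    sudo crictl ps -a                          # list all containers, pick an ID
    sudo crictl logs --tail 400 <container-id> # placeholder ID, substitute one from the listing
    sudo journalctl -u crio -n 400             # container runtime log
    sudo journalctl -u kubelet -n 400          # kubelet log
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400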
	I0704 00:15:37.108663   62905 system_pods.go:59] 8 kube-system pods found
	I0704 00:15:37.108698   62905 system_pods.go:61] "coredns-7db6d8ff4d-jmq4s" [f9725f92-7635-4111-bf63-66dbef0155b2] Running
	I0704 00:15:37.108705   62905 system_pods.go:61] "etcd-default-k8s-diff-port-995404" [d5065c53-cda8-4c79-9d88-de10341356a8] Running
	I0704 00:15:37.108710   62905 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-995404" [81395c0d-d19c-4d3d-8935-c35a9507abdd] Running
	I0704 00:15:37.108716   62905 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-995404" [36828d21-843b-409c-a0b3-60293cb50c27] Running
	I0704 00:15:37.108723   62905 system_pods.go:61] "kube-proxy-pplqq" [3b74a8c2-1e91-449d-9be9-8891459dccbc] Running
	I0704 00:15:37.108728   62905 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-995404" [e87cc576-7a8d-43dd-9778-b13d751976be] Running
	I0704 00:15:37.108734   62905 system_pods.go:61] "metrics-server-569cc877fc-v8qw2" [d6a67fb7-5004-4c93-9023-fc470f786ae9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:15:37.108739   62905 system_pods.go:61] "storage-provisioner" [3adc3ff6-282f-4f53-879f-c73d71c76b74] Running
	I0704 00:15:37.108746   62905 system_pods.go:74] duration metric: took 3.905995932s to wait for pod list to return data ...
	I0704 00:15:37.108756   62905 default_sa.go:34] waiting for default service account to be created ...
	I0704 00:15:37.112853   62905 default_sa.go:45] found service account: "default"
	I0704 00:15:37.112885   62905 default_sa.go:55] duration metric: took 4.115587ms for default service account to be created ...
	I0704 00:15:37.112897   62905 system_pods.go:116] waiting for k8s-apps to be running ...
	I0704 00:15:37.119709   62905 system_pods.go:86] 8 kube-system pods found
	I0704 00:15:37.119743   62905 system_pods.go:89] "coredns-7db6d8ff4d-jmq4s" [f9725f92-7635-4111-bf63-66dbef0155b2] Running
	I0704 00:15:37.119749   62905 system_pods.go:89] "etcd-default-k8s-diff-port-995404" [d5065c53-cda8-4c79-9d88-de10341356a8] Running
	I0704 00:15:37.119754   62905 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-995404" [81395c0d-d19c-4d3d-8935-c35a9507abdd] Running
	I0704 00:15:37.119759   62905 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-995404" [36828d21-843b-409c-a0b3-60293cb50c27] Running
	I0704 00:15:37.119765   62905 system_pods.go:89] "kube-proxy-pplqq" [3b74a8c2-1e91-449d-9be9-8891459dccbc] Running
	I0704 00:15:37.119769   62905 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-995404" [e87cc576-7a8d-43dd-9778-b13d751976be] Running
	I0704 00:15:37.119776   62905 system_pods.go:89] "metrics-server-569cc877fc-v8qw2" [d6a67fb7-5004-4c93-9023-fc470f786ae9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:15:37.119782   62905 system_pods.go:89] "storage-provisioner" [3adc3ff6-282f-4f53-879f-c73d71c76b74] Running
	I0704 00:15:37.119791   62905 system_pods.go:126] duration metric: took 6.888276ms to wait for k8s-apps to be running ...
	I0704 00:15:37.119798   62905 system_svc.go:44] waiting for kubelet service to be running ....
	I0704 00:15:37.119855   62905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:15:37.138387   62905 system_svc.go:56] duration metric: took 18.578212ms WaitForService to wait for kubelet
	I0704 00:15:37.138430   62905 kubeadm.go:576] duration metric: took 4m21.623631424s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:15:37.138450   62905 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:15:37.141610   62905 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:15:37.141632   62905 node_conditions.go:123] node cpu capacity is 2
	I0704 00:15:37.141642   62905 node_conditions.go:105] duration metric: took 3.187777ms to run NodePressure ...
	I0704 00:15:37.141654   62905 start.go:240] waiting for startup goroutines ...
	I0704 00:15:37.141662   62905 start.go:245] waiting for cluster config update ...
	I0704 00:15:37.141675   62905 start.go:254] writing updated cluster config ...
	I0704 00:15:37.141954   62905 ssh_runner.go:195] Run: rm -f paused
	I0704 00:15:37.193685   62905 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0704 00:15:37.196118   62905 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-995404" cluster and "default" namespace by default
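Before the "Done!" line, this profile's start-up verified the kube-system pod list, the default service account, the kubelet unit, and node capacity/pressure. Roughly the same checks can be repeated by hand (a sketch only, using the profile name from the log above):

    kubectl --context default-k8s-diff-port-995404 -n kube-system get pods
    kubectl --context default-k8s-diff-port-995404 get serviceaccount default
    kubectl --context default-k8s-diff-port-995404 describe nodes | grep -A5 Capacity
    # on the node itself
    sudo systemctl is-active kubelet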
	I0704 00:15:38.185821   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:38.186070   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:15:37.662971   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:40.161724   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:42.162761   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:44.661578   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:48.186610   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:48.186866   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:15:46.661793   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:48.662395   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:51.161671   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:53.161831   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:55.162342   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:57.162917   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:58.655566   62043 pod_ready.go:81] duration metric: took 4m0.000513164s for pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace to be "Ready" ...
	E0704 00:15:58.655607   62043 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0704 00:15:58.655629   62043 pod_ready.go:38] duration metric: took 4m12.325655973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:15:58.655653   62043 kubeadm.go:591] duration metric: took 4m19.340193897s to restartPrimaryControlPlane
	W0704 00:15:58.655707   62043 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0704 00:15:58.655731   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:16:08.187652   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:16:08.187954   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:16:30.729510   62043 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.073753748s)
	I0704 00:16:30.729594   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:16:30.747332   62043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:16:30.758903   62043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:16:30.769754   62043 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:16:30.769782   62043 kubeadm.go:156] found existing configuration files:
	
	I0704 00:16:30.769834   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:16:30.783216   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:16:30.783292   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:16:30.794254   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:16:30.804395   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:16:30.804456   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:16:30.816148   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:16:30.826591   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:16:30.826658   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:16:30.837473   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:16:30.847334   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:16:30.847423   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
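The stale-config pass above (kubeadm.go:154-162) greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the file when the endpoint is missing, which after the reset simply clears files that no longer exist. Condensed into a loop, using the same endpoint and file names as in this run:

    endpoint=https://control-plane.minikube.internal:8443
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" /etc/kubernetes/$f || sudo rm -f /etc/kubernetes/$f
    done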
	I0704 00:16:30.859291   62043 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:16:31.068598   62043 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:16:39.927189   62043 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0704 00:16:39.927297   62043 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:16:39.927381   62043 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:16:39.927496   62043 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:16:39.927641   62043 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:16:39.927747   62043 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:16:39.929258   62043 out.go:204]   - Generating certificates and keys ...
	I0704 00:16:39.929332   62043 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:16:39.929422   62043 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:16:39.929546   62043 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:16:39.929631   62043 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:16:39.929715   62043 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:16:39.929781   62043 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:16:39.929883   62043 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:16:39.929983   62043 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:16:39.930088   62043 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:16:39.930191   62043 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:16:39.930258   62043 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:16:39.930346   62043 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:16:39.930428   62043 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:16:39.930521   62043 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0704 00:16:39.930604   62043 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:16:39.930691   62043 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:16:39.930784   62043 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:16:39.930889   62043 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:16:39.930980   62043 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:16:39.933368   62043 out.go:204]   - Booting up control plane ...
	I0704 00:16:39.933482   62043 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:16:39.933577   62043 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:16:39.933657   62043 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:16:39.933769   62043 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:16:39.933857   62043 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:16:39.933920   62043 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:16:39.934046   62043 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0704 00:16:39.934156   62043 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0704 00:16:39.934219   62043 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.004952327s
	I0704 00:16:39.934310   62043 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0704 00:16:39.934393   62043 kubeadm.go:309] [api-check] The API server is healthy after 5.002935516s
	I0704 00:16:39.934509   62043 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0704 00:16:39.934646   62043 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0704 00:16:39.934725   62043 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0704 00:16:39.934894   62043 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-317739 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0704 00:16:39.934979   62043 kubeadm.go:309] [bootstrap-token] Using token: 6e60zb.ppocm8st59m5ngyp
	I0704 00:16:39.936353   62043 out.go:204]   - Configuring RBAC rules ...
	I0704 00:16:39.936457   62043 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0704 00:16:39.936546   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0704 00:16:39.936726   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0704 00:16:39.936866   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0704 00:16:39.936999   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0704 00:16:39.937127   62043 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0704 00:16:39.937268   62043 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0704 00:16:39.937339   62043 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0704 00:16:39.937398   62043 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0704 00:16:39.937407   62043 kubeadm.go:309] 
	I0704 00:16:39.937486   62043 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0704 00:16:39.937500   62043 kubeadm.go:309] 
	I0704 00:16:39.937589   62043 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0704 00:16:39.937598   62043 kubeadm.go:309] 
	I0704 00:16:39.937628   62043 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0704 00:16:39.937704   62043 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0704 00:16:39.937770   62043 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0704 00:16:39.937779   62043 kubeadm.go:309] 
	I0704 00:16:39.937870   62043 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0704 00:16:39.937884   62043 kubeadm.go:309] 
	I0704 00:16:39.937953   62043 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0704 00:16:39.937966   62043 kubeadm.go:309] 
	I0704 00:16:39.938045   62043 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0704 00:16:39.938151   62043 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0704 00:16:39.938248   62043 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0704 00:16:39.938257   62043 kubeadm.go:309] 
	I0704 00:16:39.938373   62043 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0704 00:16:39.938469   62043 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0704 00:16:39.938483   62043 kubeadm.go:309] 
	I0704 00:16:39.938602   62043 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6e60zb.ppocm8st59m5ngyp \
	I0704 00:16:39.938721   62043 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 \
	I0704 00:16:39.938740   62043 kubeadm.go:309] 	--control-plane 
	I0704 00:16:39.938746   62043 kubeadm.go:309] 
	I0704 00:16:39.938820   62043 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0704 00:16:39.938829   62043 kubeadm.go:309] 
	I0704 00:16:39.938898   62043 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6e60zb.ppocm8st59m5ngyp \
	I0704 00:16:39.939042   62043 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 
	I0704 00:16:39.939066   62043 cni.go:84] Creating CNI manager for ""
	I0704 00:16:39.939074   62043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:16:39.940769   62043 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:16:39.941987   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:16:39.956586   62043 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
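With the kvm2 driver and the crio runtime, the run falls back to the built-in bridge CNI: it creates /etc/cni/net.d and writes a 496-byte 1-k8s.conflist there. Whether that config landed and a bridge interface came up can be checked on the node, for example with:

    ls -l /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist
    ip link show type bridge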
	I0704 00:16:39.980480   62043 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:16:39.980534   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:39.980553   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-317739 minikube.k8s.io/updated_at=2024_07_04T00_16_39_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e minikube.k8s.io/name=no-preload-317739 minikube.k8s.io/primary=true
	I0704 00:16:40.010512   62043 ops.go:34] apiserver oom_adj: -16
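The post-init bootstrap above reads the apiserver's oom_adj, grants cluster-admin to the kube-system default service account via the minikube-rbac clusterrolebinding, and labels the new control-plane node. Those objects can be inspected afterwards with, for instance:

    kubectl get clusterrolebinding minikube-rbac -o wide
    kubectl get node no-preload-317739 --show-labels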
	I0704 00:16:40.194381   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:40.695349   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:41.195310   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:41.695082   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:42.194751   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:42.694568   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:43.195382   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:43.695072   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:44.195353   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:44.695020   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:45.195396   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:45.695273   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:48.189618   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:16:48.189879   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:16:48.189893   62670 kubeadm.go:309] 
	I0704 00:16:48.189956   62670 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0704 00:16:48.190000   62670 kubeadm.go:309] 		timed out waiting for the condition
	I0704 00:16:48.190006   62670 kubeadm.go:309] 
	I0704 00:16:48.190074   62670 kubeadm.go:309] 	This error is likely caused by:
	I0704 00:16:48.190142   62670 kubeadm.go:309] 		- The kubelet is not running
	I0704 00:16:48.190322   62670 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0704 00:16:48.190356   62670 kubeadm.go:309] 
	I0704 00:16:48.190487   62670 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0704 00:16:48.190540   62670 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0704 00:16:48.190594   62670 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0704 00:16:48.190603   62670 kubeadm.go:309] 
	I0704 00:16:48.190729   62670 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0704 00:16:48.190826   62670 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0704 00:16:48.190837   62670 kubeadm.go:309] 
	I0704 00:16:48.190930   62670 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0704 00:16:48.191004   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0704 00:16:48.191088   62670 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0704 00:16:48.191183   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0704 00:16:48.191195   62670 kubeadm.go:309] 
	I0704 00:16:48.192106   62670 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:16:48.192225   62670 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0704 00:16:48.192330   62670 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0704 00:16:48.192450   62670 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0704 00:16:48.192496   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
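After the first kubeadm init attempt on v1.20.0 times out waiting for a healthy kubelet, the run resets the node and retries. The troubleshooting commands suggested in the error output above can also be run directly on the node, e.g.:

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <container-id>   # placeholder ID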
	I0704 00:16:48.668935   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:16:48.685425   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:16:48.697089   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:16:48.697111   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:16:48.697182   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:16:48.708605   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:16:48.708681   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:16:48.720739   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:16:48.733032   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:16:48.733106   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:16:48.745632   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:16:48.756211   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:16:48.756285   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:16:48.768006   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:16:48.779384   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:16:48.779455   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:16:48.791913   62670 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:16:48.873701   62670 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0704 00:16:48.873789   62670 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:16:49.029961   62670 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:16:49.030077   62670 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:16:49.030191   62670 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:16:49.228954   62670 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:16:49.231477   62670 out.go:204]   - Generating certificates and keys ...
	I0704 00:16:49.231594   62670 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:16:49.231678   62670 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:16:49.231783   62670 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:16:49.231855   62670 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:16:49.231990   62670 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:16:49.232082   62670 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:16:49.232167   62670 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:16:49.232930   62670 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:16:49.234476   62670 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:16:49.235558   62670 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:16:49.235951   62670 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:16:49.236048   62670 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:16:49.418256   62670 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:16:49.476591   62670 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:16:49.586596   62670 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:16:49.856731   62670 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:16:49.878852   62670 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:16:49.885877   62670 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:16:49.885948   62670 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:16:50.048252   62670 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:16:46.194714   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:46.695192   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:47.195476   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:47.694768   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:48.194497   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:48.695370   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:49.194707   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:49.695417   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:50.194404   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:50.694941   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:50.050273   62670 out.go:204]   - Booting up control plane ...
	I0704 00:16:50.050428   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:16:50.055514   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:16:50.056609   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:16:50.057448   62670 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:16:50.060021   62670 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0704 00:16:51.194471   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:51.695481   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:52.194406   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:52.695193   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:53.194613   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:53.695053   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:53.812778   62043 kubeadm.go:1107] duration metric: took 13.832294794s to wait for elevateKubeSystemPrivileges
	W0704 00:16:53.812817   62043 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0704 00:16:53.812828   62043 kubeadm.go:393] duration metric: took 5m14.556024253s to StartCluster
	I0704 00:16:53.812849   62043 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:16:53.812944   62043 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:16:53.815420   62043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:16:53.815750   62043 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.109 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:16:53.815862   62043 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:16:53.815956   62043 addons.go:69] Setting storage-provisioner=true in profile "no-preload-317739"
	I0704 00:16:53.815987   62043 addons.go:234] Setting addon storage-provisioner=true in "no-preload-317739"
	I0704 00:16:53.815990   62043 config.go:182] Loaded profile config "no-preload-317739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	W0704 00:16:53.815998   62043 addons.go:243] addon storage-provisioner should already be in state true
	I0704 00:16:53.816029   62043 host.go:66] Checking if "no-preload-317739" exists ...
	I0704 00:16:53.816023   62043 addons.go:69] Setting default-storageclass=true in profile "no-preload-317739"
	I0704 00:16:53.816052   62043 addons.go:69] Setting metrics-server=true in profile "no-preload-317739"
	I0704 00:16:53.816063   62043 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-317739"
	I0704 00:16:53.816091   62043 addons.go:234] Setting addon metrics-server=true in "no-preload-317739"
	W0704 00:16:53.816104   62043 addons.go:243] addon metrics-server should already be in state true
	I0704 00:16:53.816139   62043 host.go:66] Checking if "no-preload-317739" exists ...
	I0704 00:16:53.816491   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.816512   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.816491   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.816561   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.816590   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.816605   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.817558   62043 out.go:177] * Verifying Kubernetes components...
	I0704 00:16:53.818908   62043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:16:53.836028   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I0704 00:16:53.836591   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.837131   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.837162   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.837199   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37289
	I0704 00:16:53.837270   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39901
	I0704 00:16:53.837613   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.837621   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.837980   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.838004   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.838066   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.838265   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.838302   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.838330   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.838533   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.838555   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.838612   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.838911   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.839349   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.839374   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.842221   62043 addons.go:234] Setting addon default-storageclass=true in "no-preload-317739"
	W0704 00:16:53.842240   62043 addons.go:243] addon default-storageclass should already be in state true
	I0704 00:16:53.842267   62043 host.go:66] Checking if "no-preload-317739" exists ...
	I0704 00:16:53.842587   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.842606   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.854293   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36813
	I0704 00:16:53.855044   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.855658   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.855675   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.856226   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.856425   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.858286   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42143
	I0704 00:16:53.858484   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:16:53.858667   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.859270   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.859293   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.859815   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.860358   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.860380   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.860383   62043 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0704 00:16:53.861890   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0704 00:16:53.861914   62043 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0704 00:16:53.861937   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:16:53.864121   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36239
	I0704 00:16:53.864570   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.865343   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.865366   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.865859   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.866064   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.866282   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.866379   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:16:53.866407   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.866572   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:16:53.866780   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:16:53.866996   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:16:53.867166   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:16:53.868067   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:16:53.869898   62043 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:16:53.871321   62043 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:16:53.871339   62043 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:16:53.871355   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:16:53.874930   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.875361   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:16:53.875392   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.875623   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:16:53.875841   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:16:53.876024   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:16:53.876184   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:16:53.880965   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45769
	I0704 00:16:53.881655   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.882115   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.882130   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.882471   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.882659   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.884596   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:16:53.884855   62043 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:16:53.884866   62043 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:16:53.884879   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:16:53.887764   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.888336   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:16:53.888371   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.888411   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:16:53.888619   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:16:53.888749   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:16:53.888849   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:16:54.097387   62043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:16:54.122578   62043 node_ready.go:35] waiting up to 6m0s for node "no-preload-317739" to be "Ready" ...
	I0704 00:16:54.136010   62043 node_ready.go:49] node "no-preload-317739" has status "Ready":"True"
	I0704 00:16:54.136036   62043 node_ready.go:38] duration metric: took 13.422954ms for node "no-preload-317739" to be "Ready" ...
	I0704 00:16:54.136048   62043 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:16:54.141532   62043 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:54.200381   62043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:16:54.234350   62043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:16:54.284641   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0704 00:16:54.284664   62043 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0704 00:16:54.346056   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0704 00:16:54.346081   62043 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0704 00:16:54.424564   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:16:54.424593   62043 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0704 00:16:54.496088   62043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:16:54.977271   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977304   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977308   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977327   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977603   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:54.977640   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977640   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977647   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:54.977654   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:54.977657   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977663   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977665   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977710   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:54.977756   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977935   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977947   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:54.977959   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:54.977991   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977999   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.037104   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:55.037130   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:55.037591   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:55.037626   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:55.037639   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.331464   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:55.331492   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:55.331859   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:55.331895   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.331903   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:55.331911   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:55.331926   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:55.332178   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:55.332245   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:55.332262   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.332280   62043 addons.go:475] Verifying addon metrics-server=true in "no-preload-317739"
	I0704 00:16:55.334057   62043 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0704 00:16:55.335756   62043 addons.go:510] duration metric: took 1.519891021s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
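	For reference, the addon flow logged above copies each manifest to /etc/kubernetes/addons/ over SSH and then applies them with the in-VM kubectl binary. A minimal manual check of the resulting metrics-server addon, assuming kubectl is pointed at this cluster (illustrative commands, not part of the test run; the APIService name assumes the stock metrics-server manifest):
	
		kubectl -n kube-system get deployment metrics-server
		kubectl get apiservice v1beta1.metrics.k8s.io
	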
	I0704 00:16:56.152756   62043 pod_ready.go:102] pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace has status "Ready":"False"
	I0704 00:16:56.650840   62043 pod_ready.go:92] pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.650866   62043 pod_ready.go:81] duration metric: took 2.509305019s for pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.650876   62043 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qnrtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.656253   62043 pod_ready.go:92] pod "coredns-7db6d8ff4d-qnrtm" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.656276   62043 pod_ready.go:81] duration metric: took 5.391742ms for pod "coredns-7db6d8ff4d-qnrtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.656285   62043 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.661076   62043 pod_ready.go:92] pod "etcd-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.661097   62043 pod_ready.go:81] duration metric: took 4.806155ms for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.661105   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.666895   62043 pod_ready.go:92] pod "kube-apiserver-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.666923   62043 pod_ready.go:81] duration metric: took 5.809974ms for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.666936   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.671252   62043 pod_ready.go:92] pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.671277   62043 pod_ready.go:81] duration metric: took 4.332286ms for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.671289   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xxfrd" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.046037   62043 pod_ready.go:92] pod "kube-proxy-xxfrd" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:57.046062   62043 pod_ready.go:81] duration metric: took 374.766496ms for pod "kube-proxy-xxfrd" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.046072   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.446038   62043 pod_ready.go:92] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:57.446063   62043 pod_ready.go:81] duration metric: took 399.983632ms for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.446071   62043 pod_ready.go:38] duration metric: took 3.310013568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:16:57.446085   62043 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:16:57.446131   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:16:57.461033   62043 api_server.go:72] duration metric: took 3.645241569s to wait for apiserver process to appear ...
	I0704 00:16:57.461057   62043 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:16:57.461075   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:16:57.465509   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 200:
	ok
	I0704 00:16:57.466733   62043 api_server.go:141] control plane version: v1.30.2
	I0704 00:16:57.466755   62043 api_server.go:131] duration metric: took 5.690997ms to wait for apiserver health ...
	I0704 00:16:57.466764   62043 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:16:57.651488   62043 system_pods.go:59] 9 kube-system pods found
	I0704 00:16:57.651514   62043 system_pods.go:61] "coredns-7db6d8ff4d-cxq59" [a7d3a64b-8d7d-455c-990c-0e496f8cf461] Running
	I0704 00:16:57.651519   62043 system_pods.go:61] "coredns-7db6d8ff4d-qnrtm" [3ab51a52-571c-4533-86d2-7293368ac2ee] Running
	I0704 00:16:57.651522   62043 system_pods.go:61] "etcd-no-preload-317739" [d1cdc7d3-b6b9-4fbf-a9a4-94309df12aec] Running
	I0704 00:16:57.651525   62043 system_pods.go:61] "kube-apiserver-no-preload-317739" [6caeae1f-258b-4d8f-8922-73eb94eb92cb] Running
	I0704 00:16:57.651528   62043 system_pods.go:61] "kube-controller-manager-no-preload-317739" [28b1a875-8c1d-41a3-8b0b-6e1b7a584a03] Running
	I0704 00:16:57.651531   62043 system_pods.go:61] "kube-proxy-xxfrd" [29b1b3ed-9c18-4fae-bf43-5da22cf90f6b] Running
	I0704 00:16:57.651533   62043 system_pods.go:61] "kube-scheduler-no-preload-317739" [bdb0442c-9e42-4e09-93cc-0e8dc067eaff] Running
	I0704 00:16:57.651541   62043 system_pods.go:61] "metrics-server-569cc877fc-t28ff" [942f97bf-57cf-46fe-9a10-4a4171357239] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:16:57.651549   62043 system_pods.go:61] "storage-provisioner" [d2ab9324-5df0-4232-aef4-be29bfc4c082] Running
	I0704 00:16:57.651559   62043 system_pods.go:74] duration metric: took 184.788668ms to wait for pod list to return data ...
	I0704 00:16:57.651573   62043 default_sa.go:34] waiting for default service account to be created ...
	I0704 00:16:57.845632   62043 default_sa.go:45] found service account: "default"
	I0704 00:16:57.845665   62043 default_sa.go:55] duration metric: took 194.081328ms for default service account to be created ...
	I0704 00:16:57.845678   62043 system_pods.go:116] waiting for k8s-apps to be running ...
	I0704 00:16:58.050844   62043 system_pods.go:86] 9 kube-system pods found
	I0704 00:16:58.050873   62043 system_pods.go:89] "coredns-7db6d8ff4d-cxq59" [a7d3a64b-8d7d-455c-990c-0e496f8cf461] Running
	I0704 00:16:58.050878   62043 system_pods.go:89] "coredns-7db6d8ff4d-qnrtm" [3ab51a52-571c-4533-86d2-7293368ac2ee] Running
	I0704 00:16:58.050882   62043 system_pods.go:89] "etcd-no-preload-317739" [d1cdc7d3-b6b9-4fbf-a9a4-94309df12aec] Running
	I0704 00:16:58.050887   62043 system_pods.go:89] "kube-apiserver-no-preload-317739" [6caeae1f-258b-4d8f-8922-73eb94eb92cb] Running
	I0704 00:16:58.050891   62043 system_pods.go:89] "kube-controller-manager-no-preload-317739" [28b1a875-8c1d-41a3-8b0b-6e1b7a584a03] Running
	I0704 00:16:58.050896   62043 system_pods.go:89] "kube-proxy-xxfrd" [29b1b3ed-9c18-4fae-bf43-5da22cf90f6b] Running
	I0704 00:16:58.050900   62043 system_pods.go:89] "kube-scheduler-no-preload-317739" [bdb0442c-9e42-4e09-93cc-0e8dc067eaff] Running
	I0704 00:16:58.050906   62043 system_pods.go:89] "metrics-server-569cc877fc-t28ff" [942f97bf-57cf-46fe-9a10-4a4171357239] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:16:58.050911   62043 system_pods.go:89] "storage-provisioner" [d2ab9324-5df0-4232-aef4-be29bfc4c082] Running
	I0704 00:16:58.050918   62043 system_pods.go:126] duration metric: took 205.235998ms to wait for k8s-apps to be running ...
	I0704 00:16:58.050925   62043 system_svc.go:44] waiting for kubelet service to be running ....
	I0704 00:16:58.050969   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:16:58.066005   62043 system_svc.go:56] duration metric: took 15.072089ms WaitForService to wait for kubelet
	I0704 00:16:58.066036   62043 kubeadm.go:576] duration metric: took 4.250246725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:16:58.066060   62043 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:16:58.245974   62043 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:16:58.245998   62043 node_conditions.go:123] node cpu capacity is 2
	I0704 00:16:58.246009   62043 node_conditions.go:105] duration metric: took 179.943846ms to run NodePressure ...
	I0704 00:16:58.246020   62043 start.go:240] waiting for startup goroutines ...
	I0704 00:16:58.246026   62043 start.go:245] waiting for cluster config update ...
	I0704 00:16:58.246036   62043 start.go:254] writing updated cluster config ...
	I0704 00:16:58.246307   62043 ssh_runner.go:195] Run: rm -f paused
	I0704 00:16:58.298998   62043 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0704 00:16:58.301199   62043 out.go:177] * Done! kubectl is now configured to use "no-preload-317739" cluster and "default" namespace by default
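	The readiness loop above finishes with a direct probe of the apiserver healthz endpoint at 192.168.61.109:8443. The same checks can be reproduced from the host, for example (illustrative, reusing the endpoint and context name reported in the log; not part of the test run):
	
		curl -k https://192.168.61.109:8443/healthz
		kubectl --context no-preload-317739 -n kube-system get pods
	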
	I0704 00:17:30.062515   62670 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0704 00:17:30.062908   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:30.063105   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:17:35.063408   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:35.063668   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:17:45.064118   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:45.064391   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:05.065047   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:18:05.065263   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:45.064458   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:18:45.064676   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:45.064703   62670 kubeadm.go:309] 
	I0704 00:18:45.064756   62670 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0704 00:18:45.064825   62670 kubeadm.go:309] 		timed out waiting for the condition
	I0704 00:18:45.064842   62670 kubeadm.go:309] 
	I0704 00:18:45.064918   62670 kubeadm.go:309] 	This error is likely caused by:
	I0704 00:18:45.064954   62670 kubeadm.go:309] 		- The kubelet is not running
	I0704 00:18:45.065086   62670 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0704 00:18:45.065110   62670 kubeadm.go:309] 
	I0704 00:18:45.065271   62670 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0704 00:18:45.065326   62670 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0704 00:18:45.065392   62670 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0704 00:18:45.065401   62670 kubeadm.go:309] 
	I0704 00:18:45.065550   62670 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0704 00:18:45.065631   62670 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0704 00:18:45.065638   62670 kubeadm.go:309] 
	I0704 00:18:45.065734   62670 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0704 00:18:45.065807   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0704 00:18:45.065871   62670 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0704 00:18:45.065939   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0704 00:18:45.065947   62670 kubeadm.go:309] 
	I0704 00:18:45.066520   62670 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:18:45.066601   62670 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0704 00:18:45.066689   62670 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0704 00:18:45.066780   62670 kubeadm.go:393] duration metric: took 7m58.506286251s to StartCluster
	I0704 00:18:45.066839   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:18:45.066927   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:18:45.120297   62670 cri.go:89] found id: ""
	I0704 00:18:45.120326   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.120334   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:18:45.120339   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:18:45.120402   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:18:45.158038   62670 cri.go:89] found id: ""
	I0704 00:18:45.158064   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.158074   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:18:45.158082   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:18:45.158138   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:18:45.195937   62670 cri.go:89] found id: ""
	I0704 00:18:45.195967   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.195978   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:18:45.195985   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:18:45.196043   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:18:45.236822   62670 cri.go:89] found id: ""
	I0704 00:18:45.236842   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.236850   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:18:45.236856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:18:45.236901   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:18:45.277811   62670 cri.go:89] found id: ""
	I0704 00:18:45.277840   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.277848   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:18:45.277854   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:18:45.277915   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:18:45.318942   62670 cri.go:89] found id: ""
	I0704 00:18:45.318972   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.318984   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:18:45.318994   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:18:45.319058   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:18:45.360745   62670 cri.go:89] found id: ""
	I0704 00:18:45.360772   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.360780   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:18:45.360785   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:18:45.360843   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:18:45.405336   62670 cri.go:89] found id: ""
	I0704 00:18:45.405359   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.405370   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:18:45.405381   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:18:45.405400   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:18:45.514196   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:18:45.514237   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:18:45.560207   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:18:45.560235   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:18:45.615066   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:18:45.615113   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:18:45.630701   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:18:45.630731   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:18:45.725249   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0704 00:18:45.725281   62670 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0704 00:18:45.725315   62670 out.go:239] * 
	W0704 00:18:45.725360   62670 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0704 00:18:45.725383   62670 out.go:239] * 
	W0704 00:18:45.726603   62670 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0704 00:18:45.729981   62670 out.go:177] 
	W0704 00:18:45.731124   62670 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0704 00:18:45.731169   62670 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0704 00:18:45.731186   62670 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0704 00:18:45.732514   62670 out.go:177] 
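	The failure above indicates the kubelet on the node never became healthy. The troubleshooting commands referenced in that output, together with the retry flag suggested by minikube, collected here for convenience (the systemctl, journalctl, and crictl commands are meant to be run inside the affected VM, e.g. via minikube ssh):
	
		systemctl status kubelet
		journalctl -xeu kubelet
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
		minikube start --extra-config=kubelet.cgroup-driver=systemd   # retry with the suggested flag, plus the run's original profile/driver flags
	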
	
	
	==> CRI-O <==
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.174721938Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052641174582940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0baeb6ef-bea6-41bc-bd71-7c5e488c47c4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.175650207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84622126-4d89-4069-9bab-38868f07d5d1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.175757441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84622126-4d89-4069-9bab-38868f07d5d1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.175958822Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f,PodSandboxId:b368293f3bee8b7fb63c1e3197aefe68e1256dc15972686a15d1323e45192c99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720051863005696629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac6edec-3e4e-42bd-8848-1388594611e1,},Annotations:map[string]string{io.kubernetes.container.hash: 22c45ef9,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56bf46a96bbb2f1a3de7cf20da1dae9b715251d6673109d1a9f0f11ae81cc5f6,PodSandboxId:06e56aaafbfc94d4741d164602e4acb4b5961cc626748db0be099fab64defd3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720051843131205792,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c43f2a46-7ed3-4b53-9f7e-dcfcce7e17e8,},Annotations:map[string]string{io.kubernetes.container.hash: 8cfec9ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3,PodSandboxId:e7874c8a943a56be2e2374a1bbac2572afacbb4045e481348c4633e8b99a7f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051840236005665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2bn7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d756a8-df4e-414b-b44c-32fb728c6feb,},Annotations:map[string]string{io.kubernetes.container.hash: 70139600,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085,PodSandboxId:b368293f3bee8b7fb63c1e3197aefe68e1256dc15972686a15d1323e45192c99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720051832233764752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1ac6edec-3e4e-42bd-8848-1388594611e1,},Annotations:map[string]string{io.kubernetes.container.hash: 22c45ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78,PodSandboxId:739523f0f3056b9e830be25f5605bf5218ec42cd2963f90c0be45944ada73a66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051832197738948,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9phtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5a4c0e-632d-4c1c-bfa7-f53448618
efb,},Annotations:map[string]string{io.kubernetes.container.hash: e2f0725b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864,PodSandboxId:684b1cb3c06c71b58cd54ec27ef617a80358de57835e3e5e0339a9ca11c2027b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051828531927058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed9b1ee5323bfe1840da003810cc9d9c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6edda731,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d,PodSandboxId:3b5b194229c2c7fe1e495c6f82e1f6272ea3de03cdae2c86e18118d9b39e39c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051828464031375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5de83de9c9b65f4bd1f185efdb900cd8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: d4747eae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0,PodSandboxId:a9c52f9deb0ad607e612f6e5742a572b8842b510864fae5f6e54e66298537791,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051828440157316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4dc13dfdc5f0b7be029f80782c2101d,},Annotations:map[string]string{io.kubernetes.container.hash:
838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d,PodSandboxId:1473c2739c681ceb03807ba56c9a6b5be4cdce1658019601cb3598841e6b67a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051828450158559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6a5003154c341c16886b3b155673039,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84622126-4d89-4069-9bab-38868f07d5d1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.217673143Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54721155-3571-45e8-b1bc-2831885cfdf1 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.217751460Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54721155-3571-45e8-b1bc-2831885cfdf1 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.219246859Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=740845d1-544c-4e88-abc6-7d5be8e9ed91 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.219714254Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052641219688465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=740845d1-544c-4e88-abc6-7d5be8e9ed91 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.220348767Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16da4f0a-256e-4ded-900e-6ff68ab1b48f name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.220419820Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16da4f0a-256e-4ded-900e-6ff68ab1b48f name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.220679595Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f,PodSandboxId:b368293f3bee8b7fb63c1e3197aefe68e1256dc15972686a15d1323e45192c99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720051863005696629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac6edec-3e4e-42bd-8848-1388594611e1,},Annotations:map[string]string{io.kubernetes.container.hash: 22c45ef9,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56bf46a96bbb2f1a3de7cf20da1dae9b715251d6673109d1a9f0f11ae81cc5f6,PodSandboxId:06e56aaafbfc94d4741d164602e4acb4b5961cc626748db0be099fab64defd3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720051843131205792,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c43f2a46-7ed3-4b53-9f7e-dcfcce7e17e8,},Annotations:map[string]string{io.kubernetes.container.hash: 8cfec9ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3,PodSandboxId:e7874c8a943a56be2e2374a1bbac2572afacbb4045e481348c4633e8b99a7f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051840236005665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2bn7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d756a8-df4e-414b-b44c-32fb728c6feb,},Annotations:map[string]string{io.kubernetes.container.hash: 70139600,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085,PodSandboxId:b368293f3bee8b7fb63c1e3197aefe68e1256dc15972686a15d1323e45192c99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720051832233764752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1ac6edec-3e4e-42bd-8848-1388594611e1,},Annotations:map[string]string{io.kubernetes.container.hash: 22c45ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78,PodSandboxId:739523f0f3056b9e830be25f5605bf5218ec42cd2963f90c0be45944ada73a66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051832197738948,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9phtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5a4c0e-632d-4c1c-bfa7-f53448618
efb,},Annotations:map[string]string{io.kubernetes.container.hash: e2f0725b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864,PodSandboxId:684b1cb3c06c71b58cd54ec27ef617a80358de57835e3e5e0339a9ca11c2027b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051828531927058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed9b1ee5323bfe1840da003810cc9d9c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6edda731,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d,PodSandboxId:3b5b194229c2c7fe1e495c6f82e1f6272ea3de03cdae2c86e18118d9b39e39c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051828464031375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5de83de9c9b65f4bd1f185efdb900cd8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: d4747eae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0,PodSandboxId:a9c52f9deb0ad607e612f6e5742a572b8842b510864fae5f6e54e66298537791,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051828440157316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4dc13dfdc5f0b7be029f80782c2101d,},Annotations:map[string]string{io.kubernetes.container.hash:
838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d,PodSandboxId:1473c2739c681ceb03807ba56c9a6b5be4cdce1658019601cb3598841e6b67a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051828450158559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6a5003154c341c16886b3b155673039,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16da4f0a-256e-4ded-900e-6ff68ab1b48f name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.270038833Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6acace7a-fa54-4aff-ada0-8a6fc64330cb name=/runtime.v1.RuntimeService/Version
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.270168244Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6acace7a-fa54-4aff-ada0-8a6fc64330cb name=/runtime.v1.RuntimeService/Version
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.271814449Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e06435b5-b166-4c91-9ebb-e39a720e49af name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.272329575Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052641272299586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e06435b5-b166-4c91-9ebb-e39a720e49af name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.273274634Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea56af5e-1e07-48e1-b60f-005a437740c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.273357885Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea56af5e-1e07-48e1-b60f-005a437740c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.273569209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f,PodSandboxId:b368293f3bee8b7fb63c1e3197aefe68e1256dc15972686a15d1323e45192c99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720051863005696629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac6edec-3e4e-42bd-8848-1388594611e1,},Annotations:map[string]string{io.kubernetes.container.hash: 22c45ef9,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56bf46a96bbb2f1a3de7cf20da1dae9b715251d6673109d1a9f0f11ae81cc5f6,PodSandboxId:06e56aaafbfc94d4741d164602e4acb4b5961cc626748db0be099fab64defd3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720051843131205792,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c43f2a46-7ed3-4b53-9f7e-dcfcce7e17e8,},Annotations:map[string]string{io.kubernetes.container.hash: 8cfec9ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3,PodSandboxId:e7874c8a943a56be2e2374a1bbac2572afacbb4045e481348c4633e8b99a7f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051840236005665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2bn7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d756a8-df4e-414b-b44c-32fb728c6feb,},Annotations:map[string]string{io.kubernetes.container.hash: 70139600,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085,PodSandboxId:b368293f3bee8b7fb63c1e3197aefe68e1256dc15972686a15d1323e45192c99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720051832233764752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1ac6edec-3e4e-42bd-8848-1388594611e1,},Annotations:map[string]string{io.kubernetes.container.hash: 22c45ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78,PodSandboxId:739523f0f3056b9e830be25f5605bf5218ec42cd2963f90c0be45944ada73a66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051832197738948,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9phtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5a4c0e-632d-4c1c-bfa7-f53448618
efb,},Annotations:map[string]string{io.kubernetes.container.hash: e2f0725b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864,PodSandboxId:684b1cb3c06c71b58cd54ec27ef617a80358de57835e3e5e0339a9ca11c2027b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051828531927058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed9b1ee5323bfe1840da003810cc9d9c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6edda731,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d,PodSandboxId:3b5b194229c2c7fe1e495c6f82e1f6272ea3de03cdae2c86e18118d9b39e39c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051828464031375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5de83de9c9b65f4bd1f185efdb900cd8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: d4747eae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0,PodSandboxId:a9c52f9deb0ad607e612f6e5742a572b8842b510864fae5f6e54e66298537791,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051828440157316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4dc13dfdc5f0b7be029f80782c2101d,},Annotations:map[string]string{io.kubernetes.container.hash:
838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d,PodSandboxId:1473c2739c681ceb03807ba56c9a6b5be4cdce1658019601cb3598841e6b67a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051828450158559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6a5003154c341c16886b3b155673039,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea56af5e-1e07-48e1-b60f-005a437740c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.315016385Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a6a7b80a-b613-425d-b7b9-727b9bc9508e name=/runtime.v1.RuntimeService/Version
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.315216502Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a6a7b80a-b613-425d-b7b9-727b9bc9508e name=/runtime.v1.RuntimeService/Version
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.316782867Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5bd253eb-b2eb-40fe-a333-b003a899ce06 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.317203627Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052641317181199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5bd253eb-b2eb-40fe-a333-b003a899ce06 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.318008147Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4de885a9-ae9a-4d2c-a429-7e16e1a127f9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.318085229Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4de885a9-ae9a-4d2c-a429-7e16e1a127f9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:01 embed-certs-687975 crio[732]: time="2024-07-04 00:24:01.318284203Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f,PodSandboxId:b368293f3bee8b7fb63c1e3197aefe68e1256dc15972686a15d1323e45192c99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720051863005696629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac6edec-3e4e-42bd-8848-1388594611e1,},Annotations:map[string]string{io.kubernetes.container.hash: 22c45ef9,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56bf46a96bbb2f1a3de7cf20da1dae9b715251d6673109d1a9f0f11ae81cc5f6,PodSandboxId:06e56aaafbfc94d4741d164602e4acb4b5961cc626748db0be099fab64defd3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720051843131205792,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c43f2a46-7ed3-4b53-9f7e-dcfcce7e17e8,},Annotations:map[string]string{io.kubernetes.container.hash: 8cfec9ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3,PodSandboxId:e7874c8a943a56be2e2374a1bbac2572afacbb4045e481348c4633e8b99a7f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051840236005665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2bn7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d756a8-df4e-414b-b44c-32fb728c6feb,},Annotations:map[string]string{io.kubernetes.container.hash: 70139600,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085,PodSandboxId:b368293f3bee8b7fb63c1e3197aefe68e1256dc15972686a15d1323e45192c99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720051832233764752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1ac6edec-3e4e-42bd-8848-1388594611e1,},Annotations:map[string]string{io.kubernetes.container.hash: 22c45ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78,PodSandboxId:739523f0f3056b9e830be25f5605bf5218ec42cd2963f90c0be45944ada73a66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051832197738948,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9phtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5a4c0e-632d-4c1c-bfa7-f53448618
efb,},Annotations:map[string]string{io.kubernetes.container.hash: e2f0725b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864,PodSandboxId:684b1cb3c06c71b58cd54ec27ef617a80358de57835e3e5e0339a9ca11c2027b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051828531927058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed9b1ee5323bfe1840da003810cc9d9c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6edda731,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d,PodSandboxId:3b5b194229c2c7fe1e495c6f82e1f6272ea3de03cdae2c86e18118d9b39e39c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051828464031375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5de83de9c9b65f4bd1f185efdb900cd8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: d4747eae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0,PodSandboxId:a9c52f9deb0ad607e612f6e5742a572b8842b510864fae5f6e54e66298537791,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051828440157316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4dc13dfdc5f0b7be029f80782c2101d,},Annotations:map[string]string{io.kubernetes.container.hash:
838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d,PodSandboxId:1473c2739c681ceb03807ba56c9a6b5be4cdce1658019601cb3598841e6b67a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051828450158559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6a5003154c341c16886b3b155673039,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4de885a9-ae9a-4d2c-a429-7e16e1a127f9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5718f2328eaa9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   b368293f3bee8       storage-provisioner
	56bf46a96bbb2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   06e56aaafbfc9       busybox
	ccbd6757ef6ec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   e7874c8a943a5       coredns-7db6d8ff4d-2bn7d
	0a20f1a805446       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   b368293f3bee8       storage-provisioner
	0758cc11c578a       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      13 minutes ago      Running             kube-proxy                1                   739523f0f3056       kube-proxy-9phtm
	e2490c1548394       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   684b1cb3c06c7       etcd-embed-certs-687975
	2c26905e98271       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      13 minutes ago      Running             kube-apiserver            1                   3b5b194229c2c       kube-apiserver-embed-certs-687975
	49302273be8ed       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      13 minutes ago      Running             kube-controller-manager   1                   1473c2739c681       kube-controller-manager-embed-certs-687975
	bac9db9686284       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      13 minutes ago      Running             kube-scheduler            1                   a9c52f9deb0ad       kube-scheduler-embed-certs-687975
	
	
	==> coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46306 - 23492 "HINFO IN 5648278252653877547.872263897006372740. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014111921s
	
	
	==> describe nodes <==
	Name:               embed-certs-687975
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-687975
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=embed-certs-687975
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_04T00_02_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Jul 2024 00:02:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-687975
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Jul 2024 00:23:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Jul 2024 00:21:14 +0000   Thu, 04 Jul 2024 00:02:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Jul 2024 00:21:14 +0000   Thu, 04 Jul 2024 00:02:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Jul 2024 00:21:14 +0000   Thu, 04 Jul 2024 00:02:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Jul 2024 00:21:14 +0000   Thu, 04 Jul 2024 00:10:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.213
	  Hostname:    embed-certs-687975
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9014716dc5654ce3b2e482f446692f40
	  System UUID:                9014716d-c565-4ce3-b2e4-82f446692f40
	  Boot ID:                    fb68e1e1-c3e6-484a-ae3e-33a5a2249f14
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7db6d8ff4d-2bn7d                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-687975                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-687975             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-687975    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-9phtm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-687975             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-569cc877fc-jpmsg               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-687975 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-687975 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-687975 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                21m                kubelet          Node embed-certs-687975 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-687975 event: Registered Node embed-certs-687975 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-687975 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-687975 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-687975 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-687975 event: Registered Node embed-certs-687975 in Controller
	
	
	==> dmesg <==
	[Jul 4 00:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051582] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040529] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.607478] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.474040] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.563580] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.200328] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.060691] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067953] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.207358] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.128405] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.304639] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +4.723742] systemd-fstab-generator[813]: Ignoring "noauto" option for root device
	[  +0.069454] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.398550] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +4.606019] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.001423] systemd-fstab-generator[1546]: Ignoring "noauto" option for root device
	[  +4.519621] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.635377] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] <==
	{"level":"info","ts":"2024-07-04T00:10:46.122057Z","caller":"traceutil/trace.go:171","msg":"trace[1853047012] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"846.023745ms","start":"2024-07-04T00:10:45.27602Z","end":"2024-07-04T00:10:46.122043Z","steps":["trace[1853047012] 'process raft request'  (duration: 387.378473ms)","trace[1853047012] 'compare'  (duration: 457.142757ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-04T00:10:46.122117Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"843.79988ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-687975\" ","response":"range_response_count:1 size:5486"}
	{"level":"info","ts":"2024-07-04T00:10:46.130842Z","caller":"traceutil/trace.go:171","msg":"trace[491616219] range","detail":"{range_begin:/registry/minions/embed-certs-687975; range_end:; response_count:1; response_revision:587; }","duration":"852.52254ms","start":"2024-07-04T00:10:45.278306Z","end":"2024-07-04T00:10:46.130829Z","steps":["trace[491616219] 'agreement among raft nodes before linearized reading'  (duration: 843.76724ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-04T00:10:46.13091Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-04T00:10:45.278304Z","time spent":"852.589623ms","remote":"127.0.0.1:35122","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":5509,"request content":"key:\"/registry/minions/embed-certs-687975\" "}
	{"level":"warn","ts":"2024-07-04T00:10:46.131089Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-04T00:10:45.2781Z","time spent":"852.974672ms","remote":"127.0.0.1:35122","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":5509,"request content":"key:\"/registry/minions/embed-certs-687975\" "}
	{"level":"warn","ts":"2024-07-04T00:10:46.132716Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-04T00:10:45.276013Z","time spent":"856.575784ms","remote":"127.0.0.1:35124","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6291,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-687975\" mod_revision:485 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-687975\" value_size:6214 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-687975\" > >"}
	{"level":"info","ts":"2024-07-04T00:10:46.132887Z","caller":"traceutil/trace.go:171","msg":"trace[485308331] range","detail":"{range_begin:/registry/minions/embed-certs-687975; range_end:; response_count:1; response_revision:587; }","duration":"854.55651ms","start":"2024-07-04T00:10:45.278322Z","end":"2024-07-04T00:10:46.132878Z","steps":["trace[485308331] 'agreement among raft nodes before linearized reading'  (duration: 843.634652ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-04T00:10:46.132938Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-04T00:10:45.278317Z","time spent":"854.611964ms","remote":"127.0.0.1:35122","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":5509,"request content":"key:\"/registry/minions/embed-certs-687975\" "}
	{"level":"warn","ts":"2024-07-04T00:10:46.5792Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"336.302941ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9250271017933787495 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:496 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-04T00:10:46.579356Z","caller":"traceutil/trace.go:171","msg":"trace[1429368606] linearizableReadLoop","detail":"{readStateIndex:634; appliedIndex:633; }","duration":"433.268132ms","start":"2024-07-04T00:10:46.146076Z","end":"2024-07-04T00:10:46.579344Z","steps":["trace[1429368606] 'read index received'  (duration: 97.001861ms)","trace[1429368606] 'applied index is now lower than readState.Index'  (duration: 336.265129ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-04T00:10:46.579799Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"433.711783ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-687975\" ","response":"range_response_count:1 size:6914"}
	{"level":"info","ts":"2024-07-04T00:10:46.579921Z","caller":"traceutil/trace.go:171","msg":"trace[1277335808] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-embed-certs-687975; range_end:; response_count:1; response_revision:588; }","duration":"433.863855ms","start":"2024-07-04T00:10:46.146047Z","end":"2024-07-04T00:10:46.579911Z","steps":["trace[1277335808] 'agreement among raft nodes before linearized reading'  (duration: 433.406019ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-04T00:10:46.579971Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-04T00:10:46.146034Z","time spent":"433.927465ms","remote":"127.0.0.1:35124","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":6937,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-687975\" "}
	{"level":"info","ts":"2024-07-04T00:10:46.580307Z","caller":"traceutil/trace.go:171","msg":"trace[1857357627] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"435.086248ms","start":"2024-07-04T00:10:46.14519Z","end":"2024-07-04T00:10:46.580276Z","steps":["trace[1857357627] 'process raft request'  (duration: 97.652929ms)","trace[1857357627] 'compare'  (duration: 335.905891ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-04T00:10:46.580492Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-04T00:10:46.145172Z","time spent":"435.243517ms","remote":"127.0.0.1:35414","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:496 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"info","ts":"2024-07-04T00:10:46.582528Z","caller":"traceutil/trace.go:171","msg":"trace[2037889162] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"430.424169ms","start":"2024-07-04T00:10:46.152092Z","end":"2024-07-04T00:10:46.582516Z","steps":["trace[2037889162] 'process raft request'  (duration: 430.351956ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-04T00:10:46.582831Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-04T00:10:46.152069Z","time spent":"430.687739ms","remote":"127.0.0.1:35124","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6899,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-687975\" mod_revision:483 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-687975\" value_size:6831 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-687975\" > >"}
	{"level":"warn","ts":"2024-07-04T00:11:08.161934Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.845432ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9250271017933787670 > lease_revoke:<id:005f907b1408d1a1>","response":"size:28"}
	{"level":"info","ts":"2024-07-04T00:11:32.730377Z","caller":"traceutil/trace.go:171","msg":"trace[647980089] linearizableReadLoop","detail":"{readStateIndex:688; appliedIndex:687; }","duration":"124.018006ms","start":"2024-07-04T00:11:32.606337Z","end":"2024-07-04T00:11:32.730355Z","steps":["trace[647980089] 'read index received'  (duration: 123.736541ms)","trace[647980089] 'applied index is now lower than readState.Index'  (duration: 280.957µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-04T00:11:32.73058Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.215457ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2024-07-04T00:11:32.730704Z","caller":"traceutil/trace.go:171","msg":"trace[1116024264] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:631; }","duration":"124.381121ms","start":"2024-07-04T00:11:32.606309Z","end":"2024-07-04T00:11:32.73069Z","steps":["trace[1116024264] 'agreement among raft nodes before linearized reading'  (duration: 124.140039ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-04T00:11:32.730957Z","caller":"traceutil/trace.go:171","msg":"trace[975488001] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"176.214844ms","start":"2024-07-04T00:11:32.55473Z","end":"2024-07-04T00:11:32.730945Z","steps":["trace[975488001] 'process raft request'  (duration: 175.487421ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-04T00:20:30.320138Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":834}
	{"level":"info","ts":"2024-07-04T00:20:30.333661Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":834,"took":"12.570268ms","hash":1268322157,"current-db-size-bytes":2736128,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2736128,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-07-04T00:20:30.333757Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1268322157,"revision":834,"compact-revision":-1}
	
	
	==> kernel <==
	 00:24:01 up 14 min,  0 users,  load average: 0.27, 0.18, 0.11
	Linux embed-certs-687975 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] <==
	I0704 00:18:32.726756       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:20:31.728562       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:20:31.728743       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0704 00:20:32.729022       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:20:32.729206       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0704 00:20:32.729257       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:20:32.729144       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:20:32.729353       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0704 00:20:32.730323       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:21:32.729825       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:21:32.729871       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0704 00:21:32.729879       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:21:32.731123       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:21:32.731268       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0704 00:21:32.731312       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:23:32.730732       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:23:32.731119       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0704 00:23:32.731156       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:23:32.732042       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:23:32.732121       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0704 00:23:32.732195       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] <==
	I0704 00:18:14.957134       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:18:44.392676       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:18:44.967789       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:19:14.400197       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:19:14.975519       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:19:44.405663       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:19:44.985950       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:20:14.412403       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:20:14.994693       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:20:44.419410       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:20:45.005458       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:21:14.424809       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:21:15.013399       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0704 00:21:40.814007       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="1.448315ms"
	E0704 00:21:44.429901       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:21:45.026345       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0704 00:21:51.814469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="71.519µs"
	E0704 00:22:14.435987       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:22:15.037042       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:22:44.441128       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:22:45.046400       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:23:14.446942       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:23:15.053712       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:23:44.454366       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:23:45.061752       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] <==
	I0704 00:10:32.381899       1 server_linux.go:69] "Using iptables proxy"
	I0704 00:10:32.394715       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.213"]
	I0704 00:10:32.435832       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0704 00:10:32.435944       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0704 00:10:32.435975       1 server_linux.go:165] "Using iptables Proxier"
	I0704 00:10:32.441122       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0704 00:10:32.441470       1 server.go:872] "Version info" version="v1.30.2"
	I0704 00:10:32.441915       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0704 00:10:32.443581       1 config.go:101] "Starting endpoint slice config controller"
	I0704 00:10:32.444740       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0704 00:10:32.444721       1 config.go:192] "Starting service config controller"
	I0704 00:10:32.445009       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0704 00:10:32.446250       1 config.go:319] "Starting node config controller"
	I0704 00:10:32.447250       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0704 00:10:32.544938       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0704 00:10:32.545194       1 shared_informer.go:320] Caches are synced for service config
	I0704 00:10:32.547519       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] <==
	I0704 00:10:29.257935       1 serving.go:380] Generated self-signed cert in-memory
	W0704 00:10:31.709203       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0704 00:10:31.709351       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0704 00:10:31.709452       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0704 00:10:31.709477       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0704 00:10:31.781007       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0704 00:10:31.781220       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0704 00:10:31.791818       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0704 00:10:31.791873       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0704 00:10:31.792495       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0704 00:10:31.792594       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0704 00:10:31.893865       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 04 00:21:27 embed-certs-687975 kubelet[943]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 04 00:21:27 embed-certs-687975 kubelet[943]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 04 00:21:27 embed-certs-687975 kubelet[943]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 04 00:21:40 embed-certs-687975 kubelet[943]: E0704 00:21:40.796532     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:21:51 embed-certs-687975 kubelet[943]: E0704 00:21:51.796509     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:22:02 embed-certs-687975 kubelet[943]: E0704 00:22:02.795815     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:22:14 embed-certs-687975 kubelet[943]: E0704 00:22:14.796455     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:22:27 embed-certs-687975 kubelet[943]: E0704 00:22:27.824881     943 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 04 00:22:27 embed-certs-687975 kubelet[943]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 04 00:22:27 embed-certs-687975 kubelet[943]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 04 00:22:27 embed-certs-687975 kubelet[943]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 04 00:22:27 embed-certs-687975 kubelet[943]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 04 00:22:29 embed-certs-687975 kubelet[943]: E0704 00:22:29.795143     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:22:42 embed-certs-687975 kubelet[943]: E0704 00:22:42.795892     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:22:55 embed-certs-687975 kubelet[943]: E0704 00:22:55.796475     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:23:08 embed-certs-687975 kubelet[943]: E0704 00:23:08.796369     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:23:20 embed-certs-687975 kubelet[943]: E0704 00:23:20.796772     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:23:27 embed-certs-687975 kubelet[943]: E0704 00:23:27.819765     943 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 04 00:23:27 embed-certs-687975 kubelet[943]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 04 00:23:27 embed-certs-687975 kubelet[943]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 04 00:23:27 embed-certs-687975 kubelet[943]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 04 00:23:27 embed-certs-687975 kubelet[943]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 04 00:23:33 embed-certs-687975 kubelet[943]: E0704 00:23:33.795846     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:23:44 embed-certs-687975 kubelet[943]: E0704 00:23:44.796028     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:23:55 embed-certs-687975 kubelet[943]: E0704 00:23:55.796350     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	
	
	==> storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] <==
	I0704 00:10:32.372675       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0704 00:11:02.381272       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] <==
	I0704 00:11:03.122539       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0704 00:11:03.136448       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0704 00:11:03.136845       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0704 00:11:20.543458       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0704 00:11:20.543693       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-687975_2deddaea-fbee-4f84-ab29-345da0a3acd0!
	I0704 00:11:20.543752       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2336027a-6017-42cd-8bce-095b0142a30c", APIVersion:"v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-687975_2deddaea-fbee-4f84-ab29-345da0a3acd0 became leader
	I0704 00:11:20.644523       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-687975_2deddaea-fbee-4f84-ab29-345da0a3acd0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-687975 -n embed-certs-687975
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-687975 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-jpmsg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-687975 describe pod metrics-server-569cc877fc-jpmsg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-687975 describe pod metrics-server-569cc877fc-jpmsg: exit status 1 (64.743522ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-jpmsg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-687975 describe pod metrics-server-569cc877fc-jpmsg: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.39s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0704 00:16:17.046506   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-995404 -n default-k8s-diff-port-995404
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-04 00:24:37.754350139 +0000 UTC m=+5870.687587893
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-995404 -n default-k8s-diff-port-995404
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-995404 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-995404 logs -n 25: (2.243780369s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-768841 -- sudo                         | cert-options-768841          | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:00 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-768841                                 | cert-options-768841          | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:00 UTC |
	| start   | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-652205                           | kubernetes-upgrade-652205    | jenkins | v1.33.1 | 04 Jul 24 00:01 UTC | 04 Jul 24 00:01 UTC |
	| start   | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:01 UTC | 04 Jul 24 00:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-979438                              | cert-expiration-979438       | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-979438                              | cert-expiration-979438       | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-029653 | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | disable-driver-mounts-029653                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:04 UTC |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-317739             | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-687975            | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:03 UTC | 04 Jul 24 00:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-995404  | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC | 04 Jul 24 00:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC |                     |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-979033        | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-317739                  | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC | 04 Jul 24 00:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-687975                 | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC | 04 Jul 24 00:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-979033                              | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC | 04 Jul 24 00:06 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-979033             | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC | 04 Jul 24 00:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-979033                              | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-995404       | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:07 UTC | 04 Jul 24 00:15 UTC |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/04 00:07:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0704 00:07:02.474140   62905 out.go:291] Setting OutFile to fd 1 ...
	I0704 00:07:02.474416   62905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:07:02.474427   62905 out.go:304] Setting ErrFile to fd 2...
	I0704 00:07:02.474431   62905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:07:02.474642   62905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0704 00:07:02.475219   62905 out.go:298] Setting JSON to false
	I0704 00:07:02.476307   62905 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6562,"bootTime":1720045060,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0704 00:07:02.476381   62905 start.go:139] virtualization: kvm guest
	I0704 00:07:02.478637   62905 out.go:177] * [default-k8s-diff-port-995404] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0704 00:07:02.480018   62905 notify.go:220] Checking for updates...
	I0704 00:07:02.480039   62905 out.go:177]   - MINIKUBE_LOCATION=18998
	I0704 00:07:02.481260   62905 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0704 00:07:02.482587   62905 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:07:02.483820   62905 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0704 00:07:02.484969   62905 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0704 00:07:02.486122   62905 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0704 00:07:02.487811   62905 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:07:02.488453   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:07:02.488538   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:07:02.503924   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0704 00:07:02.504316   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:07:02.504904   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:07:02.504924   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:07:02.505253   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:07:02.505457   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:07:02.505724   62905 driver.go:392] Setting default libvirt URI to qemu:///system
	I0704 00:07:02.506039   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:07:02.506081   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:07:02.521645   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0704 00:07:02.522115   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:07:02.522596   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:07:02.522618   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:07:02.522945   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:07:02.523144   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:07:02.557351   62905 out.go:177] * Using the kvm2 driver based on existing profile
	I0704 00:07:02.558600   62905 start.go:297] selected driver: kvm2
	I0704 00:07:02.558620   62905 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:07:02.558762   62905 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0704 00:07:02.559468   62905 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:07:02.559562   62905 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0704 00:07:02.575228   62905 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0704 00:07:02.575603   62905 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:07:02.575680   62905 cni.go:84] Creating CNI manager for ""
	I0704 00:07:02.575697   62905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:07:02.575749   62905 start.go:340] cluster config:
	{Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:07:02.575887   62905 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:07:02.577884   62905 out.go:177] * Starting "default-k8s-diff-port-995404" primary control-plane node in "default-k8s-diff-port-995404" cluster
	I0704 00:07:01.500168   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:02.579179   62905 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:07:02.579227   62905 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0704 00:07:02.579238   62905 cache.go:56] Caching tarball of preloaded images
	I0704 00:07:02.579331   62905 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0704 00:07:02.579344   62905 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0704 00:07:02.579446   62905 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/config.json ...
	I0704 00:07:02.579752   62905 start.go:360] acquireMachinesLock for default-k8s-diff-port-995404: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0704 00:07:07.580107   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:10.652249   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:16.732106   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:19.804162   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:25.884146   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:28.956241   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:35.036158   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:38.108118   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:44.188129   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:47.260270   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:53.340147   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:56.412123   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:02.492156   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:05.564174   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:11.644195   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:14.716226   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:20.796193   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:23.868215   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:29.948219   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:33.020164   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:39.100138   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:42.172138   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:48.252157   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:51.324205   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:57.404167   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:00.476183   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:06.556184   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:09.628167   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:15.708158   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:18.780202   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:24.860209   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:27.932273   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:34.012145   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:37.084155   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:43.164171   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:46.236155   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:52.316187   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:55.388138   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:58.392192   62327 start.go:364] duration metric: took 4m4.42362175s to acquireMachinesLock for "embed-certs-687975"
	I0704 00:09:58.392250   62327 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:09:58.392266   62327 fix.go:54] fixHost starting: 
	I0704 00:09:58.392607   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:09:58.392633   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:09:58.408783   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45683
	I0704 00:09:58.409328   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:09:58.409898   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:09:58.409918   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:09:58.410234   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:09:58.410438   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:09:58.410602   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:09:58.412175   62327 fix.go:112] recreateIfNeeded on embed-certs-687975: state=Stopped err=<nil>
	I0704 00:09:58.412200   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	W0704 00:09:58.412361   62327 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:09:58.414467   62327 out.go:177] * Restarting existing kvm2 VM for "embed-certs-687975" ...
	I0704 00:09:58.415958   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Start
	I0704 00:09:58.416159   62327 main.go:141] libmachine: (embed-certs-687975) Ensuring networks are active...
	I0704 00:09:58.417105   62327 main.go:141] libmachine: (embed-certs-687975) Ensuring network default is active
	I0704 00:09:58.417440   62327 main.go:141] libmachine: (embed-certs-687975) Ensuring network mk-embed-certs-687975 is active
	I0704 00:09:58.417879   62327 main.go:141] libmachine: (embed-certs-687975) Getting domain xml...
	I0704 00:09:58.418765   62327 main.go:141] libmachine: (embed-certs-687975) Creating domain...
	I0704 00:09:58.389743   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:09:58.389787   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:09:58.390105   62043 buildroot.go:166] provisioning hostname "no-preload-317739"
	I0704 00:09:58.390132   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:09:58.390388   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:09:58.392051   62043 machine.go:97] duration metric: took 4m37.421604249s to provisionDockerMachine
	I0704 00:09:58.392103   62043 fix.go:56] duration metric: took 4m37.444018711s for fixHost
	I0704 00:09:58.392111   62043 start.go:83] releasing machines lock for "no-preload-317739", held for 4m37.444044667s
	W0704 00:09:58.392131   62043 start.go:713] error starting host: provision: host is not running
	W0704 00:09:58.392245   62043 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0704 00:09:58.392263   62043 start.go:728] Will try again in 5 seconds ...
	I0704 00:09:59.657066   62327 main.go:141] libmachine: (embed-certs-687975) Waiting to get IP...
	I0704 00:09:59.657930   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:09:59.658398   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:09:59.658456   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:09:59.658368   63531 retry.go:31] will retry after 267.829987ms: waiting for machine to come up
	I0704 00:09:59.928142   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:09:59.928694   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:09:59.928720   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:09:59.928646   63531 retry.go:31] will retry after 240.308314ms: waiting for machine to come up
	I0704 00:10:00.170098   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:00.170541   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:00.170571   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:00.170481   63531 retry.go:31] will retry after 424.462623ms: waiting for machine to come up
	I0704 00:10:00.596288   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:00.596726   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:00.596755   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:00.596671   63531 retry.go:31] will retry after 450.228437ms: waiting for machine to come up
	I0704 00:10:01.048174   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:01.048731   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:01.048758   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:01.048689   63531 retry.go:31] will retry after 583.591642ms: waiting for machine to come up
	I0704 00:10:01.633432   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:01.633773   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:01.633806   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:01.633721   63531 retry.go:31] will retry after 789.480552ms: waiting for machine to come up
	I0704 00:10:02.424987   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:02.425388   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:02.425424   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:02.425329   63531 retry.go:31] will retry after 764.760669ms: waiting for machine to come up
	I0704 00:10:03.191570   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:03.191924   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:03.191953   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:03.191859   63531 retry.go:31] will retry after 1.415422425s: waiting for machine to come up
	I0704 00:10:03.392486   62043 start.go:360] acquireMachinesLock for no-preload-317739: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0704 00:10:04.608804   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:04.609306   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:04.609336   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:04.609244   63531 retry.go:31] will retry after 1.426962337s: waiting for machine to come up
	I0704 00:10:06.038152   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:06.038630   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:06.038685   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:06.038604   63531 retry.go:31] will retry after 1.511071665s: waiting for machine to come up
	I0704 00:10:07.551435   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:07.551977   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:07.552000   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:07.551934   63531 retry.go:31] will retry after 2.275490025s: waiting for machine to come up
	I0704 00:10:09.829070   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:09.829545   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:09.829577   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:09.829480   63531 retry.go:31] will retry after 3.272884116s: waiting for machine to come up
	I0704 00:10:13.103857   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:13.104320   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:13.104356   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:13.104267   63531 retry.go:31] will retry after 4.532823906s: waiting for machine to come up
	I0704 00:10:17.642356   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.642900   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has current primary IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.642923   62327 main.go:141] libmachine: (embed-certs-687975) Found IP for machine: 192.168.39.213
	I0704 00:10:17.642935   62327 main.go:141] libmachine: (embed-certs-687975) Reserving static IP address...
	I0704 00:10:17.643368   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "embed-certs-687975", mac: "52:54:00:ee:64:73", ip: "192.168.39.213"} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.643397   62327 main.go:141] libmachine: (embed-certs-687975) DBG | skip adding static IP to network mk-embed-certs-687975 - found existing host DHCP lease matching {name: "embed-certs-687975", mac: "52:54:00:ee:64:73", ip: "192.168.39.213"}
	I0704 00:10:17.643408   62327 main.go:141] libmachine: (embed-certs-687975) Reserved static IP address: 192.168.39.213
	I0704 00:10:17.643421   62327 main.go:141] libmachine: (embed-certs-687975) Waiting for SSH to be available...
	I0704 00:10:17.643433   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Getting to WaitForSSH function...
	I0704 00:10:17.645723   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.646019   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.646047   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.646176   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Using SSH client type: external
	I0704 00:10:17.646199   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa (-rw-------)
	I0704 00:10:17.646264   62327 main.go:141] libmachine: (embed-certs-687975) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:10:17.646288   62327 main.go:141] libmachine: (embed-certs-687975) DBG | About to run SSH command:
	I0704 00:10:17.646306   62327 main.go:141] libmachine: (embed-certs-687975) DBG | exit 0
	I0704 00:10:17.772683   62327 main.go:141] libmachine: (embed-certs-687975) DBG | SSH cmd err, output: <nil>: 
	I0704 00:10:17.773080   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetConfigRaw
	I0704 00:10:17.773695   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:17.776766   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.777155   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.777197   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.777469   62327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/config.json ...
	I0704 00:10:17.777698   62327 machine.go:94] provisionDockerMachine start ...
	I0704 00:10:17.777721   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:17.777970   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:17.780304   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.780636   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.780667   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.780800   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:17.780985   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.781136   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.781354   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:17.781533   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:17.781729   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:17.781740   62327 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:10:17.884677   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:10:17.884711   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetMachineName
	I0704 00:10:17.884940   62327 buildroot.go:166] provisioning hostname "embed-certs-687975"
	I0704 00:10:17.884967   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetMachineName
	I0704 00:10:17.885180   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:17.887980   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.888394   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.888417   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.888502   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:17.888758   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.888960   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.889102   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:17.889335   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:17.889538   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:17.889557   62327 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-687975 && echo "embed-certs-687975" | sudo tee /etc/hostname
	I0704 00:10:18.006597   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-687975
	
	I0704 00:10:18.006624   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.009477   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.009772   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.009805   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.009942   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.010148   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.010315   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.010485   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.010664   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:18.010821   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:18.010836   62327 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-687975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-687975/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-687975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:10:18.121310   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
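[editor note] The provisioner above first sets the guest hostname over SSH and then patches /etc/hosts so the new name resolves locally. A rough sketch of how such a command string can be assembled in Go; it folds both steps into one string for brevity (in the log they are two separate SSH commands), and the shell fragment simply mirrors the script shown above:

    package main

    import "fmt"

    // hostnameCmd builds the shell run on the guest to set the hostname and
    // keep /etc/hosts in sync, following the script shown in the log above.
    func hostnameCmd(hostname string) string {
    	return fmt.Sprintf(
    		`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname && `+
    			`if ! grep -xq '.*\s%[1]s' /etc/hosts; then `+
    			`if grep -xq '127.0.1.1\s.*' /etc/hosts; then `+
    			`sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts; `+
    			`else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi`,
    		hostname)
    }

    func main() {
    	// Print the command that would be sent over SSH for this profile.
    	fmt.Println(hostnameCmd("embed-certs-687975"))
    }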
	I0704 00:10:18.121350   62327 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:10:18.121374   62327 buildroot.go:174] setting up certificates
	I0704 00:10:18.121395   62327 provision.go:84] configureAuth start
	I0704 00:10:18.121411   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetMachineName
	I0704 00:10:18.121701   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:18.124118   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.124499   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.124528   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.124646   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.126489   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.126778   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.126802   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.126913   62327 provision.go:143] copyHostCerts
	I0704 00:10:18.126987   62327 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:10:18.127002   62327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:10:18.127090   62327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:10:18.127222   62327 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:10:18.127232   62327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:10:18.127272   62327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:10:18.127348   62327 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:10:18.127357   62327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:10:18.127388   62327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:10:18.127461   62327 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.embed-certs-687975 san=[127.0.0.1 192.168.39.213 embed-certs-687975 localhost minikube]
	I0704 00:10:18.451857   62327 provision.go:177] copyRemoteCerts
	I0704 00:10:18.451947   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:10:18.451980   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.454696   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.455051   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.455076   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.455301   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.455512   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.455675   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.455798   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:18.540053   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:10:18.566392   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0704 00:10:18.593268   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0704 00:10:18.619051   62327 provision.go:87] duration metric: took 497.642815ms to configureAuth
	I0704 00:10:18.619081   62327 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:10:18.619299   62327 config.go:182] Loaded profile config "embed-certs-687975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:10:18.619386   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.621773   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.622057   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.622087   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.622249   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.622475   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.622632   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.622760   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.622971   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:18.623143   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:18.623160   62327 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:10:19.141009   62670 start.go:364] duration metric: took 3m45.774576164s to acquireMachinesLock for "old-k8s-version-979033"
	I0704 00:10:19.141068   62670 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:10:19.141115   62670 fix.go:54] fixHost starting: 
	I0704 00:10:19.141561   62670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:19.141591   62670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:19.159844   62670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43373
	I0704 00:10:19.160353   62670 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:19.160945   62670 main.go:141] libmachine: Using API Version  1
	I0704 00:10:19.160971   62670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:19.161347   62670 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:19.161640   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:19.161799   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetState
	I0704 00:10:19.163575   62670 fix.go:112] recreateIfNeeded on old-k8s-version-979033: state=Stopped err=<nil>
	I0704 00:10:19.163597   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	W0704 00:10:19.163753   62670 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:10:19.165906   62670 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-979033" ...
	I0704 00:10:18.904225   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:10:18.904256   62327 machine.go:97] duration metric: took 1.126543823s to provisionDockerMachine
	I0704 00:10:18.904269   62327 start.go:293] postStartSetup for "embed-certs-687975" (driver="kvm2")
	I0704 00:10:18.904283   62327 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:10:18.904304   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:18.904626   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:10:18.904652   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.907391   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.907864   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.907915   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.908206   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.908453   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.908623   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.908768   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:18.991583   62327 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:10:18.996145   62327 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:10:18.996187   62327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:10:18.996255   62327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:10:18.996341   62327 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:10:18.996443   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:10:19.006978   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:19.033605   62327 start.go:296] duration metric: took 129.322677ms for postStartSetup
	I0704 00:10:19.033643   62327 fix.go:56] duration metric: took 20.641387402s for fixHost
	I0704 00:10:19.033663   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:19.036302   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.036813   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.036877   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.036919   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:19.037115   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.037307   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.037488   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:19.037687   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:19.037888   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:19.037905   62327 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:10:19.140855   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051819.116387913
	
	I0704 00:10:19.140878   62327 fix.go:216] guest clock: 1720051819.116387913
	I0704 00:10:19.140885   62327 fix.go:229] Guest: 2024-07-04 00:10:19.116387913 +0000 UTC Remote: 2024-07-04 00:10:19.033646932 +0000 UTC m=+265.206951926 (delta=82.740981ms)
	I0704 00:10:19.140914   62327 fix.go:200] guest clock delta is within tolerance: 82.740981ms
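[editor note] The fix.go lines above read the guest clock over SSH, compare it with the host clock, and only act if the delta exceeds a tolerance. A small sketch of that comparison, assuming the guest returned a seconds.nanoseconds timestamp as in the log; the tolerance value and parsing approach here are assumptions, not minikube's exact logic:

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // clockDelta parses the guest's "date +%s.%N"-style output and returns
    // how far it is from the reference (host) time.
    func clockDelta(guestOut string, now time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(now), nil
    }

    func main() {
    	// Values taken from the log above; the one-second tolerance is an assumption.
    	const tolerance = time.Second
    	delta, err := clockDelta("1720051819.116387913", time.Unix(1720051819, 33646932))
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }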
	I0704 00:10:19.140920   62327 start.go:83] releasing machines lock for "embed-certs-687975", held for 20.748686488s
	I0704 00:10:19.140951   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.141280   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:19.144343   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.144774   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.144802   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.144975   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.145590   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.145810   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.145896   62327 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:10:19.145941   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:19.146048   62327 ssh_runner.go:195] Run: cat /version.json
	I0704 00:10:19.146074   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:19.148955   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.148977   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.149312   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.149339   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.149470   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.149493   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.149555   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:19.149755   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.149831   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:19.149921   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:19.150094   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.150096   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:19.150293   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:19.150459   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:19.250910   62327 ssh_runner.go:195] Run: systemctl --version
	I0704 00:10:19.257541   62327 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:10:19.413446   62327 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:10:19.419871   62327 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:10:19.419985   62327 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:10:19.439141   62327 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:10:19.439171   62327 start.go:494] detecting cgroup driver to use...
	I0704 00:10:19.439253   62327 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:10:19.457474   62327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:10:19.479279   62327 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:10:19.479353   62327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:10:19.498771   62327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:10:19.513968   62327 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:10:19.640950   62327 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:10:19.817181   62327 docker.go:233] disabling docker service ...
	I0704 00:10:19.817248   62327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:10:19.838524   62327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:10:19.855479   62327 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:10:19.976564   62327 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:10:20.106140   62327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:10:20.121152   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:10:20.143893   62327 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:10:20.143965   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.156806   62327 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:10:20.156892   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.168660   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.180592   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.192151   62327 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:10:20.204202   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.215502   62327 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.235355   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.246834   62327 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:10:20.264718   62327 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:10:20.264786   62327 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:10:20.280133   62327 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
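[editor note] Above, the netfilter check fails because the br_netfilter module is not loaded yet, so the tool falls back to modprobe and then enables IPv4 forwarding. A hedged os/exec sketch of that check-then-fallback sequence (command strings mirror the log; error handling is simplified):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a command and returns its combined output.
    func run(name string, args ...string) (string, error) {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	// Verify the bridge netfilter sysctl is visible; if not, load the module.
    	if _, err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
    		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
    		if _, err := run("sudo", "modprobe", "br_netfilter"); err != nil {
    			fmt.Println("modprobe failed:", err)
    		}
    	}
    	// Make sure IPv4 forwarding is on, as in the log above.
    	if _, err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
    		fmt.Println("enabling ip_forward failed:", err)
    	}
    }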
	I0704 00:10:20.291521   62327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:20.416530   62327 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:10:20.567852   62327 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:10:20.567952   62327 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:10:20.572992   62327 start.go:562] Will wait 60s for crictl version
	I0704 00:10:20.573052   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:10:20.577295   62327 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:10:20.617746   62327 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:10:20.617840   62327 ssh_runner.go:195] Run: crio --version
	I0704 00:10:20.648158   62327 ssh_runner.go:195] Run: crio --version
	I0704 00:10:20.682039   62327 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:10:19.167360   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .Start
	I0704 00:10:19.167575   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring networks are active...
	I0704 00:10:19.168591   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring network default is active
	I0704 00:10:19.169064   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring network mk-old-k8s-version-979033 is active
	I0704 00:10:19.169488   62670 main.go:141] libmachine: (old-k8s-version-979033) Getting domain xml...
	I0704 00:10:19.170309   62670 main.go:141] libmachine: (old-k8s-version-979033) Creating domain...
	I0704 00:10:20.487278   62670 main.go:141] libmachine: (old-k8s-version-979033) Waiting to get IP...
	I0704 00:10:20.488195   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.488679   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.488751   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.488643   63677 retry.go:31] will retry after 227.362639ms: waiting for machine to come up
	I0704 00:10:20.718322   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.718794   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.718820   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.718766   63677 retry.go:31] will retry after 266.291784ms: waiting for machine to come up
	I0704 00:10:20.986238   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.986779   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.986805   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.986726   63677 retry.go:31] will retry after 308.137887ms: waiting for machine to come up
	I0704 00:10:21.296450   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:21.297052   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:21.297085   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:21.297001   63677 retry.go:31] will retry after 400.976495ms: waiting for machine to come up
	I0704 00:10:21.699758   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:21.700266   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:21.700299   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:21.700227   63677 retry.go:31] will retry after 464.329709ms: waiting for machine to come up
	I0704 00:10:22.165905   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:22.166452   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:22.166482   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:22.166393   63677 retry.go:31] will retry after 652.357119ms: waiting for machine to come up
	I0704 00:10:22.820302   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:22.820777   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:22.820800   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:22.820725   63677 retry.go:31] will retry after 835.974316ms: waiting for machine to come up
	I0704 00:10:20.683820   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:20.686663   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:20.687040   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:20.687070   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:20.687312   62327 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0704 00:10:20.691953   62327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:20.705149   62327 kubeadm.go:877] updating cluster {Name:embed-certs-687975 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-687975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:10:20.705368   62327 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:10:20.705433   62327 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:20.748549   62327 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:10:20.748613   62327 ssh_runner.go:195] Run: which lz4
	I0704 00:10:20.752991   62327 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0704 00:10:20.757764   62327 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:10:20.757810   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0704 00:10:22.395918   62327 crio.go:462] duration metric: took 1.642974021s to copy over tarball
	I0704 00:10:22.396029   62327 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:10:23.658976   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:23.659482   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:23.659509   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:23.659432   63677 retry.go:31] will retry after 1.244693887s: waiting for machine to come up
	I0704 00:10:24.906359   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:24.906769   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:24.906801   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:24.906733   63677 retry.go:31] will retry after 1.212336933s: waiting for machine to come up
	I0704 00:10:26.121130   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:26.121655   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:26.121684   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:26.121599   63677 retry.go:31] will retry after 1.622791006s: waiting for machine to come up
	I0704 00:10:27.745848   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:27.746399   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:27.746427   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:27.746349   63677 retry.go:31] will retry after 2.596558781s: waiting for machine to come up
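
The "will retry after ..." lines above are libmachine polling the libvirt network until the old-k8s-version VM picks up a DHCP lease, with the delay growing each round. For reference, a minimal Go sketch of that retry shape; lookupIP is a hypothetical stand-in for the real lease lookup, not minikube's code:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases;
// it returns an error until the VM has been assigned an address.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address of domain " + domain)
}

// waitForIP retries lookupIP with a jittered, growing delay, mirroring the
// "will retry after ..." messages in the log above.
func waitForIP(domain string, attempts int) (string, error) {
	delay := time.Second
	for i := 0; i < attempts; i++ {
		ip, err := lookupIP(domain)
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // grow the base delay roughly 1.5x per attempt
	}
	return "", fmt.Errorf("machine %s never reported an IP", domain)
}

func main() {
	if _, err := waitForIP("old-k8s-version-979033", 3); err != nil {
		fmt.Println(err)
	}
}
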
	I0704 00:10:24.757599   62327 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.3615352s)
	I0704 00:10:24.757639   62327 crio.go:469] duration metric: took 2.361688123s to extract the tarball
	I0704 00:10:24.757650   62327 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:10:24.796023   62327 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:24.842665   62327 crio.go:514] all images are preloaded for cri-o runtime.
	I0704 00:10:24.842691   62327 cache_images.go:84] Images are preloaded, skipping loading
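
The pair of "sudo crictl images --output json" runs above bracket the preload: the first finds no kube-apiserver image and triggers the tarball copy, the second confirms everything is present. A rough, self-contained Go sketch of that check; the images/repoTags field names follow crictl's JSON output as commonly documented, so treat the snippet as illustrative rather than minikube's actual parser:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the shape of `crictl images --output json`:
// {"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"], ...}, ...]}
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the CRI runtime already knows the given tag,
// i.e. whether the preload tarball still needs to be copied and extracted.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.2")
	fmt.Println(ok, err)
}
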
	I0704 00:10:24.842699   62327 kubeadm.go:928] updating node { 192.168.39.213 8443 v1.30.2 crio true true} ...
	I0704 00:10:24.842805   62327 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-687975 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-687975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
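
The kubelet unit text above is rendered from three per-node values: the Kubernetes version, the node name, and the node IP. A minimal text/template sketch of producing such a drop-in; the template literal below is reconstructed from the log output, not copied from minikube's source:

package main

import (
	"os"
	"text/template"
)

// dropIn is an illustrative template for a kubelet systemd drop-in like the
// one logged above; only the three values that vary per node are filled in.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Print to stdout here; the real flow scps the rendered text to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
	t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.30.2", "embed-certs-687975", "192.168.39.213"})
}
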
	I0704 00:10:24.842891   62327 ssh_runner.go:195] Run: crio config
	I0704 00:10:24.892918   62327 cni.go:84] Creating CNI manager for ""
	I0704 00:10:24.892952   62327 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:10:24.892979   62327 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:10:24.893021   62327 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.213 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-687975 NodeName:embed-certs-687975 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:10:24.893288   62327 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-687975"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:10:24.893372   62327 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:10:24.905019   62327 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:10:24.905092   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:10:24.919465   62327 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0704 00:10:24.942754   62327 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:10:24.965089   62327 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
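
The generated kubeadm config just transferred above ties together three address ranges: pod subnet 10.244.0.0/16, service CIDR 10.96.0.0/12, and the node IP 192.168.39.213. A small stdlib sketch of the kind of sanity check one can run over those values (not something minikube does in this exact form):

package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDRs share any addresses; for prefix ranges
// this reduces to one containing the other's network address.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, podCIDR, _ := net.ParseCIDR("10.244.0.0/16")
	_, svcCIDR, _ := net.ParseCIDR("10.96.0.0/12")
	nodeIP := net.ParseIP("192.168.39.213")

	fmt.Println("pod/service CIDRs overlap:", overlaps(podCIDR, svcCIDR)) // expect false
	fmt.Println("node IP inside pod CIDR:", podCIDR.Contains(nodeIP))     // expect false
	fmt.Println("node IP inside service CIDR:", svcCIDR.Contains(nodeIP)) // expect false
}
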
	I0704 00:10:24.988121   62327 ssh_runner.go:195] Run: grep 192.168.39.213	control-plane.minikube.internal$ /etc/hosts
	I0704 00:10:24.993425   62327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:25.006830   62327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:25.145124   62327 ssh_runner.go:195] Run: sudo systemctl start kubelet
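
The bash one-liner a few lines above pins control-plane.minikube.internal to the node IP by stripping any old entry from /etc/hosts and appending the current one. The same idempotent rewrite in Go, sketched against a configurable path so it can be tried outside the VM:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites hostsPath so that exactly one line maps name to ip,
// mirroring the grep -v / echo / cp sequence in the log above.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any previous mapping for this name
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Point at a scratch file rather than the real /etc/hosts when experimenting.
	if err := pinHost("hosts.test", "192.168.39.213", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
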
	I0704 00:10:25.164000   62327 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975 for IP: 192.168.39.213
	I0704 00:10:25.164021   62327 certs.go:194] generating shared ca certs ...
	I0704 00:10:25.164036   62327 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:25.164285   62327 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:10:25.164361   62327 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:10:25.164375   62327 certs.go:256] generating profile certs ...
	I0704 00:10:25.164522   62327 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/client.key
	I0704 00:10:25.164598   62327 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/apiserver.key.c5f2d6ca
	I0704 00:10:25.164657   62327 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/proxy-client.key
	I0704 00:10:25.164816   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:10:25.164875   62327 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:10:25.164889   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:10:25.164918   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:10:25.164949   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:10:25.164983   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:10:25.165049   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:25.165801   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:10:25.203822   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:10:25.240795   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:10:25.273743   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:10:25.312678   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0704 00:10:25.339172   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:10:25.365805   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:10:25.392155   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0704 00:10:25.417662   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:10:25.445025   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:10:25.472697   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:10:25.505204   62327 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:10:25.536867   62327 ssh_runner.go:195] Run: openssl version
	I0704 00:10:25.543487   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:10:25.555550   62327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:25.560599   62327 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:25.560678   62327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:25.566757   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:10:25.578244   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:10:25.590271   62327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:10:25.595409   62327 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:10:25.595475   62327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:10:25.601755   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:10:25.614572   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:10:25.627445   62327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:10:25.632631   62327 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:10:25.632688   62327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:10:25.639047   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:10:25.651199   62327 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:10:25.656829   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:10:25.663869   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:10:25.670993   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:10:25.678309   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:10:25.685282   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:10:25.692383   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
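
Each "openssl x509 ... -checkend 86400" run above asks whether a certificate will still be valid 24 hours from now (exit status 0 means yes, so no regeneration is needed). The crypto/x509 equivalent, as a stand-alone sketch rather than minikube's own check:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path stops being valid
// before now+window, matching `openssl x509 -checkend <seconds>` semantics.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}
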
	I0704 00:10:25.699625   62327 kubeadm.go:391] StartCluster: {Name:embed-certs-687975 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-687975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:10:25.700176   62327 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:10:25.700240   62327 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:25.744248   62327 cri.go:89] found id: ""
	I0704 00:10:25.744323   62327 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:10:25.755623   62327 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:10:25.755643   62327 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:10:25.755648   62327 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:10:25.755697   62327 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:10:25.766631   62327 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:10:25.767627   62327 kubeconfig.go:125] found "embed-certs-687975" server: "https://192.168.39.213:8443"
	I0704 00:10:25.769625   62327 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:10:25.781667   62327 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.213
	I0704 00:10:25.781710   62327 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:10:25.781723   62327 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:10:25.781774   62327 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:25.829584   62327 cri.go:89] found id: ""
	I0704 00:10:25.829669   62327 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:10:25.847738   62327 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:10:25.859825   62327 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:10:25.859864   62327 kubeadm.go:156] found existing configuration files:
	
	I0704 00:10:25.859931   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:10:25.869666   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:10:25.869722   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:10:25.879997   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:10:25.889905   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:10:25.889982   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:10:25.900023   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:10:25.909669   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:10:25.909733   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:10:25.919933   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:10:25.929422   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:10:25.929499   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:10:25.939577   62327 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
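
Each grep-then-rm pair above removes a kubeconfig that does not reference https://control-plane.minikube.internal:8443, so the kubeadm phases that follow can regenerate it. The same check expressed in Go, as an illustrative sketch only:

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes path unless it already references wantServer, which is
// what the grep / rm -f sequence in the log accomplishes. A missing file is
// treated the same as a stale one: there is nothing to keep.
func removeIfStale(path, wantServer string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), wantServer) {
		return nil // up to date, keep it
	}
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	want := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f, want); err != nil {
			fmt.Println(err)
		}
	}
}
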
	I0704 00:10:25.949669   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:26.088494   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.367443   62327 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.278903285s)
	I0704 00:10:27.367492   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.626929   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.739721   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.860860   62327 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:10:27.860938   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:28.361670   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:30.344595   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:30.345134   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:30.345157   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:30.345089   63677 retry.go:31] will retry after 2.372913839s: waiting for machine to come up
	I0704 00:10:32.719441   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:32.719866   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:32.719910   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:32.719827   63677 retry.go:31] will retry after 3.651406896s: waiting for machine to come up
	I0704 00:10:28.861698   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:28.883024   62327 api_server.go:72] duration metric: took 1.02216952s to wait for apiserver process to appear ...
	I0704 00:10:28.883057   62327 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:10:28.883083   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:28.883625   62327 api_server.go:269] stopped: https://192.168.39.213:8443/healthz: Get "https://192.168.39.213:8443/healthz": dial tcp 192.168.39.213:8443: connect: connection refused
	I0704 00:10:29.383561   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:31.679543   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:10:31.679578   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:10:31.679594   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:31.754659   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:31.754696   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:31.883935   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:31.927087   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:31.927130   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:32.383560   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:32.389095   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:32.389129   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:32.883827   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:32.890357   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:32.890385   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:33.383944   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:33.388951   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0704 00:10:33.396092   62327 api_server.go:141] control plane version: v1.30.2
	I0704 00:10:33.396119   62327 api_server.go:131] duration metric: took 4.513054882s to wait for apiserver health ...
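
The 403, then 500, then 200 progression above is the usual restart sequence: the anonymous probe is rejected until the RBAC bootstrap roles exist, then individual post-start hooks report ok one by one. A rough sketch of the same roughly half-second poll loop; TLS verification is skipped only because this probe, like the one in the log, presents no client certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the deadline passes, printing each non-OK body much like the log above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// No client cert is presented, so skip verification of the server
		// cert as well; suitable for a local health probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.39.213:8443/healthz", 2*time.Minute))
}
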
	I0704 00:10:33.396130   62327 cni.go:84] Creating CNI manager for ""
	I0704 00:10:33.396136   62327 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:10:33.398181   62327 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:10:33.399682   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:10:33.411938   62327 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0704 00:10:33.436710   62327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:10:33.447604   62327 system_pods.go:59] 8 kube-system pods found
	I0704 00:10:33.447639   62327 system_pods.go:61] "coredns-7db6d8ff4d-2bn7d" [e6d756a8-df4e-414b-b44c-32fb728c6feb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0704 00:10:33.447649   62327 system_pods.go:61] "etcd-embed-certs-687975" [72f47af0-57ca-4529-b6e4-92f543f8ada9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0704 00:10:33.447658   62327 system_pods.go:61] "kube-apiserver-embed-certs-687975" [e58beb1b-1984-4a9f-beb1-597cca55a0c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0704 00:10:33.447663   62327 system_pods.go:61] "kube-controller-manager-embed-certs-687975" [bdae6f32-b454-4a66-a3d1-a22c2a64073d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0704 00:10:33.447668   62327 system_pods.go:61] "kube-proxy-9phtm" [6b5a4c0e-632d-4c1c-bfa7-f53448618efb] Running
	I0704 00:10:33.447673   62327 system_pods.go:61] "kube-scheduler-embed-certs-687975" [72640f73-25f5-47ec-8da0-396fc31fa653] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0704 00:10:33.447678   62327 system_pods.go:61] "metrics-server-569cc877fc-jpmsg" [e2561edc-d580-461c-acae-218e6b7a2f67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:10:33.447682   62327 system_pods.go:61] "storage-provisioner" [1ac6edec-3e4e-42bd-8848-1388594611e1] Running
	I0704 00:10:33.447688   62327 system_pods.go:74] duration metric: took 10.954745ms to wait for pod list to return data ...
	I0704 00:10:33.447696   62327 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:10:33.452408   62327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:10:33.452448   62327 node_conditions.go:123] node cpu capacity is 2
	I0704 00:10:33.452460   62327 node_conditions.go:105] duration metric: took 4.757567ms to run NodePressure ...
	I0704 00:10:33.452476   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:33.724052   62327 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0704 00:10:33.732188   62327 kubeadm.go:733] kubelet initialised
	I0704 00:10:33.732211   62327 kubeadm.go:734] duration metric: took 8.128083ms waiting for restarted kubelet to initialise ...
	I0704 00:10:33.732220   62327 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:10:33.739344   62327 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.746483   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.746509   62327 pod_ready.go:81] duration metric: took 7.141056ms for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.746519   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.746526   62327 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.755457   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "etcd-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.755489   62327 pod_ready.go:81] duration metric: took 8.954479ms for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.755502   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "etcd-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.755512   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.762439   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.762476   62327 pod_ready.go:81] duration metric: took 6.95216ms for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.762489   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.762501   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.842246   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.842281   62327 pod_ready.go:81] duration metric: took 79.767249ms for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.842294   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.842303   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:34.240034   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-proxy-9phtm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.240061   62327 pod_ready.go:81] duration metric: took 397.745361ms for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:34.240070   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-proxy-9phtm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.240076   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:34.640781   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.640808   62327 pod_ready.go:81] duration metric: took 400.726608ms for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:34.640818   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.640823   62327 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:35.040614   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:35.040646   62327 pod_ready.go:81] duration metric: took 399.813017ms for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:35.040656   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:35.040662   62327 pod_ready.go:38] duration metric: took 1.308435069s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
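
pod_ready.go above checks each system-critical pod's Ready condition and skips pods whose node is itself not yet Ready. A compressed client-go sketch of the per-pod check; it assumes the k8s.io/client-go module and the jenkins kubeconfig path that appears further down in this log, and it is not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady returns true once the named pod reports the Ready condition.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18998-9396/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		ok, err := podReady(cs, "kube-system", "etcd-embed-certs-687975")
		fmt.Println("ready:", ok, "err:", err)
		if ok {
			return
		}
		time.Sleep(2 * time.Second)
	}
}
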
	I0704 00:10:35.040678   62327 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:10:35.053971   62327 ops.go:34] apiserver oom_adj: -16
	I0704 00:10:35.053997   62327 kubeadm.go:591] duration metric: took 9.298343033s to restartPrimaryControlPlane
	I0704 00:10:35.054008   62327 kubeadm.go:393] duration metric: took 9.354393795s to StartCluster
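
ops.go above reads the apiserver's oom_adj (here: -16) by shelling out to pgrep and cat. The same lookup using only /proc, as a small stand-alone sketch:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// apiserverOOMAdj walks /proc for a process whose command line contains
// kube-apiserver and returns the contents of its oom_adj file.
func apiserverOOMAdj() (string, error) {
	matches, err := filepath.Glob("/proc/[0-9]*/cmdline")
	if err != nil {
		return "", err
	}
	for _, p := range matches {
		cmdline, err := os.ReadFile(p)
		if err != nil {
			continue // process may have exited between the glob and the read
		}
		if !strings.Contains(string(cmdline), "kube-apiserver") {
			continue
		}
		adj, err := os.ReadFile(filepath.Join(filepath.Dir(p), "oom_adj"))
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(adj)), nil
	}
	return "", fmt.Errorf("no kube-apiserver process found")
}

func main() {
	fmt.Println(apiserverOOMAdj())
}
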
	I0704 00:10:35.054028   62327 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:35.054114   62327 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:10:35.055656   62327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:35.056019   62327 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:10:35.056104   62327 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:10:35.056189   62327 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-687975"
	I0704 00:10:35.056217   62327 config.go:182] Loaded profile config "embed-certs-687975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:10:35.056226   62327 addons.go:69] Setting default-storageclass=true in profile "embed-certs-687975"
	I0704 00:10:35.056234   62327 addons.go:69] Setting metrics-server=true in profile "embed-certs-687975"
	I0704 00:10:35.056256   62327 addons.go:234] Setting addon metrics-server=true in "embed-certs-687975"
	I0704 00:10:35.056257   62327 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-687975"
	W0704 00:10:35.056268   62327 addons.go:243] addon metrics-server should already be in state true
	I0704 00:10:35.056302   62327 host.go:66] Checking if "embed-certs-687975" exists ...
	I0704 00:10:35.056229   62327 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-687975"
	W0704 00:10:35.056354   62327 addons.go:243] addon storage-provisioner should already be in state true
	I0704 00:10:35.056383   62327 host.go:66] Checking if "embed-certs-687975" exists ...
	I0704 00:10:35.056630   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.056653   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.056661   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.056689   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.056702   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.056729   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.058101   62327 out.go:177] * Verifying Kubernetes components...
	I0704 00:10:35.059927   62327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:35.072266   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43485
	I0704 00:10:35.072542   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0704 00:10:35.072699   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.072965   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.073191   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.073229   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.073455   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.073479   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.073608   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.073799   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.073838   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.074311   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.074344   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.076024   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44145
	I0704 00:10:35.076434   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.076866   62327 addons.go:234] Setting addon default-storageclass=true in "embed-certs-687975"
	W0704 00:10:35.076884   62327 addons.go:243] addon default-storageclass should already be in state true
	I0704 00:10:35.076905   62327 host.go:66] Checking if "embed-certs-687975" exists ...
	I0704 00:10:35.076965   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.076997   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.077241   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.077273   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.077376   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.077901   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.077951   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.091096   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I0704 00:10:35.091624   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.092231   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.092260   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.092643   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.092738   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0704 00:10:35.092820   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.093059   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.093555   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.093577   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.093913   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.094537   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:35.094743   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.094764   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.096976   62327 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:35.098487   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I0704 00:10:35.098597   62327 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:10:35.098614   62327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:10:35.098632   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:35.098888   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.099368   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.099386   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.099749   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.100200   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.102539   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:35.103028   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.103608   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:35.103637   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.103791   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:35.104008   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:35.104177   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:35.104316   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:35.104776   62327 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0704 00:10:35.106239   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0704 00:10:35.106260   62327 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0704 00:10:35.106313   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:35.109978   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.110458   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:35.110491   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.110684   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:35.110925   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:35.111025   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45051
	I0704 00:10:35.111091   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:35.111227   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:35.111488   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.111977   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.112005   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.112295   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.112482   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.113980   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:35.114185   62327 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:10:35.114203   62327 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:10:35.114222   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:35.117197   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.117777   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:35.117823   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.118056   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:35.118258   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:35.118426   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:35.118562   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:35.242007   62327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:10:35.267240   62327 node_ready.go:35] waiting up to 6m0s for node "embed-certs-687975" to be "Ready" ...
	I0704 00:10:35.326233   62327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:10:35.329804   62327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:10:35.431863   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0704 00:10:35.431908   62327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0704 00:10:35.490138   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0704 00:10:35.490165   62327 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0704 00:10:35.547996   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:10:35.548021   62327 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0704 00:10:35.578762   62327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:10:36.321372   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321411   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.321432   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321448   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.321794   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.321808   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.321812   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.321823   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.321825   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.321834   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321833   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.321841   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321854   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.321842   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.322111   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.322142   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.322153   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.322155   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.322182   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.322191   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.329094   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.329117   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.329531   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.329608   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.329625   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.424191   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.424216   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.424645   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.424676   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.424692   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.424707   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.424719   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.424987   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.425000   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.425012   62327 addons.go:475] Verifying addon metrics-server=true in "embed-certs-687975"
	I0704 00:10:36.427165   62327 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0704 00:10:37.761464   62905 start.go:364] duration metric: took 3m35.181652384s to acquireMachinesLock for "default-k8s-diff-port-995404"
	I0704 00:10:37.761548   62905 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:10:37.761575   62905 fix.go:54] fixHost starting: 
	I0704 00:10:37.761919   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:37.761952   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:37.779708   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0704 00:10:37.780347   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:37.780870   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:10:37.780895   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:37.781249   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:37.781513   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:37.781688   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:10:37.783447   62905 fix.go:112] recreateIfNeeded on default-k8s-diff-port-995404: state=Stopped err=<nil>
	I0704 00:10:37.783495   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	W0704 00:10:37.783674   62905 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:10:37.785628   62905 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-995404" ...
	I0704 00:10:36.373099   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.373583   62670 main.go:141] libmachine: (old-k8s-version-979033) Found IP for machine: 192.168.72.59
	I0704 00:10:36.373615   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has current primary IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.373628   62670 main.go:141] libmachine: (old-k8s-version-979033) Reserving static IP address...
	I0704 00:10:36.374030   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "old-k8s-version-979033", mac: "52:54:00:af:98:c8", ip: "192.168.72.59"} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.374068   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | skip adding static IP to network mk-old-k8s-version-979033 - found existing host DHCP lease matching {name: "old-k8s-version-979033", mac: "52:54:00:af:98:c8", ip: "192.168.72.59"}
	I0704 00:10:36.374082   62670 main.go:141] libmachine: (old-k8s-version-979033) Reserved static IP address: 192.168.72.59
	I0704 00:10:36.374113   62670 main.go:141] libmachine: (old-k8s-version-979033) Waiting for SSH to be available...
	I0704 00:10:36.374130   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Getting to WaitForSSH function...
	I0704 00:10:36.376363   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.376711   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.376747   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.376945   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using SSH client type: external
	I0704 00:10:36.376975   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa (-rw-------)
	I0704 00:10:36.377011   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:10:36.377024   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | About to run SSH command:
	I0704 00:10:36.377062   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | exit 0
	I0704 00:10:36.504300   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | SSH cmd err, output: <nil>: 
	I0704 00:10:36.504681   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetConfigRaw
	I0704 00:10:36.505301   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:36.507826   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.508270   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.508297   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.508605   62670 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/config.json ...
	I0704 00:10:36.508844   62670 machine.go:94] provisionDockerMachine start ...
	I0704 00:10:36.508865   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:36.509148   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.511475   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.511792   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.511815   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.512017   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.512205   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.512360   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.512502   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.512667   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.512836   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.512846   62670 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:10:36.616643   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:10:36.616673   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.616962   62670 buildroot.go:166] provisioning hostname "old-k8s-version-979033"
	I0704 00:10:36.616992   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.617185   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.620028   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.620368   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.620387   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.620727   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.620923   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.621106   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.621240   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.621435   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.621601   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.621613   62670 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-979033 && echo "old-k8s-version-979033" | sudo tee /etc/hostname
	I0704 00:10:36.739589   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-979033
	
	I0704 00:10:36.739611   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.742386   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.742840   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.742867   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.743119   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.743348   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.743578   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.743745   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.743925   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.744142   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.744169   62670 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-979033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-979033/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-979033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:10:36.861561   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:10:36.861592   62670 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:10:36.861621   62670 buildroot.go:174] setting up certificates
	I0704 00:10:36.861632   62670 provision.go:84] configureAuth start
	I0704 00:10:36.861644   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.861928   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:36.864490   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.864975   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.865039   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.865137   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.867752   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.868268   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.868302   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.868483   62670 provision.go:143] copyHostCerts
	I0704 00:10:36.868547   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:10:36.868560   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:10:36.868613   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:10:36.868747   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:10:36.868756   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:10:36.868783   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:10:36.868840   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:10:36.868846   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:10:36.868863   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:10:36.868913   62670 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-979033 san=[127.0.0.1 192.168.72.59 localhost minikube old-k8s-version-979033]
	I0704 00:10:37.072741   62670 provision.go:177] copyRemoteCerts
	I0704 00:10:37.072795   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:10:37.072821   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.075592   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.075937   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.075968   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.076159   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.076362   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.076541   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.076671   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.162730   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:10:37.194232   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0704 00:10:37.220644   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0704 00:10:37.246298   62670 provision.go:87] duration metric: took 384.653259ms to configureAuth
	I0704 00:10:37.246327   62670 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:10:37.246529   62670 config.go:182] Loaded profile config "old-k8s-version-979033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0704 00:10:37.246594   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.249101   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.249491   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.249523   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.249774   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.249960   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.250140   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.250350   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.250591   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:37.250831   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:37.250856   62670 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:10:37.522551   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:10:37.522602   62670 machine.go:97] duration metric: took 1.013718943s to provisionDockerMachine
	I0704 00:10:37.522616   62670 start.go:293] postStartSetup for "old-k8s-version-979033" (driver="kvm2")
	I0704 00:10:37.522626   62670 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:10:37.522642   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.522965   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:10:37.522992   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.525421   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.525718   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.525745   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.525988   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.526250   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.526428   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.526668   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.607305   62670 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:10:37.612104   62670 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:10:37.612128   62670 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:10:37.612222   62670 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:10:37.612326   62670 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:10:37.612436   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:10:37.623597   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:37.650275   62670 start.go:296] duration metric: took 127.644599ms for postStartSetup
	I0704 00:10:37.650314   62670 fix.go:56] duration metric: took 18.50923577s for fixHost
	I0704 00:10:37.650333   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.652926   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.653270   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.653298   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.653433   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.653650   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.653836   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.653975   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.654124   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:37.654344   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:37.654356   62670 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:10:37.761309   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051837.729680185
	
	I0704 00:10:37.761333   62670 fix.go:216] guest clock: 1720051837.729680185
	I0704 00:10:37.761342   62670 fix.go:229] Guest: 2024-07-04 00:10:37.729680185 +0000 UTC Remote: 2024-07-04 00:10:37.650317632 +0000 UTC m=+244.428517044 (delta=79.362553ms)
	I0704 00:10:37.761363   62670 fix.go:200] guest clock delta is within tolerance: 79.362553ms
	I0704 00:10:37.761369   62670 start.go:83] releasing machines lock for "old-k8s-version-979033", held for 18.620323739s
	I0704 00:10:37.761421   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.761677   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:37.764522   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.764994   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.765019   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.765178   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.765760   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.765951   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.766036   62670 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:10:37.766085   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.766218   62670 ssh_runner.go:195] Run: cat /version.json
	I0704 00:10:37.766244   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.769092   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769468   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769854   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.769900   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769927   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.769944   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.770066   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.770286   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.770329   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.770443   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.770531   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.770587   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.770720   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.770832   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.873138   62670 ssh_runner.go:195] Run: systemctl --version
	I0704 00:10:37.879804   62670 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:10:38.028009   62670 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:10:38.034962   62670 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:10:38.035030   62670 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:10:38.057475   62670 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:10:38.057511   62670 start.go:494] detecting cgroup driver to use...
	I0704 00:10:38.057579   62670 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:10:38.074199   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:10:38.092880   62670 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:10:38.092932   62670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:10:38.106896   62670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:10:38.120887   62670 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:10:38.250139   62670 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:10:36.428467   62327 addons.go:510] duration metric: took 1.372366453s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0704 00:10:37.270816   62327 node_ready.go:53] node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:38.405228   62670 docker.go:233] disabling docker service ...
	I0704 00:10:38.405288   62670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:10:38.421706   62670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:10:38.438033   62670 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:10:38.586777   62670 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:10:38.721090   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:10:38.736951   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:10:38.757708   62670 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0704 00:10:38.757782   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.769723   62670 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:10:38.769796   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.783408   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.796103   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.809130   62670 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:10:38.822325   62670 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:10:38.837968   62670 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:10:38.838038   62670 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:10:38.854343   62670 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:10:38.866475   62670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:39.012506   62670 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:10:39.177203   62670 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:10:39.177289   62670 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:10:39.182557   62670 start.go:562] Will wait 60s for crictl version
	I0704 00:10:39.182643   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:39.187153   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:10:39.228774   62670 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:10:39.228851   62670 ssh_runner.go:195] Run: crio --version
	I0704 00:10:39.261929   62670 ssh_runner.go:195] Run: crio --version
	I0704 00:10:39.295133   62670 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0704 00:10:37.787100   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Start
	I0704 00:10:37.787281   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Ensuring networks are active...
	I0704 00:10:37.788053   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Ensuring network default is active
	I0704 00:10:37.788456   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Ensuring network mk-default-k8s-diff-port-995404 is active
	I0704 00:10:37.788965   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Getting domain xml...
	I0704 00:10:37.789842   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Creating domain...
	I0704 00:10:39.119468   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting to get IP...
	I0704 00:10:39.120490   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.121038   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.121123   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:39.121028   63853 retry.go:31] will retry after 205.838778ms: waiting for machine to come up
	I0704 00:10:39.328771   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.329372   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.329402   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:39.329310   63853 retry.go:31] will retry after 383.540497ms: waiting for machine to come up
	I0704 00:10:39.714729   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.715299   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.715333   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:39.715239   63853 retry.go:31] will retry after 349.888862ms: waiting for machine to come up
	I0704 00:10:40.067018   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.067629   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.067658   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:40.067518   63853 retry.go:31] will retry after 560.174181ms: waiting for machine to come up
	I0704 00:10:40.629108   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.629664   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.629700   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:40.629568   63853 retry.go:31] will retry after 655.876993ms: waiting for machine to come up
	I0704 00:10:41.287664   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:41.288213   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:41.288241   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:41.288163   63853 retry.go:31] will retry after 935.211949ms: waiting for machine to come up
	I0704 00:10:42.225062   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:42.225501   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:42.225530   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:42.225448   63853 retry.go:31] will retry after 1.176205334s: waiting for machine to come up
	I0704 00:10:39.296618   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:39.299265   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:39.299620   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:39.299648   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:39.299857   62670 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0704 00:10:39.304490   62670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:39.318619   62670 kubeadm.go:877] updating cluster {Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:10:39.318749   62670 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0704 00:10:39.318796   62670 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:39.372343   62670 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0704 00:10:39.372406   62670 ssh_runner.go:195] Run: which lz4
	I0704 00:10:39.376979   62670 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0704 00:10:39.382096   62670 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:10:39.382153   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0704 00:10:41.321459   62670 crio.go:462] duration metric: took 1.944522271s to copy over tarball
	I0704 00:10:41.321541   62670 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
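This is the preload fast path: if /preloaded.tar.lz4 is missing on the guest, the cached tarball is copied over and unpacked into /var with lz4 so the container runtime starts with the images already present. A rough Go sketch of the same stat / copy / extract sequence (commands run locally here for illustration; minikube executes them on the VM via its ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

// restorePreload mirrors the stat / scp / tar steps from the log: skip the
// transfer when the tarball already exists, otherwise copy it over and
// unpack it into /var with lz4 decompression.
func restorePreload(cachedTarball string) error {
	if exec.Command("stat", "/preloaded.tar.lz4").Run() == nil {
		return nil // already present, nothing to do
	}
	if err := exec.Command("sudo", "cp", cachedTarball, "/preloaded.tar.lz4").Run(); err != nil {
		return fmt.Errorf("copying preload tarball: %w", err)
	}
	return exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").Run()
}

func main() {
	if err := restorePreload("preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"); err != nil {
		fmt.Println("preload restore failed:", err)
	}
}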
	I0704 00:10:39.272051   62327 node_ready.go:53] node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:41.776436   62327 node_ready.go:53] node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:42.272096   62327 node_ready.go:49] node "embed-certs-687975" has status "Ready":"True"
	I0704 00:10:42.272126   62327 node_ready.go:38] duration metric: took 7.004853642s for node "embed-certs-687975" to be "Ready" ...
	I0704 00:10:42.272139   62327 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:10:42.278133   62327 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.284704   62327 pod_ready.go:92] pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:42.284730   62327 pod_ready.go:81] duration metric: took 6.568077ms for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.284740   62327 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.292234   62327 pod_ready.go:92] pod "etcd-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:42.292263   62327 pod_ready.go:81] duration metric: took 7.515519ms for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.292276   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
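Each pod_ready line above is one poll of a system pod's Ready condition through the apiserver. A condensed client-go sketch of that check (clientset construction is omitted; the function name is illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the named pod has condition Ready=True,
// the same check behind the pod_ready log lines above.
func isPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Real usage needs a kubeconfig-backed clientset, e.g.
	// isPodReady(ctx, clientset, "kube-system", "etcd-embed-certs-687975").
	fmt.Println("see isPodReady above")
}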
	I0704 00:10:43.403633   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:43.404251   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:43.404302   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:43.404180   63853 retry.go:31] will retry after 1.24046978s: waiting for machine to come up
	I0704 00:10:44.646709   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:44.647208   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:44.647234   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:44.647165   63853 retry.go:31] will retry after 1.631352494s: waiting for machine to come up
	I0704 00:10:46.280048   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:46.280543   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:46.280574   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:46.280492   63853 retry.go:31] will retry after 1.855805317s: waiting for machine to come up
	I0704 00:10:44.545333   62670 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.223758075s)
	I0704 00:10:44.545366   62670 crio.go:469] duration metric: took 3.223876515s to extract the tarball
	I0704 00:10:44.545404   62670 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:10:44.589369   62670 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:44.625017   62670 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0704 00:10:44.625055   62670 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0704 00:10:44.625143   62670 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:44.625161   62670 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.625191   62670 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:44.625372   62670 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.625393   62670 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.625146   62670 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.625223   62670 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.625700   62670 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0704 00:10:44.627479   62670 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.627544   62670 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.627580   62670 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:44.627586   62670 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0704 00:10:44.627589   62670 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.627580   62670 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.627641   62670 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:44.627665   62670 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.773014   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.821672   62670 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0704 00:10:44.821726   62670 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.821788   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.826460   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.841857   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.870213   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0704 00:10:44.895356   62670 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0704 00:10:44.895414   62670 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.895466   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.897160   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0704 00:10:44.901356   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.964305   62670 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0704 00:10:44.964356   62670 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0704 00:10:44.964404   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.964395   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0704 00:10:44.969048   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0704 00:10:44.982913   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.985558   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.990064   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.993167   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.015558   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0704 00:10:45.092189   62670 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0704 00:10:45.092237   62670 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:45.092309   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.104690   62670 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0704 00:10:45.104733   62670 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:45.104795   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.130208   62670 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0704 00:10:45.130254   62670 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.130271   62670 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0704 00:10:45.130295   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.130337   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:45.130297   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:45.130298   62670 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:45.130442   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.181491   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0704 00:10:45.181583   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.181598   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0704 00:10:45.181666   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:45.234459   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0704 00:10:45.234563   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0704 00:10:45.533133   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:45.680954   62670 cache_images.go:92] duration metric: took 1.055880702s to LoadCachedImages
	W0704 00:10:45.681039   62670 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
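LoadCachedImages compares each required image's ID in the container runtime (sudo podman image inspect) against the expected hash, removes mismatched copies with crictl rmi, and then tries to load the cached tarballs; here the v1.20.0 tarballs are absent from the local cache, so the step ends with the warning above and the images are left to be pulled normally later. A rough Go sketch of the check-and-remove half (shelling out as the log does; helper names are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the runtime's copy of img differs from wantID,
// mirroring the `sudo podman image inspect --format {{.Id}}` checks in the log.
func needsTransfer(img, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", img).Output()
	if err != nil {
		return true // image not present in the runtime at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

// removeImage drops a stale image so a cached copy could be loaded in its place.
func removeImage(img string) error {
	return exec.Command("sudo", "crictl", "rmi", img).Run()
}

func main() {
	img := "registry.k8s.io/kube-apiserver:v1.20.0"
	if needsTransfer(img, "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99") {
		fmt.Println(img, "needs transfer; rmi err:", removeImage(img))
	}
}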
	I0704 00:10:45.681053   62670 kubeadm.go:928] updating node { 192.168.72.59 8443 v1.20.0 crio true true} ...
	I0704 00:10:45.681176   62670 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-979033 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:10:45.681268   62670 ssh_runner.go:195] Run: crio config
	I0704 00:10:45.734964   62670 cni.go:84] Creating CNI manager for ""
	I0704 00:10:45.734992   62670 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:10:45.735009   62670 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:10:45.735034   62670 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.59 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-979033 NodeName:old-k8s-version-979033 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0704 00:10:45.735206   62670 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-979033"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.59"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:10:45.735287   62670 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0704 00:10:45.747614   62670 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:10:45.747700   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:10:45.759063   62670 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0704 00:10:45.778439   62670 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:10:45.798877   62670 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0704 00:10:45.820513   62670 ssh_runner.go:195] Run: grep 192.168.72.59	control-plane.minikube.internal$ /etc/hosts
	I0704 00:10:45.825346   62670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.59	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:45.839720   62670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:45.957373   62670 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:10:45.975621   62670 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033 for IP: 192.168.72.59
	I0704 00:10:45.975645   62670 certs.go:194] generating shared ca certs ...
	I0704 00:10:45.975671   62670 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:45.975845   62670 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:10:45.975940   62670 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:10:45.975956   62670 certs.go:256] generating profile certs ...
	I0704 00:10:45.976086   62670 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.key
	I0704 00:10:45.976184   62670 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key.03500654
	I0704 00:10:45.976236   62670 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key
	I0704 00:10:45.976376   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:10:45.976416   62670 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:10:45.976430   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:10:45.976468   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:10:45.976506   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:10:45.976541   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:10:45.976601   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:45.977480   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:10:46.016391   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:10:46.062987   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:10:46.103769   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:10:46.143109   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0704 00:10:46.193832   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:10:46.223781   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:10:46.263822   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0704 00:10:46.298657   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:10:46.325454   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:10:46.351804   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:10:46.379279   62670 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:10:46.397706   62670 ssh_runner.go:195] Run: openssl version
	I0704 00:10:46.404638   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:10:46.416778   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.422402   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.422475   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.428803   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:10:46.441082   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:10:46.453211   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.458313   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.458383   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.464706   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:10:46.476888   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:10:46.489083   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.494780   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.494856   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.501321   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:10:46.513595   62670 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:10:46.518722   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:10:46.525758   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:10:46.532590   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:10:46.540129   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:10:46.547113   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:10:46.553840   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
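`openssl x509 -checkend 86400` exits zero only if the certificate is still valid 24 hours from now, which is how minikube decides the existing control-plane certs can be reused. The equivalent check in Go with crypto/x509 (the path in main is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid d from now,
// equivalent to `openssl x509 -checkend <seconds>`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM data in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}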
	I0704 00:10:46.560502   62670 kubeadm.go:391] StartCluster: {Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:10:46.560590   62670 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:10:46.560656   62670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:46.605334   62670 cri.go:89] found id: ""
	I0704 00:10:46.605411   62670 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:10:46.619333   62670 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:10:46.619356   62670 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:10:46.619362   62670 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:10:46.619407   62670 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:10:46.631203   62670 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:10:46.632519   62670 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-979033" does not appear in /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:10:46.633417   62670 kubeconfig.go:62] /home/jenkins/minikube-integration/18998-9396/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-979033" cluster setting kubeconfig missing "old-k8s-version-979033" context setting]
	I0704 00:10:46.634783   62670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:46.637143   62670 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:10:46.649250   62670 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.59
	I0704 00:10:46.649285   62670 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:10:46.649297   62670 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:10:46.649351   62670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:46.691240   62670 cri.go:89] found id: ""
	I0704 00:10:46.691317   62670 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:10:46.710687   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:10:46.721650   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:10:46.721675   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:10:46.721728   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:10:46.731444   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:10:46.731517   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:10:46.741556   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:10:46.751544   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:10:46.751600   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:10:46.764187   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:10:46.775160   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:10:46.775224   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:10:46.785686   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:10:46.795475   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:10:46.795545   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:10:46.806960   62670 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:10:46.818355   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:46.984379   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:47.639953   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:47.883263   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:48.001200   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:48.116034   62670 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:10:48.116121   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:45.284973   62327 pod_ready.go:102] pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:46.800145   62327 pod_ready.go:92] pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.800170   62327 pod_ready.go:81] duration metric: took 4.507886037s for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.800179   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.805577   62327 pod_ready.go:92] pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.805599   62327 pod_ready.go:81] duration metric: took 5.413826ms for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.805611   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.811066   62327 pod_ready.go:92] pod "kube-proxy-9phtm" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.811085   62327 pod_ready.go:81] duration metric: took 5.469666ms for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.811094   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.815670   62327 pod_ready.go:92] pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.815690   62327 pod_ready.go:81] duration metric: took 4.589606ms for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.815700   62327 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:48.822325   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:48.137949   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:48.138359   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:48.138387   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:48.138307   63853 retry.go:31] will retry after 2.765241886s: waiting for machine to come up
	I0704 00:10:50.905039   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:50.905688   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:50.905724   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:50.905624   63853 retry.go:31] will retry after 3.145956682s: waiting for machine to come up
	I0704 00:10:48.617078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:49.116898   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:49.617127   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.116442   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.617078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:51.117096   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:51.617176   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:52.116333   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:52.616675   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:53.116408   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.822990   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:52.823438   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:54.053147   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:54.053593   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:54.053630   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:54.053544   63853 retry.go:31] will retry after 4.352124904s: waiting for machine to come up
	I0704 00:10:53.616873   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:54.116661   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:54.616248   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:55.116316   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:55.616460   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:56.116311   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:56.616502   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:57.116856   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:57.616948   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:58.117055   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
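The block of identical `pgrep -xnf kube-apiserver.*minikube.*` runs is a roughly half-second poll for the apiserver process to appear after the kubeadm init phases. A minimal Go sketch of that wait loop (interval and timeout values are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process exists
// or the timeout elapses, like the repeated runs in the log above.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
}

func main() {
	fmt.Println(waitForAPIServerProcess(30 * time.Second))
}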
	I0704 00:10:54.829173   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:57.322196   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:59.628966   62043 start.go:364] duration metric: took 56.236390336s to acquireMachinesLock for "no-preload-317739"
	I0704 00:10:59.629020   62043 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:10:59.629029   62043 fix.go:54] fixHost starting: 
	I0704 00:10:59.629441   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:59.629483   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:59.649272   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0704 00:10:59.649745   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:59.650216   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:10:59.650245   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:59.650615   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:59.650807   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:10:59.650944   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:10:59.652724   62043 fix.go:112] recreateIfNeeded on no-preload-317739: state=Stopped err=<nil>
	I0704 00:10:59.652750   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	W0704 00:10:59.652901   62043 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:10:59.655010   62043 out.go:177] * Restarting existing kvm2 VM for "no-preload-317739" ...
	I0704 00:10:59.656335   62043 main.go:141] libmachine: (no-preload-317739) Calling .Start
	I0704 00:10:59.656519   62043 main.go:141] libmachine: (no-preload-317739) Ensuring networks are active...
	I0704 00:10:59.657343   62043 main.go:141] libmachine: (no-preload-317739) Ensuring network default is active
	I0704 00:10:59.657714   62043 main.go:141] libmachine: (no-preload-317739) Ensuring network mk-no-preload-317739 is active
	I0704 00:10:59.658209   62043 main.go:141] libmachine: (no-preload-317739) Getting domain xml...
	I0704 00:10:59.658812   62043 main.go:141] libmachine: (no-preload-317739) Creating domain...
	I0704 00:10:58.407312   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.407865   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Found IP for machine: 192.168.50.164
	I0704 00:10:58.407924   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has current primary IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.407935   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Reserving static IP address...
	I0704 00:10:58.408356   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Reserved static IP address: 192.168.50.164
	I0704 00:10:58.408378   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for SSH to be available...
	I0704 00:10:58.408396   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-995404", mac: "52:54:00:ea:f6:7c", ip: "192.168.50.164"} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.408414   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | skip adding static IP to network mk-default-k8s-diff-port-995404 - found existing host DHCP lease matching {name: "default-k8s-diff-port-995404", mac: "52:54:00:ea:f6:7c", ip: "192.168.50.164"}
	I0704 00:10:58.408423   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Getting to WaitForSSH function...
	I0704 00:10:58.410737   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.411074   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.411103   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.411308   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Using SSH client type: external
	I0704 00:10:58.411344   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa (-rw-------)
	I0704 00:10:58.411384   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:10:58.411425   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | About to run SSH command:
	I0704 00:10:58.411445   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | exit 0
	I0704 00:10:58.532351   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | SSH cmd err, output: <nil>: 
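WaitForSSH shells out to the system ssh binary with host-key checking disabled and runs `exit 0` until the command finally succeeds, which is the empty "SSH cmd err, output: <nil>" line above. A hedged Go sketch of a single such probe (options trimmed from the full list in the log):

package main

import (
	"fmt"
	"os/exec"
)

// sshProbe runs `exit 0` on the guest through the external ssh client, the
// same probe WaitForSSH repeats until the machine answers.
func sshProbe(addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	err := sshProbe("192.168.50.164", "/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa")
	fmt.Println("probe result:", err)
}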
	I0704 00:10:58.532719   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetConfigRaw
	I0704 00:10:58.533366   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:10:58.536176   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.536613   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.536640   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.536886   62905 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/config.json ...
	I0704 00:10:58.537129   62905 machine.go:94] provisionDockerMachine start ...
	I0704 00:10:58.537149   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:58.537389   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.539581   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.539946   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.539976   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.540099   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.540327   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.540549   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.540785   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.540976   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:58.541155   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:58.541166   62905 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:10:58.644667   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:10:58.644716   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetMachineName
	I0704 00:10:58.644986   62905 buildroot.go:166] provisioning hostname "default-k8s-diff-port-995404"
	I0704 00:10:58.645012   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetMachineName
	I0704 00:10:58.645256   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.648091   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.648519   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.648549   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.648691   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.648975   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.649174   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.649393   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.649608   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:58.649831   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:58.649857   62905 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-995404 && echo "default-k8s-diff-port-995404" | sudo tee /etc/hostname
	I0704 00:10:58.765130   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-995404
	
	I0704 00:10:58.765164   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.768571   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.768933   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.768961   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.769127   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.769343   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.769516   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.769675   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.769843   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:58.770014   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:58.770030   62905 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-995404' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-995404/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-995404' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:10:58.877852   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:10:58.877885   62905 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:10:58.877942   62905 buildroot.go:174] setting up certificates
	I0704 00:10:58.877955   62905 provision.go:84] configureAuth start
	I0704 00:10:58.877968   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetMachineName
	I0704 00:10:58.878318   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:10:58.880988   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.881321   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.881349   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.881516   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.883893   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.884213   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.884237   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.884398   62905 provision.go:143] copyHostCerts
	I0704 00:10:58.884459   62905 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:10:58.884468   62905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:10:58.884523   62905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:10:58.884628   62905 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:10:58.884639   62905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:10:58.884672   62905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:10:58.884747   62905 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:10:58.884757   62905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:10:58.884782   62905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:10:58.884838   62905 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-995404 san=[127.0.0.1 192.168.50.164 default-k8s-diff-port-995404 localhost minikube]
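The server certificate above is generated inside minikube's Go code; purely as an illustration, an equivalent openssl flow would look like the sketch below (file names and the 365-day validity are assumptions, while the org and SAN list are the ones shown in the log line above):

  openssl genrsa -out server-key.pem 2048
  openssl req -new -key server-key.pem -out server.csr \
    -subj "/O=jenkins.default-k8s-diff-port-995404"
  openssl x509 -req -in server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem \
    -CAcreateserial -days 365 -out server.pem \
    -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.50.164,DNS:default-k8s-diff-port-995404,DNS:localhost,DNS:minikube")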
	I0704 00:10:58.960337   62905 provision.go:177] copyRemoteCerts
	I0704 00:10:58.960408   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:10:58.960442   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.962980   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.963389   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.963416   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.963585   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.963754   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.963905   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.964040   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.042670   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0704 00:10:59.073047   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:10:59.100579   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0704 00:10:59.127978   62905 provision.go:87] duration metric: took 250.007645ms to configureAuth
	I0704 00:10:59.128006   62905 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:10:59.128261   62905 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:10:59.128363   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.131470   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.131852   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.131906   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.132130   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.132405   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.132598   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.132771   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.132969   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:59.133176   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:59.133197   62905 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:10:59.393756   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:10:59.393791   62905 machine.go:97] duration metric: took 856.647704ms to provisionDockerMachine
	I0704 00:10:59.393808   62905 start.go:293] postStartSetup for "default-k8s-diff-port-995404" (driver="kvm2")
	I0704 00:10:59.393822   62905 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:10:59.393845   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.394143   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:10:59.394170   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.396996   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.397335   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.397366   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.397556   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.397768   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.397950   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.398094   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.479476   62905 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:10:59.484191   62905 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:10:59.484220   62905 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:10:59.484291   62905 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:10:59.484395   62905 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:10:59.484540   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:10:59.495504   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:59.520952   62905 start.go:296] duration metric: took 127.128284ms for postStartSetup
	I0704 00:10:59.521006   62905 fix.go:56] duration metric: took 21.75944045s for fixHost
	I0704 00:10:59.521029   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.523896   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.524210   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.524243   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.524360   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.524586   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.524777   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.524975   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.525166   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:59.525322   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:59.525339   62905 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:10:59.628816   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051859.612598562
	
	I0704 00:10:59.628848   62905 fix.go:216] guest clock: 1720051859.612598562
	I0704 00:10:59.628857   62905 fix.go:229] Guest: 2024-07-04 00:10:59.612598562 +0000 UTC Remote: 2024-07-04 00:10:59.52101038 +0000 UTC m=+237.085876440 (delta=91.588182ms)
	I0704 00:10:59.628881   62905 fix.go:200] guest clock delta is within tolerance: 91.588182ms
	I0704 00:10:59.628887   62905 start.go:83] releasing machines lock for "default-k8s-diff-port-995404", held for 21.867375782s
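The "date +%!s(MISSING).%!N(MISSING)" text above is a logging artifact: the command actually run over SSH is "date +%s.%N", and the % verbs are re-interpreted by Go's fmt when the command line is logged. The fix.go lines then compare that guest timestamp with the host clock and accept the drift if it is within tolerance; a minimal shell sketch of the same check (host address taken from the log, no tolerance comparison shown):

  guest=$(ssh docker@192.168.50.164 'date +%s.%N')
  host=$(date +%s.%N)
  delta=$(echo "$host - $guest" | bc)
  echo "guest clock delta: ${delta}s"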
	I0704 00:10:59.628917   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.629243   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:10:59.632256   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.632628   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.632656   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.632816   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.633343   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.633561   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.633655   62905 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:10:59.633693   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.633774   62905 ssh_runner.go:195] Run: cat /version.json
	I0704 00:10:59.633792   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.636540   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.636660   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.636943   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.636972   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.637079   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.637097   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.637107   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.637292   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.637295   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.637491   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.637498   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.637650   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.637654   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.637779   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.713988   62905 ssh_runner.go:195] Run: systemctl --version
	I0704 00:10:59.743264   62905 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:10:59.895553   62905 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:10:59.902538   62905 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:10:59.902604   62905 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:10:59.919858   62905 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:10:59.919899   62905 start.go:494] detecting cgroup driver to use...
	I0704 00:10:59.919964   62905 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:10:59.940739   62905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:10:59.961053   62905 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:10:59.961114   62905 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:10:59.980549   62905 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:11:00.002843   62905 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:11:00.133319   62905 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:11:00.307416   62905 docker.go:233] disabling docker service ...
	I0704 00:11:00.307484   62905 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:11:00.325714   62905 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:11:00.342008   62905 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:11:00.469418   62905 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:11:00.594775   62905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:11:00.612900   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:11:00.636854   62905 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:11:00.636912   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.650940   62905 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:11:00.651007   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.664849   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.678200   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.691929   62905 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:11:00.708729   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.721874   62905 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.747189   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
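Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl set. A quick way to confirm the result on the guest (expected values shown as comments, reconstructed from the commands rather than captured from the host):

  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  #   pause_image = "registry.k8s.io/pause:3.9"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #     "net.ipv4.ip_unprivileged_port_start=0",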
	I0704 00:11:00.766255   62905 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:11:00.778139   62905 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:11:00.778208   62905 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:11:00.794170   62905 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
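The first sysctl probe fails only because the br_netfilter module is not yet loaded; after the modprobe the key exists, and IPv4 forwarding is enabled directly through procfs. The sequence, condensed (persisting the module across reboots would be an extra step not shown in this log):

  sudo modprobe br_netfilter
  sudo sysctl net.bridge.bridge-nf-call-iptables    # now resolves instead of "No such file or directory"
  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"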
	I0704 00:11:00.805772   62905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:00.945526   62905 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:11:01.095767   62905 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:11:01.095849   62905 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:11:01.101337   62905 start.go:562] Will wait 60s for crictl version
	I0704 00:11:01.101410   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:11:01.105792   62905 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:11:01.149911   62905 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:11:01.149983   62905 ssh_runner.go:195] Run: crio --version
	I0704 00:11:01.183494   62905 ssh_runner.go:195] Run: crio --version
	I0704 00:11:01.221773   62905 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:11:01.223142   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:11:01.226142   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:01.226595   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:01.226626   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:01.227009   62905 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0704 00:11:01.231704   62905 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:01.246258   62905 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0704 00:11:01.246373   62905 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:11:01.246414   62905 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:11:01.288814   62905 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:11:01.288885   62905 ssh_runner.go:195] Run: which lz4
	I0704 00:11:01.293591   62905 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0704 00:11:01.298567   62905 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:11:01.298606   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0704 00:10:58.617090   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.116577   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.617087   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:00.117110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:00.617014   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:01.117093   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:01.616271   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:02.116809   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:02.617098   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:03.117166   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.323461   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:01.324078   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:03.824174   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:00.942384   62043 main.go:141] libmachine: (no-preload-317739) Waiting to get IP...
	I0704 00:11:00.943186   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:00.943675   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:00.943756   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:00.943653   64017 retry.go:31] will retry after 249.292607ms: waiting for machine to come up
	I0704 00:11:01.194377   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:01.194895   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:01.194954   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:01.194870   64017 retry.go:31] will retry after 262.613081ms: waiting for machine to come up
	I0704 00:11:01.459428   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:01.460003   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:01.460038   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:01.459944   64017 retry.go:31] will retry after 478.141622ms: waiting for machine to come up
	I0704 00:11:01.939357   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:01.939939   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:01.939974   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:01.939898   64017 retry.go:31] will retry after 536.153389ms: waiting for machine to come up
	I0704 00:11:02.477947   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:02.478481   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:02.478506   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:02.478420   64017 retry.go:31] will retry after 673.23866ms: waiting for machine to come up
	I0704 00:11:03.153142   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:03.153668   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:03.153700   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:03.153615   64017 retry.go:31] will retry after 826.785177ms: waiting for machine to come up
	I0704 00:11:03.981781   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:03.982279   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:03.982313   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:03.982215   64017 retry.go:31] will retry after 834.05017ms: waiting for machine to come up
	I0704 00:11:04.817689   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:04.818294   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:04.818323   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:04.818249   64017 retry.go:31] will retry after 1.153846982s: waiting for machine to come up
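The interleaved no-preload-317739 lines above show minikube's retry helper polling for the VM's DHCP lease with a growing, jittered delay between attempts. A rough bash analogue of that loop (the virsh lookup is an assumption for illustration; minikube reads the lease through the libvirt API directly):

  delay=0.25
  for attempt in $(seq 1 12); do
    ip=$(sudo virsh domifaddr no-preload-317739 2>/dev/null | awk '/ipv4/ {print $4}')
    [ -n "$ip" ] && break
    sleep "$delay"
    delay=$(echo "$delay * 1.3" | bc)
  done
  echo "no-preload-317739 address: ${ip:-not found}"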
	I0704 00:11:02.979209   62905 crio.go:462] duration metric: took 1.685660087s to copy over tarball
	I0704 00:11:02.979307   62905 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:11:05.406788   62905 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.427439702s)
	I0704 00:11:05.406816   62905 crio.go:469] duration metric: took 2.427578287s to extract the tarball
	I0704 00:11:05.406823   62905 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:11:05.448710   62905 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:11:05.498336   62905 crio.go:514] all images are preloaded for cri-o runtime.
	I0704 00:11:05.498367   62905 cache_images.go:84] Images are preloaded, skipping loading
	I0704 00:11:05.498375   62905 kubeadm.go:928] updating node { 192.168.50.164 8444 v1.30.2 crio true true} ...
	I0704 00:11:05.498487   62905 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-995404 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
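In the kubelet drop-in above, the empty "ExecStart=" line is intentional: systemd requires clearing the inherited ExecStart before a drop-in may replace it with the new command line. Once the file is installed and the daemon reloaded, the merged unit can be inspected with:

  sudo systemctl daemon-reload
  systemctl cat kubelet    # shows kubelet.service plus the 10-kubeadm.conf override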
	I0704 00:11:05.498549   62905 ssh_runner.go:195] Run: crio config
	I0704 00:11:05.552676   62905 cni.go:84] Creating CNI manager for ""
	I0704 00:11:05.552706   62905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:05.552717   62905 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:11:05.552738   62905 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.164 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-995404 NodeName:default-k8s-diff-port-995404 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:11:05.552895   62905 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.164
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-995404"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
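The YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are concatenated with "---" and written to /var/tmp/minikube/kubeadm.yaml.new by the scp a few lines below; on a restart, minikube then diffs that file against the previous copy to decide whether the control plane needs reconfiguring, as seen later in this log:

  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new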
	
	I0704 00:11:05.552966   62905 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:11:05.564067   62905 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:11:05.564149   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:11:05.574991   62905 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0704 00:11:05.597644   62905 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:11:05.619456   62905 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0704 00:11:05.640655   62905 ssh_runner.go:195] Run: grep 192.168.50.164	control-plane.minikube.internal$ /etc/hosts
	I0704 00:11:05.644975   62905 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:05.659570   62905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:05.800862   62905 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:11:05.821044   62905 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404 for IP: 192.168.50.164
	I0704 00:11:05.821068   62905 certs.go:194] generating shared ca certs ...
	I0704 00:11:05.821087   62905 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:05.821258   62905 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:11:05.821312   62905 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:11:05.821324   62905 certs.go:256] generating profile certs ...
	I0704 00:11:05.821424   62905 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/client.key
	I0704 00:11:05.821496   62905 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/apiserver.key.4c35c707
	I0704 00:11:05.821547   62905 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/proxy-client.key
	I0704 00:11:05.821689   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:11:05.821729   62905 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:11:05.821741   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:11:05.821773   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:11:05.821800   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:11:05.821831   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:11:05.821893   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:11:05.822753   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:11:05.867477   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:11:05.914405   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:11:05.952321   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:11:05.989578   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0704 00:11:06.031270   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:11:06.067171   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:11:06.096850   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0704 00:11:06.127959   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:11:06.156780   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:11:06.187472   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:11:06.216078   62905 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:11:06.239490   62905 ssh_runner.go:195] Run: openssl version
	I0704 00:11:06.246358   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:11:06.259420   62905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:11:06.266320   62905 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:11:06.266394   62905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:11:06.273098   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:11:06.285864   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:11:06.298505   62905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:06.303642   62905 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:06.303734   62905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:06.310459   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:11:06.325238   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:11:06.342534   62905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:11:06.349585   62905 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:11:06.349659   62905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:11:06.358043   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
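The hex names used for the symlinks above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash values, which is how TLS libraries locate CA certificates under /etc/ssl/certs. The pattern for any one of them, using the same commands the log runs:

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"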
	I0704 00:11:06.374741   62905 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:11:06.380246   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:11:06.387593   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:11:06.394954   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:11:06.402600   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:11:06.409731   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:11:06.416688   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
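"openssl x509 -checkend N" exits non-zero when the certificate expires within the next N seconds, so the runs above simply confirm that each control-plane certificate is still valid for at least 24 hours (86400 s). For example:

  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
    && echo "still valid in 24h" \
    || echo "expires within 24h"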
	I0704 00:11:06.423435   62905 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:11:06.423559   62905 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:11:06.423620   62905 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:06.470763   62905 cri.go:89] found id: ""
	I0704 00:11:06.470846   62905 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:11:06.482587   62905 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:11:06.482611   62905 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:11:06.482617   62905 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:11:06.482667   62905 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:11:06.497553   62905 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:11:06.498625   62905 kubeconfig.go:125] found "default-k8s-diff-port-995404" server: "https://192.168.50.164:8444"
	I0704 00:11:06.500884   62905 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:11:06.514955   62905 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.164
	I0704 00:11:06.514990   62905 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:11:06.515004   62905 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:11:06.515063   62905 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:06.560079   62905 cri.go:89] found id: ""
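
Before stopping the kubelet, the restart path lists every CRI container (running or exited) whose pod belongs to kube-system; the empty `found id: ""` result means there is nothing to stop. A minimal sketch of the same query, assuming crictl is installed and the process runs with sufficient privileges; the helper name is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // kubeSystemContainerIDs returns the IDs of all containers whose pod lives in
    // the kube-system namespace, the same filter the log applies via ssh_runner.
    func kubeSystemContainerIDs() ([]string, error) {
        out, err := exec.Command("crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // empty output -> no containers found
    }

    func main() {
        ids, err := kubeSystemContainerIDs()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        fmt.Printf("found %d kube-system containers\n", len(ids))
    }
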
	I0704 00:11:06.560153   62905 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:11:06.579839   62905 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:11:06.591817   62905 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:11:06.591845   62905 kubeadm.go:156] found existing configuration files:
	
	I0704 00:11:06.591939   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0704 00:11:06.602820   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:11:06.602891   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:11:06.615114   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0704 00:11:06.626812   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:11:06.626906   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:11:06.638990   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0704 00:11:06.650344   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:11:06.650412   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:11:06.662736   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0704 00:11:06.673392   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:11:06.673468   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
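
The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint (port 8444 for this profile) and deletes any file that does not reference it, so the subsequent `kubeadm init phase kubeconfig` regenerates them. A minimal sketch of that loop under the same assumptions; the function name is illustrative:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // pruneStaleKubeconfigs removes any kubeconfig that does not point at the
    // expected control-plane endpoint, letting kubeadm rewrite it later.
    func pruneStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !bytes.Contains(data, []byte(endpoint)) {
                _ = os.Remove(f) // missing or stale: kubeadm will regenerate it
                fmt.Println("removed", f)
            }
        }
    }

    func main() {
        pruneStaleKubeconfigs("https://control-plane.minikube.internal:8444")
    }
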
	I0704 00:11:06.684908   62905 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:11:06.696008   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:06.827071   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:03.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:04.117137   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:04.616945   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:05.117085   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:05.616894   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.116767   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.616746   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:07.116615   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:07.616302   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:08.116699   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.324083   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:08.832523   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:05.974211   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:05.974953   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:05.974981   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:05.974853   64017 retry.go:31] will retry after 1.513213206s: waiting for machine to come up
	I0704 00:11:07.489878   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:07.490415   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:07.490447   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:07.490366   64017 retry.go:31] will retry after 1.861027199s: waiting for machine to come up
	I0704 00:11:09.353265   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:09.353877   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:09.353909   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:09.353788   64017 retry.go:31] will retry after 2.788986438s: waiting for machine to come up
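
The libmachine DBG lines above poll the libvirt network for the domain's DHCP lease, sleeping for a growing interval (1.5s, 1.9s, 2.8s in this run) between attempts until the machine reports an IP. A minimal sketch of that retry pattern; lookupIP stands in for the actual libvirt lease query and is purely illustrative:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP retries an IP lookup with a jittered, growing delay, roughly the
    // behaviour visible in the retry.go lines of the log.
    func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
        wait := time.Second
        for i := 0; i < attempts; i++ {
            if ip, err := lookupIP(); err == nil && ip != "" {
                return ip, nil
            }
            jitter := time.Duration(rand.Int63n(int64(wait)))
            time.Sleep(wait + jitter) // e.g. ~1.5s, then longer on each retry
            wait = wait * 3 / 2
        }
        return "", fmt.Errorf("machine did not get an IP after %d attempts", attempts)
    }

    func main() {
        ip, err := waitForIP(func() (string, error) { return "", fmt.Errorf("no lease yet") }, 5)
        fmt.Println(ip, err)
    }
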
	I0704 00:11:07.860520   62905 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.033413742s)
	I0704 00:11:07.860555   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:08.112931   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:08.199561   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
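
Rather than running a full `kubeadm init`, the restart path replays individual init phases against the generated config: certs, kubeconfig, kubelet-start, control-plane, and etcd, in that order. A minimal sketch of that sequence via os/exec, with the binary and config paths taken from the log; this is not minikube's actual bootstrapper code:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // runInitPhases replays the kubeadm phase sequence visible in the log,
    // all against the same generated kubeadm.yaml.
    func runInitPhases() error {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("/var/lib/minikube/binaries/v1.30.2/kubeadm", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                return fmt.Errorf("kubeadm %v: %w", p, err)
            }
        }
        return nil
    }

    func main() {
        if err := runInitPhases(); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
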
	I0704 00:11:08.297827   62905 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:11:08.297919   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:08.798666   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.299001   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.326939   62905 api_server.go:72] duration metric: took 1.029121669s to wait for apiserver process to appear ...
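
The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines are a simple process-existence poll at roughly 500ms intervals: pgrep exits 0 once a matching process appears. A minimal sketch of that wait loop; the function name and timeout are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep until the kube-apiserver process shows
    // up or the deadline passes, mirroring the cadence in the log.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil // exit status 0 means a matching process exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        fmt.Println(waitForAPIServerProcess(2 * time.Minute))
    }
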
	I0704 00:11:09.326980   62905 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:11:09.327006   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:09.327687   62905 api_server.go:269] stopped: https://192.168.50.164:8444/healthz: Get "https://192.168.50.164:8444/healthz": dial tcp 192.168.50.164:8444: connect: connection refused
	I0704 00:11:09.827140   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:12.356043   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:11:12.356074   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:11:12.356090   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:12.431816   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:12.431868   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:08.617011   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.116544   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.617105   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:10.117154   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:10.616678   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:11.117137   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:11.617077   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.116897   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.617090   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:13.116877   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.827129   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:12.833217   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:12.833244   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:13.327458   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:13.335182   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:13.335216   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:13.827833   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:13.833899   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 200:
	ok
	I0704 00:11:13.845708   62905 api_server.go:141] control plane version: v1.30.2
	I0704 00:11:13.845742   62905 api_server.go:131] duration metric: took 4.518754781s to wait for apiserver health ...
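
The healthz wait above shows the usual progression on a restart: connection refused while the apiserver binds, 403 because the anonymous user cannot read /healthz until the RBAC bootstrap roles exist, 500 while individual poststarthooks are still failing, and finally 200 "ok". A minimal sketch of such a polling loop; TLS verification is skipped here for brevity (a real client would trust the cluster CA), and the names are illustrative:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // 200 "ok" or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // all poststarthooks have passed
                }
                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.50.164:8444/healthz", 4*time.Minute))
    }
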
	I0704 00:11:13.845754   62905 cni.go:84] Creating CNI manager for ""
	I0704 00:11:13.845763   62905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:13.847527   62905 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:11:11.322070   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:13.325898   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:13.848990   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:11:13.866061   62905 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
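
The bridge CNI step copies a 496-byte conflist into /etc/cni/net.d; its exact contents are not reproduced in the log. As a rough illustration only, a generic bridge-plus-portmap conflist written to the same path might look like the sketch below (the JSON is not minikube's actual 1-k8s.conflist):

    package main

    import "os"

    // Illustrative bridge CNI conflist; the real file minikube generates is not
    // shown in the log, so treat this as a generic example only.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }
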
	I0704 00:11:13.895651   62905 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:11:13.907155   62905 system_pods.go:59] 8 kube-system pods found
	I0704 00:11:13.907202   62905 system_pods.go:61] "coredns-7db6d8ff4d-jmq4s" [f9725f92-7635-4111-bf63-66dbef0155b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0704 00:11:13.907214   62905 system_pods.go:61] "etcd-default-k8s-diff-port-995404" [d5065c53-cda8-4c79-9d88-de10341356a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0704 00:11:13.907225   62905 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-995404" [81395c0d-d19c-4d3d-8935-c35a9507abdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0704 00:11:13.907236   62905 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-995404" [36828d21-843b-409c-a0b3-60293cb50c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0704 00:11:13.907245   62905 system_pods.go:61] "kube-proxy-pplqq" [3b74a8c2-1e91-449d-9be9-8891459dccbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0704 00:11:13.907255   62905 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-995404" [e87cc576-7a8d-43dd-9778-b13d751976be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0704 00:11:13.907267   62905 system_pods.go:61] "metrics-server-569cc877fc-v8qw2" [d6a67fb7-5004-4c93-9023-fc470f786ae9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:11:13.907278   62905 system_pods.go:61] "storage-provisioner" [3adc3ff6-282f-4f53-879f-c73d71c76b74] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0704 00:11:13.907290   62905 system_pods.go:74] duration metric: took 11.616438ms to wait for pod list to return data ...
	I0704 00:11:13.907304   62905 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:11:13.911071   62905 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:11:13.911108   62905 node_conditions.go:123] node cpu capacity is 2
	I0704 00:11:13.911121   62905 node_conditions.go:105] duration metric: took 3.808665ms to run NodePressure ...
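
After the addon apply, the checks above list the kube-system pods and read the node's ephemeral-storage and CPU capacity. A minimal client-go sketch of the capacity read; the kubeconfig path is illustrative and this is not minikube's own verification code:

    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // printNodeCapacity lists the nodes and prints the two capacity figures the
    // log reports: ephemeral storage and CPU.
    func printNodeCapacity(kubeconfig string) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[v1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }

    func main() {
        if err := printNodeCapacity("/home/jenkins/minikube-integration/18998-9396/kubeconfig"); err != nil {
            fmt.Println(err)
        }
    }
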
	I0704 00:11:13.911142   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:14.227778   62905 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0704 00:11:14.232972   62905 kubeadm.go:733] kubelet initialised
	I0704 00:11:14.232999   62905 kubeadm.go:734] duration metric: took 5.196343ms waiting for restarted kubelet to initialise ...
	I0704 00:11:14.233008   62905 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:11:14.239587   62905 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.248503   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.248527   62905 pod_ready.go:81] duration metric: took 8.915991ms for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.248536   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.248546   62905 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.252808   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.252833   62905 pod_ready.go:81] duration metric: took 4.278735ms for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.252844   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.252850   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.257839   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.257865   62905 pod_ready.go:81] duration metric: took 5.008527ms for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.257874   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.257881   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.300453   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.300496   62905 pod_ready.go:81] duration metric: took 42.606835ms for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.300514   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.300532   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.699049   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-proxy-pplqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.699081   62905 pod_ready.go:81] duration metric: took 398.532074ms for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.699091   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-proxy-pplqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.699098   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:15.099751   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.099781   62905 pod_ready.go:81] duration metric: took 400.673785ms for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:15.099794   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.099802   62905 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:15.499381   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.499415   62905 pod_ready.go:81] duration metric: took 399.604282ms for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:15.499430   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.499440   62905 pod_ready.go:38] duration metric: took 1.266419771s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
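
The pod_ready loop above waits up to 4 minutes per system-critical pod but returns early (with the "skipping!" warning) whenever the hosting node itself is not Ready yet. A minimal client-go sketch of the per-pod Ready check, without the node gate; kubeconfig path and names are illustrative:

    package main

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(p *v1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == v1.PodReady {
                return c.Status == v1.ConditionTrue
            }
        }
        return false
    }

    // waitForPodReady polls the pod until it is Ready or the deadline passes.
    func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            p, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
            if err == nil && isPodReady(p) {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not ready after %s", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18998-9396/kubeconfig")
        if err != nil {
            fmt.Println(err)
            return
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(waitForPodReady(cs, "kube-system", "coredns-7db6d8ff4d-jmq4s", 4*time.Minute))
    }
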
	I0704 00:11:15.499472   62905 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:11:15.512486   62905 ops.go:34] apiserver oom_adj: -16
	I0704 00:11:15.512519   62905 kubeadm.go:591] duration metric: took 9.029896614s to restartPrimaryControlPlane
	I0704 00:11:15.512530   62905 kubeadm.go:393] duration metric: took 9.089103352s to StartCluster
	I0704 00:11:15.512545   62905 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:15.512620   62905 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:11:15.514491   62905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:15.514770   62905 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:11:15.514886   62905 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:11:15.514995   62905 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-995404"
	I0704 00:11:15.515051   62905 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-995404"
	I0704 00:11:15.515054   62905 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	W0704 00:11:15.515058   62905 addons.go:243] addon storage-provisioner should already be in state true
	I0704 00:11:15.515045   62905 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-995404"
	I0704 00:11:15.515098   62905 host.go:66] Checking if "default-k8s-diff-port-995404" exists ...
	I0704 00:11:15.515108   62905 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-995404"
	I0704 00:11:15.515100   62905 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-995404"
	I0704 00:11:15.515176   62905 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-995404"
	W0704 00:11:15.515196   62905 addons.go:243] addon metrics-server should already be in state true
	I0704 00:11:15.515258   62905 host.go:66] Checking if "default-k8s-diff-port-995404" exists ...
	I0704 00:11:15.515473   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.515517   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.515554   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.515521   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.515731   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.515773   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.517021   62905 out.go:177] * Verifying Kubernetes components...
	I0704 00:11:15.518682   62905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:15.532184   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0704 00:11:15.532716   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.533287   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.533318   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.533688   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.533710   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36907
	I0704 00:11:15.533894   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.534143   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.534747   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.534774   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.535162   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.535835   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.535895   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.536774   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37887
	I0704 00:11:15.537162   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.537690   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.537702   62905 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-995404"
	W0704 00:11:15.537715   62905 addons.go:243] addon default-storageclass should already be in state true
	I0704 00:11:15.537719   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.537743   62905 host.go:66] Checking if "default-k8s-diff-port-995404" exists ...
	I0704 00:11:15.538134   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.538147   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.538211   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.538756   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.538789   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.554800   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44851
	I0704 00:11:15.554820   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33429
	I0704 00:11:15.555279   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.555417   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.555988   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.556006   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.556255   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.556276   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.556445   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.556628   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.556637   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.556819   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.558057   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38911
	I0704 00:11:15.558381   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.558768   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:11:15.558842   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:11:15.558932   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.558950   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.559179   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.559587   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.559610   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.561573   62905 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:15.561578   62905 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0704 00:11:12.146246   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:12.146817   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:12.146844   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:12.146774   64017 retry.go:31] will retry after 2.705005802s: waiting for machine to come up
	I0704 00:11:14.853545   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:14.854045   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:14.854070   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:14.854001   64017 retry.go:31] will retry after 3.923203683s: waiting for machine to come up
	I0704 00:11:15.563208   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0704 00:11:15.563233   62905 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0704 00:11:15.563259   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:11:15.563282   62905 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:11:15.563297   62905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:11:15.563312   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:11:15.567358   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.567365   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.567758   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:15.567789   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.567823   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:15.567841   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.568374   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:11:15.568472   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:11:15.568596   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:11:15.568652   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:11:15.568744   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:11:15.568833   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:11:15.568853   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:11:15.568955   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:11:15.578317   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40115
	I0704 00:11:15.578737   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.579322   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.579343   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.579673   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.579864   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.582114   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:11:15.582330   62905 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:11:15.582346   62905 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:11:15.582363   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:11:15.585542   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.585917   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:15.585964   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.586130   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:11:15.586317   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:11:15.586503   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:11:15.586677   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
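
The sshutil lines above build key-based SSH clients to the node (user docker, port 22, the machine's id_rsa) for copying addon manifests and running commands. A minimal sketch with golang.org/x/crypto/ssh; host-key checking is skipped for brevity, matching the ephemeral test VMs, and the helper name is illustrative:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // dialNode opens an SSH connection to the node using key-based auth,
    // the same parameters the sshutil log lines show.
    func dialNode(ip, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        return ssh.Dial("tcp", ip+":22", cfg)
    }

    func main() {
        client, err := dialNode("192.168.50.164",
            "/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer client.Close()
        fmt.Println("connected")
    }
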
	I0704 00:11:15.713704   62905 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:11:15.734147   62905 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-995404" to be "Ready" ...
	I0704 00:11:15.837690   62905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:11:15.858615   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0704 00:11:15.858645   62905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0704 00:11:15.883792   62905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:11:15.904371   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0704 00:11:15.904394   62905 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0704 00:11:15.947164   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:11:15.947205   62905 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0704 00:11:15.976721   62905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:11:16.926851   62905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.089126041s)
	I0704 00:11:16.926885   62905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.043064078s)
	I0704 00:11:16.926909   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.926920   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.926909   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.926989   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.927261   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.927280   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.927290   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.927299   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.927338   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.927382   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.927406   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.927415   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.927423   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.927989   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.928013   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.928022   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.928040   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.928118   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.928187   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.935023   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.935043   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.935367   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.935387   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.963483   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.963508   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.963834   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.963857   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.963866   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.963898   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.964130   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.964181   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.964198   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.964220   62905 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-995404"
	I0704 00:11:16.966338   62905 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0704 00:11:16.967695   62905 addons.go:510] duration metric: took 1.45282727s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
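
Addon enablement above boils down to running the node-local kubectl against the node-local kubeconfig for each manifest copied into /etc/kubernetes/addons. A minimal sketch of that invocation pattern, with paths taken from the log; the helper name is illustrative and this is not minikube's addon manager itself:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyAddon applies one or more addon manifests with the node's bundled
    // kubectl, pointing KUBECONFIG at the node-local kubeconfig.
    func applyAddon(manifests ...string) error {
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command("/var/lib/minikube/binaries/v1.30.2/kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := applyAddon(
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        ); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
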
	I0704 00:11:13.616762   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:14.116987   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:14.616559   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.117027   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.617171   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:16.117120   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:16.616978   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:17.116571   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:17.616537   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:18.117113   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.822595   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:18.323016   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:18.782030   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.782543   62043 main.go:141] libmachine: (no-preload-317739) Found IP for machine: 192.168.61.109
	I0704 00:11:18.782568   62043 main.go:141] libmachine: (no-preload-317739) Reserving static IP address...
	I0704 00:11:18.782585   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has current primary IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.782953   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "no-preload-317739", mac: "52:54:00:2a:87:12", ip: "192.168.61.109"} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.782982   62043 main.go:141] libmachine: (no-preload-317739) DBG | skip adding static IP to network mk-no-preload-317739 - found existing host DHCP lease matching {name: "no-preload-317739", mac: "52:54:00:2a:87:12", ip: "192.168.61.109"}
	I0704 00:11:18.782996   62043 main.go:141] libmachine: (no-preload-317739) Reserved static IP address: 192.168.61.109
	I0704 00:11:18.783014   62043 main.go:141] libmachine: (no-preload-317739) Waiting for SSH to be available...
	I0704 00:11:18.783031   62043 main.go:141] libmachine: (no-preload-317739) DBG | Getting to WaitForSSH function...
	I0704 00:11:18.785230   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.785559   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.785593   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.785687   62043 main.go:141] libmachine: (no-preload-317739) DBG | Using SSH client type: external
	I0704 00:11:18.785742   62043 main.go:141] libmachine: (no-preload-317739) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa (-rw-------)
	I0704 00:11:18.785770   62043 main.go:141] libmachine: (no-preload-317739) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:11:18.785801   62043 main.go:141] libmachine: (no-preload-317739) DBG | About to run SSH command:
	I0704 00:11:18.785811   62043 main.go:141] libmachine: (no-preload-317739) DBG | exit 0
	I0704 00:11:18.908065   62043 main.go:141] libmachine: (no-preload-317739) DBG | SSH cmd err, output: <nil>: 
	I0704 00:11:18.908449   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetConfigRaw
	I0704 00:11:18.909142   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:18.911622   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.912075   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.912125   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.912371   62043 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/config.json ...
	I0704 00:11:18.912581   62043 machine.go:94] provisionDockerMachine start ...
	I0704 00:11:18.912599   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:18.912796   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:18.915233   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.915675   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.915709   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.915971   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:18.916175   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:18.916345   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:18.916488   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:18.916689   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:18.916853   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:18.916864   62043 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:11:19.024629   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:11:19.024661   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:11:19.024913   62043 buildroot.go:166] provisioning hostname "no-preload-317739"
	I0704 00:11:19.024929   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:11:19.025143   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.028262   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.028629   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.028653   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.028838   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.029042   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.029233   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.029381   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.029528   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:19.029696   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:19.029708   62043 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-317739 && echo "no-preload-317739" | sudo tee /etc/hostname
	I0704 00:11:19.148642   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-317739
	
	I0704 00:11:19.148679   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.151295   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.151766   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.151788   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.152030   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.152247   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.152438   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.152556   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.152733   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:19.152937   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:19.152953   62043 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-317739' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-317739/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-317739' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:11:19.267475   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:11:19.267510   62043 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:11:19.267541   62043 buildroot.go:174] setting up certificates
	I0704 00:11:19.267553   62043 provision.go:84] configureAuth start
	I0704 00:11:19.267566   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:11:19.267936   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:19.270884   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.271381   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.271409   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.271619   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.274267   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.274641   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.274665   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.274887   62043 provision.go:143] copyHostCerts
	I0704 00:11:19.274950   62043 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:11:19.274962   62043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:11:19.275030   62043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:11:19.275236   62043 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:11:19.275250   62043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:11:19.275284   62043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:11:19.275360   62043 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:11:19.275367   62043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:11:19.275387   62043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:11:19.275440   62043 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.no-preload-317739 san=[127.0.0.1 192.168.61.109 localhost minikube no-preload-317739]
	I0704 00:11:19.642077   62043 provision.go:177] copyRemoteCerts
	I0704 00:11:19.642133   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:11:19.642154   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.645168   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.645553   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.645582   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.645803   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.646005   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.646189   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.646338   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:19.731637   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:11:19.758538   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0704 00:11:19.783554   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0704 00:11:19.809538   62043 provision.go:87] duration metric: took 541.971127ms to configureAuth
	I0704 00:11:19.809571   62043 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:11:19.809800   62043 config.go:182] Loaded profile config "no-preload-317739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:11:19.809877   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.813528   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.814000   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.814042   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.814213   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.814451   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.814641   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.814831   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.815078   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:19.815287   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:19.815328   62043 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:11:20.098956   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:11:20.098984   62043 machine.go:97] duration metric: took 1.186389847s to provisionDockerMachine
	I0704 00:11:20.098999   62043 start.go:293] postStartSetup for "no-preload-317739" (driver="kvm2")
	I0704 00:11:20.099011   62043 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:11:20.099037   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.099367   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:11:20.099397   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.102274   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.102624   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.102650   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.102870   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.103084   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.103254   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.103394   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:20.187063   62043 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:11:20.192127   62043 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:11:20.192159   62043 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:11:20.192253   62043 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:11:20.192344   62043 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:11:20.192451   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:11:20.202990   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:11:20.231649   62043 start.go:296] duration metric: took 132.636585ms for postStartSetup
	I0704 00:11:20.231689   62043 fix.go:56] duration metric: took 20.60266165s for fixHost
	I0704 00:11:20.231708   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.234708   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.235099   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.235129   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.235376   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.235606   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.235813   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.236042   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.236254   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:20.236447   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:20.236460   62043 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:11:20.340846   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051880.311820466
	
	I0704 00:11:20.340874   62043 fix.go:216] guest clock: 1720051880.311820466
	I0704 00:11:20.340883   62043 fix.go:229] Guest: 2024-07-04 00:11:20.311820466 +0000 UTC Remote: 2024-07-04 00:11:20.23169294 +0000 UTC m=+359.429189168 (delta=80.127526ms)
	I0704 00:11:20.340914   62043 fix.go:200] guest clock delta is within tolerance: 80.127526ms
	I0704 00:11:20.340938   62043 start.go:83] releasing machines lock for "no-preload-317739", held for 20.711925187s
	I0704 00:11:20.340963   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.341225   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:20.343787   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.344146   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.344188   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.344360   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.344810   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.344988   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.345061   62043 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:11:20.345094   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.345221   62043 ssh_runner.go:195] Run: cat /version.json
	I0704 00:11:20.345247   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.347703   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.347924   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.348121   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.348150   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.348307   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.348396   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.348423   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.348487   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.348562   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.348645   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.348706   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.348764   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:20.348864   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.348994   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:20.425023   62043 ssh_runner.go:195] Run: systemctl --version
	I0704 00:11:20.456031   62043 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:11:20.601693   62043 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:11:20.609524   62043 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:11:20.609617   62043 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:11:20.628076   62043 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:11:20.628105   62043 start.go:494] detecting cgroup driver to use...
	I0704 00:11:20.628180   62043 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:11:20.646749   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:11:20.663882   62043 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:11:20.663954   62043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:11:20.679371   62043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:11:20.697131   62043 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:11:20.820892   62043 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:11:20.978815   62043 docker.go:233] disabling docker service ...
	I0704 00:11:20.978893   62043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:11:21.003649   62043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:11:21.018708   62043 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:11:21.183699   62043 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:11:21.356015   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:11:21.371775   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:11:21.397901   62043 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:11:21.397977   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.410088   62043 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:11:21.410175   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.422267   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.433879   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.446464   62043 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:11:21.459090   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.474867   62043 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.497013   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.508678   62043 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:11:21.520003   62043 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:11:21.520074   62043 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:11:21.535778   62043 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:11:21.546698   62043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:21.707980   62043 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:11:21.855519   62043 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:11:21.855578   62043 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:11:21.861422   62043 start.go:562] Will wait 60s for crictl version
	I0704 00:11:21.861487   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:21.865898   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:11:21.909151   62043 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:11:21.909231   62043 ssh_runner.go:195] Run: crio --version
	I0704 00:11:21.940532   62043 ssh_runner.go:195] Run: crio --version
	I0704 00:11:21.971921   62043 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:11:17.738168   62905 node_ready.go:53] node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:19.738513   62905 node_ready.go:53] node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:22.238523   62905 node_ready.go:53] node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:18.617104   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:19.116325   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:19.616537   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.116518   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.616709   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:21.117177   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:21.617150   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:22.116980   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:22.616530   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:23.116838   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.824014   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:23.322845   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:21.973345   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:21.976425   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:21.976913   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:21.976941   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:21.977325   62043 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0704 00:11:21.982313   62043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:21.996098   62043 kubeadm.go:877] updating cluster {Name:no-preload-317739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.109 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:11:21.996252   62043 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:11:21.996296   62043 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:11:22.032178   62043 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:11:22.032210   62043 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.2 registry.k8s.io/kube-controller-manager:v1.30.2 registry.k8s.io/kube-scheduler:v1.30.2 registry.k8s.io/kube-proxy:v1.30.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0704 00:11:22.032271   62043 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:22.032305   62043 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.032319   62043 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.032373   62043 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0704 00:11:22.032399   62043 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.032400   62043 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.032375   62043 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.032429   62043 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.033814   62043 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0704 00:11:22.033826   62043 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.033847   62043 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.033812   62043 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.033815   62043 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.033912   62043 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:22.034052   62043 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.034138   62043 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.199984   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.209671   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.236796   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.240953   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.244893   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.260957   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.277666   62043 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.2" does not exist at hash "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe" in container runtime
	I0704 00:11:22.277712   62043 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.277764   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.311908   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0704 00:11:22.314095   62043 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0704 00:11:22.314137   62043 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.314190   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.400926   62043 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0704 00:11:22.400964   62043 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.401011   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.401043   62043 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.2" does not exist at hash "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772" in container runtime
	I0704 00:11:22.401080   62043 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.401121   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.401193   62043 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.2" does not exist at hash "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974" in container runtime
	I0704 00:11:22.401219   62043 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.401255   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.423931   62043 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.2" does not exist at hash "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940" in container runtime
	I0704 00:11:22.423977   62043 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.424024   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.424028   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.525952   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.525991   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.525961   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.526054   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.526136   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.526195   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2
	I0704 00:11:22.526285   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0704 00:11:22.649104   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0704 00:11:22.649109   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2
	I0704 00:11:22.649215   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2
	I0704 00:11:22.649248   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.2
	I0704 00:11:22.649268   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0704 00:11:22.649283   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0704 00:11:22.649217   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0704 00:11:22.649319   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0704 00:11:22.649349   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.2 (exists)
	I0704 00:11:22.649362   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0704 00:11:22.649386   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0704 00:11:22.649414   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2
	I0704 00:11:22.649486   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0704 00:11:22.654629   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.2 (exists)
	I0704 00:11:22.661840   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.2 (exists)
	I0704 00:11:22.919526   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:25.779714   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2: (3.130310457s)
	I0704 00:11:25.779744   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2 from cache
	I0704 00:11:25.779765   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.2
	I0704 00:11:25.779776   62043 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (3.130431638s)
	I0704 00:11:25.779796   62043 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.2: (3.13049417s)
	I0704 00:11:25.779816   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0704 00:11:25.779817   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2
	I0704 00:11:25.779827   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.2 (exists)
	I0704 00:11:25.779856   62043 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (3.130541061s)
	I0704 00:11:25.779869   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0704 00:11:25.779908   62043 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.860354689s)
	I0704 00:11:25.779936   62043 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0704 00:11:25.779958   62043 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:25.779991   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:23.248630   62905 node_ready.go:49] node "default-k8s-diff-port-995404" has status "Ready":"True"
	I0704 00:11:23.248671   62905 node_ready.go:38] duration metric: took 7.514485634s for node "default-k8s-diff-port-995404" to be "Ready" ...
	I0704 00:11:23.248683   62905 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:11:23.257650   62905 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.272673   62905 pod_ready.go:92] pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:23.272706   62905 pod_ready.go:81] duration metric: took 15.025018ms for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.272730   62905 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.277707   62905 pod_ready.go:92] pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:23.277738   62905 pod_ready.go:81] duration metric: took 4.999575ms for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.277758   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.282447   62905 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:23.282471   62905 pod_ready.go:81] duration metric: took 4.705643ms for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.282481   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.790312   62905 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:24.790337   62905 pod_ready.go:81] duration metric: took 1.507850095s for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.790346   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.837961   62905 pod_ready.go:92] pod "kube-proxy-pplqq" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:24.837985   62905 pod_ready.go:81] duration metric: took 47.632749ms for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.837994   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:25.238771   62905 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:25.238800   62905 pod_ready.go:81] duration metric: took 400.798382ms for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:25.238814   62905 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:27.246820   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:23.616811   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:24.117212   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:24.616915   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.117183   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.616495   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:26.117078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:26.617000   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:27.117057   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:27.616823   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:28.116508   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.326734   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:27.823765   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:27.940196   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2: (2.160353743s)
	I0704 00:11:27.940226   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2 from cache
	I0704 00:11:27.940234   62043 ssh_runner.go:235] Completed: which crictl: (2.160222414s)
	I0704 00:11:27.940320   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:27.940253   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0704 00:11:27.940393   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0704 00:11:27.979809   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0704 00:11:27.979954   62043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0704 00:11:29.403572   62043 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.423593257s)
	I0704 00:11:29.403607   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0704 00:11:29.403699   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2: (1.46328757s)
	I0704 00:11:29.403725   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2 from cache
	I0704 00:11:29.403761   62043 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0704 00:11:29.403822   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0704 00:11:29.247499   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:31.750339   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:28.616737   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:29.117100   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:29.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.117145   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:31.116945   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:31.616330   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:32.117101   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:32.616616   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:33.116964   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.322707   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:32.323955   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:33.202513   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.798664869s)
	I0704 00:11:33.202547   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0704 00:11:33.202573   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0704 00:11:33.202627   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0704 00:11:35.468074   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2: (2.26542461s)
	I0704 00:11:35.468099   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2 from cache
	I0704 00:11:35.468118   62043 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0704 00:11:35.468165   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0704 00:11:34.246217   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:36.246836   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:33.617132   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.117094   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.616914   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:35.117109   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:35.617095   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:36.117232   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:36.617221   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:37.117109   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:37.617116   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:38.116462   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.324255   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:36.823008   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:38.823183   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:37.443636   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.975448204s)
	I0704 00:11:37.443672   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0704 00:11:37.443706   62043 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0704 00:11:37.443759   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0704 00:11:38.405813   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0704 00:11:38.405859   62043 cache_images.go:123] Successfully loaded all cached images
	I0704 00:11:38.405868   62043 cache_images.go:92] duration metric: took 16.373643393s to LoadCachedImages
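The image-loading steps above transfer cached image tarballs to the guest and load them with podman over SSH. A rough local equivalent, shelling out with os/exec (the path and sudo usage are assumptions), could look like this:

package main

import (
	"fmt"
	"os/exec"
)

// loadImage shells out to podman to load a cached image tarball, roughly what the
// ssh_runner lines above do on the guest VM.
func loadImage(tarball string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := loadImage("/var/lib/minikube/images/storage-provisioner_v5"); err != nil {
		fmt.Println(err)
	}
}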
	I0704 00:11:38.405886   62043 kubeadm.go:928] updating node { 192.168.61.109 8443 v1.30.2 crio true true} ...
	I0704 00:11:38.406011   62043 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-317739 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:11:38.406077   62043 ssh_runner.go:195] Run: crio config
	I0704 00:11:38.452523   62043 cni.go:84] Creating CNI manager for ""
	I0704 00:11:38.452552   62043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:38.452564   62043 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:11:38.452585   62043 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.109 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-317739 NodeName:no-preload-317739 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:11:38.452729   62043 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-317739"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:11:38.452788   62043 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:11:38.463737   62043 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:11:38.463815   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:11:38.473969   62043 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0704 00:11:38.492719   62043 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:11:38.510951   62043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0704 00:11:38.530396   62043 ssh_runner.go:195] Run: grep 192.168.61.109	control-plane.minikube.internal$ /etc/hosts
	I0704 00:11:38.534736   62043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:38.548662   62043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:38.668693   62043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:11:38.686552   62043 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739 for IP: 192.168.61.109
	I0704 00:11:38.686580   62043 certs.go:194] generating shared ca certs ...
	I0704 00:11:38.686601   62043 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:38.686762   62043 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:11:38.686815   62043 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:11:38.686830   62043 certs.go:256] generating profile certs ...
	I0704 00:11:38.686955   62043 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/client.key
	I0704 00:11:38.687015   62043 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/apiserver.key.fbaaa8e5
	I0704 00:11:38.687048   62043 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/proxy-client.key
	I0704 00:11:38.687185   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:11:38.687241   62043 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:11:38.687253   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:11:38.687283   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:11:38.687310   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:11:38.687336   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:11:38.687384   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:11:38.688258   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:11:38.731211   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:11:38.769339   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:11:38.803861   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:11:38.856375   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0704 00:11:38.903970   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0704 00:11:38.933988   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:11:38.962742   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0704 00:11:38.990067   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:11:39.017654   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:11:39.044418   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:11:39.073061   62043 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:11:39.091979   62043 ssh_runner.go:195] Run: openssl version
	I0704 00:11:39.098299   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:11:39.110043   62043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:39.115156   62043 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:39.115229   62043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:39.122107   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:11:39.134113   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:11:39.145947   62043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:11:39.151296   62043 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:11:39.151367   62043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:11:39.158116   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:11:39.170555   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:11:39.182771   62043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:11:39.187922   62043 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:11:39.187980   62043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:11:39.194397   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:11:39.206665   62043 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:11:39.212352   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:11:39.219422   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:11:39.226488   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:11:39.233503   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:11:39.241906   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:11:39.249915   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
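The openssl x509 -checkend 86400 runs above verify that each certificate remains valid for at least another 24 hours. A small Go sketch of the same check using crypto/x509 (the file path is just an example) might be:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within d, mirroring the `openssl x509 -checkend` usage above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}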
	I0704 00:11:39.256813   62043 kubeadm.go:391] StartCluster: {Name:no-preload-317739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.109 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:11:39.256922   62043 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:11:39.256977   62043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:39.303203   62043 cri.go:89] found id: ""
	I0704 00:11:39.303281   62043 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:11:39.315407   62043 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:11:39.315446   62043 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:11:39.315454   62043 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:11:39.315508   62043 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:11:39.327630   62043 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:11:39.328741   62043 kubeconfig.go:125] found "no-preload-317739" server: "https://192.168.61.109:8443"
	I0704 00:11:39.330937   62043 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:11:39.341998   62043 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.109
	I0704 00:11:39.342043   62043 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:11:39.342054   62043 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:11:39.342111   62043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:39.388325   62043 cri.go:89] found id: ""
	I0704 00:11:39.388388   62043 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:11:39.408800   62043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:11:39.419600   62043 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:11:39.419627   62043 kubeadm.go:156] found existing configuration files:
	
	I0704 00:11:39.419679   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:11:39.429630   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:11:39.429685   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:11:39.440630   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:11:39.451260   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:11:39.451331   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:11:39.462847   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:11:39.473571   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:11:39.473636   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:11:39.484558   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:11:39.494914   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:11:39.494983   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:11:39.505423   62043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:11:39.517115   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:39.634364   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:40.407653   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:40.607831   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:40.692358   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:38.746247   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:41.244978   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:38.616739   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:39.117077   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:39.616185   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:40.117134   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:40.616879   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.116543   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.616267   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:42.117061   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:42.617080   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:43.117099   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.323333   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:43.823117   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:40.848560   62043 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:11:40.848652   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.349180   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.849767   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.870137   62043 api_server.go:72] duration metric: took 1.021586191s to wait for apiserver process to appear ...
	I0704 00:11:41.870167   62043 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:11:41.870195   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:41.870657   62043 api_server.go:269] stopped: https://192.168.61.109:8443/healthz: Get "https://192.168.61.109:8443/healthz": dial tcp 192.168.61.109:8443: connect: connection refused
	I0704 00:11:42.371347   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:44.502396   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:11:44.502439   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:11:44.502477   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:44.536593   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:11:44.536636   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:11:44.870429   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:44.877522   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:44.877559   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:45.371097   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:45.375932   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:45.375970   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:45.870776   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:45.880030   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 200:
	ok
	I0704 00:11:45.895702   62043 api_server.go:141] control plane version: v1.30.2
	I0704 00:11:45.895729   62043 api_server.go:131] duration metric: took 4.025556366s to wait for apiserver health ...
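The healthz polling above retries GET /healthz until the apiserver answers 200 "ok". A simplified sketch of such a wait loop follows; skipping TLS verification and querying anonymously are simplifications for the sketch, not necessarily what minikube does:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 "ok"
// or the timeout elapses.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %v", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.61.109:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}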
	I0704 00:11:45.895737   62043 cni.go:84] Creating CNI manager for ""
	I0704 00:11:45.895743   62043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:45.897406   62043 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:11:43.245949   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:45.747887   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:43.616868   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:44.117083   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:44.617057   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:45.116941   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:45.617066   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:46.117210   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:46.617116   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:47.116404   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:47.616609   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:48.116518   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:48.116611   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:48.159432   62670 cri.go:89] found id: ""
	I0704 00:11:48.159464   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.159477   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:48.159486   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:48.159553   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:48.199101   62670 cri.go:89] found id: ""
	I0704 00:11:48.199136   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.199144   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:48.199152   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:48.199208   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:48.238058   62670 cri.go:89] found id: ""
	I0704 00:11:48.238079   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.238087   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:48.238092   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:48.238145   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:46.322861   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:48.824946   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:45.898725   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:11:45.923585   62043 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0704 00:11:45.943430   62043 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:11:45.958774   62043 system_pods.go:59] 8 kube-system pods found
	I0704 00:11:45.958804   62043 system_pods.go:61] "coredns-7db6d8ff4d-pvtv9" [f03f871e-3b09-4fbb-96e5-3e71712dd2fb] Running
	I0704 00:11:45.958811   62043 system_pods.go:61] "etcd-no-preload-317739" [ad364ac9-924e-4e56-90c4-12cbf42c3e83] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0704 00:11:45.958824   62043 system_pods.go:61] "kube-apiserver-no-preload-317739" [2d503950-29dc-47b3-905a-afa85655ca7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0704 00:11:45.958832   62043 system_pods.go:61] "kube-controller-manager-no-preload-317739" [a9cbe158-bf00-478c-8d70-7347e37d68a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0704 00:11:45.958837   62043 system_pods.go:61] "kube-proxy-ffmrg" [c710ce9d-c513-46b1-bcf8-1582d1974861] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0704 00:11:45.958841   62043 system_pods.go:61] "kube-scheduler-no-preload-317739" [07a488b3-7beb-4919-ad57-3f0b55a73bb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0704 00:11:45.958846   62043 system_pods.go:61] "metrics-server-569cc877fc-qn22n" [378b139e-97d6-4dfa-9b56-99dda111ab31] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:11:45.958857   62043 system_pods.go:61] "storage-provisioner" [66ecf6fc-5070-4374-a733-479b9b3cdc0d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0704 00:11:45.958866   62043 system_pods.go:74] duration metric: took 15.413948ms to wait for pod list to return data ...
	I0704 00:11:45.958881   62043 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:11:45.965318   62043 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:11:45.965346   62043 node_conditions.go:123] node cpu capacity is 2
	I0704 00:11:45.965355   62043 node_conditions.go:105] duration metric: took 6.466225ms to run NodePressure ...
	I0704 00:11:45.965371   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:46.324716   62043 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0704 00:11:46.329924   62043 kubeadm.go:733] kubelet initialised
	I0704 00:11:46.329951   62043 kubeadm.go:734] duration metric: took 5.207276ms waiting for restarted kubelet to initialise ...
	I0704 00:11:46.329963   62043 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:11:46.336531   62043 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.341733   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.341758   62043 pod_ready.go:81] duration metric: took 5.197122ms for pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.341769   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.341778   62043 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.348317   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "etcd-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.348341   62043 pod_ready.go:81] duration metric: took 6.552656ms for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.348349   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "etcd-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.348355   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.353840   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "kube-apiserver-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.353864   62043 pod_ready.go:81] duration metric: took 5.503642ms for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.353873   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "kube-apiserver-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.353878   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.362159   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.362205   62043 pod_ready.go:81] duration metric: took 8.315884ms for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.362218   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.362226   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ffmrg" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:47.148496   62043 pod_ready.go:92] pod "kube-proxy-ffmrg" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:47.148533   62043 pod_ready.go:81] duration metric: took 786.291174ms for pod "kube-proxy-ffmrg" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:47.148544   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:49.154946   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:48.246804   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:50.249339   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:48.279472   62670 cri.go:89] found id: ""
	I0704 00:11:48.279510   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.279521   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:48.279529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:48.279598   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:48.316814   62670 cri.go:89] found id: ""
	I0704 00:11:48.316833   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.316843   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:48.316851   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:48.316907   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:48.358196   62670 cri.go:89] found id: ""
	I0704 00:11:48.358230   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.358247   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:48.358252   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:48.358310   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:48.404992   62670 cri.go:89] found id: ""
	I0704 00:11:48.405012   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.405019   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:48.405024   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:48.405092   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:48.444358   62670 cri.go:89] found id: ""
	I0704 00:11:48.444385   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.444393   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:48.444401   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:48.444414   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:48.502426   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:48.502462   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:48.517885   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:48.517915   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:48.654987   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:48.655007   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:48.655022   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:48.719857   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:48.719908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:51.265451   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:51.279847   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:51.279951   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:51.317907   62670 cri.go:89] found id: ""
	I0704 00:11:51.317942   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.317954   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:51.317963   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:51.318036   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:51.358329   62670 cri.go:89] found id: ""
	I0704 00:11:51.358361   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.358370   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:51.358375   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:51.358440   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:51.396389   62670 cri.go:89] found id: ""
	I0704 00:11:51.396418   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.396426   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:51.396433   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:51.396479   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:51.433921   62670 cri.go:89] found id: ""
	I0704 00:11:51.433954   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.433964   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:51.433972   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:51.434030   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:51.472956   62670 cri.go:89] found id: ""
	I0704 00:11:51.472986   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.472997   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:51.473003   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:51.473064   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:51.511241   62670 cri.go:89] found id: ""
	I0704 00:11:51.511269   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.511277   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:51.511283   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:51.511330   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:51.550622   62670 cri.go:89] found id: ""
	I0704 00:11:51.550647   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.550658   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:51.550665   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:51.550717   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:51.595101   62670 cri.go:89] found id: ""
	I0704 00:11:51.595129   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.595141   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:51.595152   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:51.595167   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:51.662852   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:51.662893   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:51.712755   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:51.712800   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:51.774138   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:51.774181   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:51.789895   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:51.789925   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:51.866376   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:51.325312   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:53.821791   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:51.156502   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:53.158089   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:55.656131   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:52.747469   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:55.248313   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:54.367005   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:54.382875   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:54.382938   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:54.419672   62670 cri.go:89] found id: ""
	I0704 00:11:54.419702   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.419713   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:54.419720   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:54.419790   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:54.464134   62670 cri.go:89] found id: ""
	I0704 00:11:54.464161   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.464170   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:54.464175   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:54.464233   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:54.502825   62670 cri.go:89] found id: ""
	I0704 00:11:54.502848   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.502855   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:54.502861   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:54.502913   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:54.542172   62670 cri.go:89] found id: ""
	I0704 00:11:54.542199   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.542207   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:54.542212   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:54.542275   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:54.580488   62670 cri.go:89] found id: ""
	I0704 00:11:54.580517   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.580527   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:54.580534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:54.580600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:54.616925   62670 cri.go:89] found id: ""
	I0704 00:11:54.616950   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.616959   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:54.616965   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:54.617011   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:54.654388   62670 cri.go:89] found id: ""
	I0704 00:11:54.654416   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.654426   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:54.654434   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:54.654492   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:54.697867   62670 cri.go:89] found id: ""
	I0704 00:11:54.697895   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.697905   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:54.697916   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:54.697948   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:54.753899   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:54.753933   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:54.768684   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:54.768708   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:54.843026   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:54.843052   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:54.843069   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:54.920335   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:54.920388   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:57.463384   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:57.479721   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:57.479809   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:57.521845   62670 cri.go:89] found id: ""
	I0704 00:11:57.521931   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.521944   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:57.521952   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:57.522017   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:57.559595   62670 cri.go:89] found id: ""
	I0704 00:11:57.559626   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.559635   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:57.559642   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:57.559704   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:57.600881   62670 cri.go:89] found id: ""
	I0704 00:11:57.600906   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.600917   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:57.600923   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:57.600984   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:57.646031   62670 cri.go:89] found id: ""
	I0704 00:11:57.646059   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.646068   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:57.646073   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:57.646141   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:57.692031   62670 cri.go:89] found id: ""
	I0704 00:11:57.692057   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.692065   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:57.692071   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:57.692118   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:57.730220   62670 cri.go:89] found id: ""
	I0704 00:11:57.730252   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.730263   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:57.730271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:57.730335   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:57.771323   62670 cri.go:89] found id: ""
	I0704 00:11:57.771350   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.771361   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:57.771369   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:57.771441   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:57.808590   62670 cri.go:89] found id: ""
	I0704 00:11:57.808617   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.808625   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:57.808633   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:57.808644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:57.825034   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:57.825063   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:57.906713   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:57.906734   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:57.906746   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:57.988497   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:57.988533   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:58.056774   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:58.056805   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:55.825329   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:58.322936   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:57.657693   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:58.655007   62043 pod_ready.go:92] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:58.655031   62043 pod_ready.go:81] duration metric: took 11.506481518s for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:58.655040   62043 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace to be "Ready" ...
	I0704 00:12:00.662830   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:57.749330   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:00.244482   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:02.245230   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:00.609663   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:00.623785   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:00.623851   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:00.669164   62670 cri.go:89] found id: ""
	I0704 00:12:00.669187   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.669194   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:00.669200   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:00.669253   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:00.710018   62670 cri.go:89] found id: ""
	I0704 00:12:00.710044   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.710052   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:00.710057   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:00.710107   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:00.747778   62670 cri.go:89] found id: ""
	I0704 00:12:00.747803   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.747810   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:00.747815   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:00.747900   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:00.787312   62670 cri.go:89] found id: ""
	I0704 00:12:00.787339   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.787347   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:00.787352   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:00.787399   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:00.828018   62670 cri.go:89] found id: ""
	I0704 00:12:00.828049   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.828061   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:00.828070   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:00.828135   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:00.864695   62670 cri.go:89] found id: ""
	I0704 00:12:00.864723   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.864734   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:00.864742   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:00.864800   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:00.907804   62670 cri.go:89] found id: ""
	I0704 00:12:00.907833   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.907843   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:00.907850   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:00.907928   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:00.951505   62670 cri.go:89] found id: ""
	I0704 00:12:00.951536   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.951547   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:00.951557   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:00.951573   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:00.997067   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:00.997115   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:01.049321   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:01.049356   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:01.066878   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:01.066908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:01.152888   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:01.152919   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:01.152935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:00.823441   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:03.322789   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:03.161704   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:05.662715   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:04.247328   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:06.746227   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:03.737731   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:03.753151   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:03.753244   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:03.816045   62670 cri.go:89] found id: ""
	I0704 00:12:03.816076   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.816087   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:03.816095   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:03.816154   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:03.857041   62670 cri.go:89] found id: ""
	I0704 00:12:03.857070   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.857081   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:03.857088   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:03.857152   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:03.896734   62670 cri.go:89] found id: ""
	I0704 00:12:03.896763   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.896774   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:03.896781   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:03.896836   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:03.936142   62670 cri.go:89] found id: ""
	I0704 00:12:03.936168   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.936178   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:03.936183   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:03.936258   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:03.974599   62670 cri.go:89] found id: ""
	I0704 00:12:03.974623   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.974631   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:03.974636   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:03.974686   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:04.012822   62670 cri.go:89] found id: ""
	I0704 00:12:04.012851   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.012859   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:04.012865   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:04.012999   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:04.051360   62670 cri.go:89] found id: ""
	I0704 00:12:04.051393   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.051411   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:04.051420   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:04.051485   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:04.090587   62670 cri.go:89] found id: ""
	I0704 00:12:04.090616   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.090627   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:04.090638   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:04.090654   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:04.167427   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:04.167450   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:04.167465   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:04.250550   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:04.250594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:04.299970   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:04.300003   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:04.352960   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:04.352994   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:06.871729   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:06.884948   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:06.885027   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:06.920910   62670 cri.go:89] found id: ""
	I0704 00:12:06.920939   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.920950   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:06.920957   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:06.921024   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:06.958701   62670 cri.go:89] found id: ""
	I0704 00:12:06.958731   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.958742   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:06.958750   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:06.958808   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:06.997468   62670 cri.go:89] found id: ""
	I0704 00:12:06.997499   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.997509   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:06.997515   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:06.997564   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:07.033767   62670 cri.go:89] found id: ""
	I0704 00:12:07.033795   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.033806   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:07.033814   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:07.033896   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:07.074189   62670 cri.go:89] found id: ""
	I0704 00:12:07.074218   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.074229   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:07.074241   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:07.074307   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:07.110517   62670 cri.go:89] found id: ""
	I0704 00:12:07.110544   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.110554   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:07.110562   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:07.110615   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:07.146600   62670 cri.go:89] found id: ""
	I0704 00:12:07.146627   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.146635   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:07.146641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:07.146690   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:07.180799   62670 cri.go:89] found id: ""
	I0704 00:12:07.180826   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.180834   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:07.180843   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:07.180859   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:07.222473   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:07.222503   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:07.281453   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:07.281498   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:07.296335   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:07.296364   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:07.375751   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:07.375782   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:07.375805   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:05.323723   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:07.822320   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:07.663501   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:10.163774   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:09.247753   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:11.746082   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:09.954585   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:09.970379   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:09.970470   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:10.011987   62670 cri.go:89] found id: ""
	I0704 00:12:10.012017   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.012028   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:10.012035   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:10.012102   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:10.054940   62670 cri.go:89] found id: ""
	I0704 00:12:10.054971   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.054982   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:10.054989   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:10.055051   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:10.096048   62670 cri.go:89] found id: ""
	I0704 00:12:10.096079   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.096087   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:10.096093   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:10.096143   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:10.141795   62670 cri.go:89] found id: ""
	I0704 00:12:10.141818   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.141826   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:10.141831   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:10.141892   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:10.188257   62670 cri.go:89] found id: ""
	I0704 00:12:10.188283   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.188295   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:10.188302   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:10.188369   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:10.249134   62670 cri.go:89] found id: ""
	I0704 00:12:10.249157   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.249167   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:10.249174   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:10.249233   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:10.309586   62670 cri.go:89] found id: ""
	I0704 00:12:10.309611   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.309622   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:10.309632   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:10.309689   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:10.351027   62670 cri.go:89] found id: ""
	I0704 00:12:10.351054   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.351065   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:10.351074   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:10.351086   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:10.404371   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:10.404411   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:10.419379   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:10.419410   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:10.502977   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:10.503001   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:10.503017   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:10.582149   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:10.582185   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:13.122828   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:13.138522   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:13.138591   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:13.181603   62670 cri.go:89] found id: ""
	I0704 00:12:13.181634   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.181645   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:13.181653   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:13.181711   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:13.219066   62670 cri.go:89] found id: ""
	I0704 00:12:13.219090   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.219098   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:13.219103   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:13.219159   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:09.822778   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:12.322555   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:12.165249   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:14.663051   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:14.248889   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:16.746104   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:13.259570   62670 cri.go:89] found id: ""
	I0704 00:12:13.259591   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.259599   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:13.259604   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:13.259658   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:13.301577   62670 cri.go:89] found id: ""
	I0704 00:12:13.301605   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.301617   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:13.301625   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:13.301689   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:13.339546   62670 cri.go:89] found id: ""
	I0704 00:12:13.339570   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.339584   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:13.339592   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:13.339649   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:13.378631   62670 cri.go:89] found id: ""
	I0704 00:12:13.378654   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.378665   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:13.378672   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:13.378733   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:13.416818   62670 cri.go:89] found id: ""
	I0704 00:12:13.416843   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.416851   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:13.416856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:13.416908   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:13.452538   62670 cri.go:89] found id: ""
	I0704 00:12:13.452562   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.452570   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:13.452579   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:13.452590   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:13.505556   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:13.505594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:13.522506   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:13.522542   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:13.604513   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:13.604536   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:13.604553   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:13.681501   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:13.681536   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:16.222955   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:16.241979   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:16.242086   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:16.299662   62670 cri.go:89] found id: ""
	I0704 00:12:16.299690   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.299702   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:16.299710   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:16.299772   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:16.342898   62670 cri.go:89] found id: ""
	I0704 00:12:16.342934   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.342944   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:16.342952   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:16.343014   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:16.382387   62670 cri.go:89] found id: ""
	I0704 00:12:16.382408   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.382416   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:16.382422   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:16.382482   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:16.421830   62670 cri.go:89] found id: ""
	I0704 00:12:16.421852   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.421861   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:16.421874   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:16.421934   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:16.459248   62670 cri.go:89] found id: ""
	I0704 00:12:16.459272   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.459282   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:16.459289   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:16.459347   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:16.494675   62670 cri.go:89] found id: ""
	I0704 00:12:16.494704   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.494714   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:16.494725   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:16.494789   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:16.534319   62670 cri.go:89] found id: ""
	I0704 00:12:16.534344   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.534352   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:16.534358   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:16.534407   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:16.571422   62670 cri.go:89] found id: ""
	I0704 00:12:16.571455   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.571467   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:16.571478   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:16.571493   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:16.651019   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:16.651040   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:16.651058   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:16.726538   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:16.726574   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:16.771114   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:16.771145   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:16.824495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:16.824532   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:14.323436   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:16.822647   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:18.823509   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:16.666213   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:19.162586   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:18.746307   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:20.747743   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:19.340941   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:19.355501   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:19.355580   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:19.396845   62670 cri.go:89] found id: ""
	I0704 00:12:19.396872   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.396882   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:19.396902   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:19.396962   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:19.440805   62670 cri.go:89] found id: ""
	I0704 00:12:19.440835   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.440845   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:19.440852   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:19.440913   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:19.477781   62670 cri.go:89] found id: ""
	I0704 00:12:19.477809   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.477820   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:19.477827   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:19.477890   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:19.513042   62670 cri.go:89] found id: ""
	I0704 00:12:19.513067   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.513077   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:19.513084   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:19.513142   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:19.547775   62670 cri.go:89] found id: ""
	I0704 00:12:19.547804   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.547812   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:19.547818   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:19.547867   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:19.586103   62670 cri.go:89] found id: ""
	I0704 00:12:19.586131   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.586142   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:19.586149   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:19.586219   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:19.625529   62670 cri.go:89] found id: ""
	I0704 00:12:19.625556   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.625567   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:19.625574   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:19.625644   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:19.663835   62670 cri.go:89] found id: ""
	I0704 00:12:19.663860   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.663870   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:19.663903   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:19.663919   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:19.719204   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:19.719245   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:19.733871   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:19.733909   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:19.817212   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:19.817240   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:19.817260   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:19.894555   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:19.894595   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:22.438204   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:22.451438   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:22.451507   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:22.489196   62670 cri.go:89] found id: ""
	I0704 00:12:22.489219   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.489226   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:22.489232   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:22.489278   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:22.523870   62670 cri.go:89] found id: ""
	I0704 00:12:22.523917   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.523929   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:22.523936   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:22.523992   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:22.564799   62670 cri.go:89] found id: ""
	I0704 00:12:22.564827   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.564839   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:22.564846   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:22.564905   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:22.603993   62670 cri.go:89] found id: ""
	I0704 00:12:22.604019   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.604027   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:22.604033   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:22.604086   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:22.639749   62670 cri.go:89] found id: ""
	I0704 00:12:22.639780   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.639791   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:22.639799   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:22.639855   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:22.678173   62670 cri.go:89] found id: ""
	I0704 00:12:22.678206   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.678214   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:22.678227   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:22.678279   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:22.718934   62670 cri.go:89] found id: ""
	I0704 00:12:22.718962   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.718971   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:22.718977   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:22.719029   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:22.756334   62670 cri.go:89] found id: ""
	I0704 00:12:22.756362   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.756373   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:22.756383   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:22.756397   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:22.835079   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:22.835113   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:22.877138   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:22.877170   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:22.930427   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:22.930466   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:22.945810   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:22.945838   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:23.021251   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:21.323951   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:23.822002   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:21.165297   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:23.661688   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:23.245394   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:25.748364   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:25.522380   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:25.536705   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:25.536776   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:25.575126   62670 cri.go:89] found id: ""
	I0704 00:12:25.575154   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.575162   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:25.575168   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:25.575223   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:25.612447   62670 cri.go:89] found id: ""
	I0704 00:12:25.612480   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.612488   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:25.612494   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:25.612542   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:25.651652   62670 cri.go:89] found id: ""
	I0704 00:12:25.651677   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.651688   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:25.651696   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:25.651751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:25.690007   62670 cri.go:89] found id: ""
	I0704 00:12:25.690034   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.690042   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:25.690049   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:25.690105   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:25.725041   62670 cri.go:89] found id: ""
	I0704 00:12:25.725093   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.725106   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:25.725114   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:25.725196   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:25.766324   62670 cri.go:89] found id: ""
	I0704 00:12:25.766350   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.766361   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:25.766369   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:25.766430   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:25.803515   62670 cri.go:89] found id: ""
	I0704 00:12:25.803540   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.803548   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:25.803553   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:25.803613   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:25.845016   62670 cri.go:89] found id: ""
	I0704 00:12:25.845046   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.845057   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:25.845067   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:25.845089   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:25.898536   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:25.898570   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:25.913300   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:25.913330   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:25.987372   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:25.987390   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:25.987402   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:26.073931   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:26.073982   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:25.824395   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.324952   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:26.162199   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.162671   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:30.662302   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.246148   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:30.247149   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.621179   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:28.634247   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:28.634321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:28.672433   62670 cri.go:89] found id: ""
	I0704 00:12:28.672458   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.672467   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:28.672473   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:28.672522   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:28.712000   62670 cri.go:89] found id: ""
	I0704 00:12:28.712036   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.712049   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:28.712059   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:28.712126   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:28.751170   62670 cri.go:89] found id: ""
	I0704 00:12:28.751202   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.751213   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:28.751222   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:28.751283   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:28.788015   62670 cri.go:89] found id: ""
	I0704 00:12:28.788050   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.788062   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:28.788071   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:28.788141   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:28.826467   62670 cri.go:89] found id: ""
	I0704 00:12:28.826501   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.826511   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:28.826518   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:28.826578   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:28.864375   62670 cri.go:89] found id: ""
	I0704 00:12:28.864397   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.864403   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:28.864408   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:28.864461   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:28.900137   62670 cri.go:89] found id: ""
	I0704 00:12:28.900160   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.900167   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:28.900173   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:28.900220   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:28.934865   62670 cri.go:89] found id: ""
	I0704 00:12:28.934886   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.934894   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:28.934902   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:28.934914   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:28.984100   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:28.984136   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:29.000311   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:29.000340   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:29.083272   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:29.083304   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:29.083318   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:29.164613   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:29.164644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:31.711402   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:31.725076   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:31.725134   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:31.763088   62670 cri.go:89] found id: ""
	I0704 00:12:31.763111   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.763120   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:31.763127   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:31.763197   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:31.800920   62670 cri.go:89] found id: ""
	I0704 00:12:31.800942   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.800952   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:31.800958   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:31.801001   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:31.840841   62670 cri.go:89] found id: ""
	I0704 00:12:31.840872   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.840889   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:31.840897   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:31.840956   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:31.883757   62670 cri.go:89] found id: ""
	I0704 00:12:31.883784   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.883792   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:31.883797   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:31.883855   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:31.922234   62670 cri.go:89] found id: ""
	I0704 00:12:31.922261   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.922270   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:31.922275   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:31.922323   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:31.959691   62670 cri.go:89] found id: ""
	I0704 00:12:31.959717   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.959725   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:31.959731   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:31.959789   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:31.997069   62670 cri.go:89] found id: ""
	I0704 00:12:31.997098   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.997106   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:31.997112   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:31.997182   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:32.032437   62670 cri.go:89] found id: ""
	I0704 00:12:32.032475   62670 logs.go:276] 0 containers: []
	W0704 00:12:32.032484   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:32.032495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:32.032510   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:32.046791   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:32.046823   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:32.118482   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:32.118506   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:32.118519   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:32.206600   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:32.206638   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:32.249940   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:32.249967   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:30.823529   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:33.322802   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:33.161603   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:35.162213   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:32.746670   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:34.746760   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:37.245283   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:34.808364   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:34.822973   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:34.823039   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:34.859617   62670 cri.go:89] found id: ""
	I0704 00:12:34.859640   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.859649   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:34.859654   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:34.859703   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:34.899724   62670 cri.go:89] found id: ""
	I0704 00:12:34.899752   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.899762   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:34.899768   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:34.899830   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:34.939063   62670 cri.go:89] found id: ""
	I0704 00:12:34.939090   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.939098   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:34.939104   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:34.939185   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:34.979062   62670 cri.go:89] found id: ""
	I0704 00:12:34.979090   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.979101   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:34.979108   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:34.979168   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:35.019580   62670 cri.go:89] found id: ""
	I0704 00:12:35.019613   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.019621   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:35.019626   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:35.019674   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:35.064364   62670 cri.go:89] found id: ""
	I0704 00:12:35.064391   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.064399   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:35.064404   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:35.064463   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:35.105004   62670 cri.go:89] found id: ""
	I0704 00:12:35.105032   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.105040   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:35.105046   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:35.105101   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:35.143656   62670 cri.go:89] found id: ""
	I0704 00:12:35.143681   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.143689   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:35.143698   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:35.143709   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:35.203016   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:35.203050   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:35.218808   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:35.218840   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:35.298247   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:35.298269   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:35.298284   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:35.376425   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:35.376463   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:37.918592   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:37.932291   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:37.932370   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:37.967657   62670 cri.go:89] found id: ""
	I0704 00:12:37.967680   62670 logs.go:276] 0 containers: []
	W0704 00:12:37.967688   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:37.967694   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:37.967740   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:38.005522   62670 cri.go:89] found id: ""
	I0704 00:12:38.005557   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.005569   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:38.005576   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:38.005634   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:38.043475   62670 cri.go:89] found id: ""
	I0704 00:12:38.043505   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.043516   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:38.043524   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:38.043589   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:38.080520   62670 cri.go:89] found id: ""
	I0704 00:12:38.080548   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.080557   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:38.080563   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:38.080612   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:38.116292   62670 cri.go:89] found id: ""
	I0704 00:12:38.116322   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.116332   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:38.116338   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:38.116404   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:38.158430   62670 cri.go:89] found id: ""
	I0704 00:12:38.158468   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.158480   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:38.158489   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:38.158567   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:38.198119   62670 cri.go:89] found id: ""
	I0704 00:12:38.198150   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.198162   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:38.198172   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:38.198253   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:38.235757   62670 cri.go:89] found id: ""
	I0704 00:12:38.235784   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.235792   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:38.235800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:38.235811   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:12:35.324339   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:37.325301   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:37.162347   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:39.162620   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:39.246064   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:41.745179   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	W0704 00:12:38.329002   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:38.329026   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:38.329041   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:38.414451   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:38.414492   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:38.461058   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:38.461089   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:38.518574   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:38.518609   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:41.051653   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:41.066287   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:41.066364   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:41.106709   62670 cri.go:89] found id: ""
	I0704 00:12:41.106733   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.106747   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:41.106753   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:41.106815   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:41.144371   62670 cri.go:89] found id: ""
	I0704 00:12:41.144399   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.144410   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:41.144417   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:41.144491   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:41.183690   62670 cri.go:89] found id: ""
	I0704 00:12:41.183717   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.183727   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:41.183734   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:41.183818   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:41.219744   62670 cri.go:89] found id: ""
	I0704 00:12:41.219767   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.219777   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:41.219790   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:41.219850   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:41.259070   62670 cri.go:89] found id: ""
	I0704 00:12:41.259091   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.259098   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:41.259103   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:41.259162   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:41.297956   62670 cri.go:89] found id: ""
	I0704 00:12:41.297987   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.297995   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:41.298001   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:41.298061   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:41.335521   62670 cri.go:89] found id: ""
	I0704 00:12:41.335599   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.335616   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:41.335624   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:41.335688   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:41.374777   62670 cri.go:89] found id: ""
	I0704 00:12:41.374817   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.374838   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:41.374848   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:41.374868   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:41.426282   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:41.426324   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:41.441309   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:41.441342   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:41.518350   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:41.518373   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:41.518395   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:41.596426   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:41.596467   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:39.824742   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:42.323920   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:41.162829   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:43.662181   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:45.662641   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:43.745586   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:45.747024   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:44.139291   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:44.152300   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:44.152370   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:44.194350   62670 cri.go:89] found id: ""
	I0704 00:12:44.194380   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.194394   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:44.194401   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:44.194470   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:44.229630   62670 cri.go:89] found id: ""
	I0704 00:12:44.229657   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.229666   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:44.229671   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:44.229724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:44.271235   62670 cri.go:89] found id: ""
	I0704 00:12:44.271260   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.271269   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:44.271276   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:44.271342   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:44.336464   62670 cri.go:89] found id: ""
	I0704 00:12:44.336499   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.336509   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:44.336523   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:44.336579   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:44.379482   62670 cri.go:89] found id: ""
	I0704 00:12:44.379513   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.379524   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:44.379530   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:44.379594   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:44.417234   62670 cri.go:89] found id: ""
	I0704 00:12:44.417267   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.417278   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:44.417285   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:44.417345   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:44.454222   62670 cri.go:89] found id: ""
	I0704 00:12:44.454249   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.454259   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:44.454266   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:44.454328   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:44.491999   62670 cri.go:89] found id: ""
	I0704 00:12:44.492028   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.492039   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:44.492050   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:44.492065   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:44.543261   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:44.543298   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:44.558348   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:44.558378   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:44.640786   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:44.640805   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:44.640820   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:44.727870   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:44.727945   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:47.274461   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:47.288930   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:47.288995   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:47.329153   62670 cri.go:89] found id: ""
	I0704 00:12:47.329178   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.329189   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:47.329195   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:47.329262   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:47.366786   62670 cri.go:89] found id: ""
	I0704 00:12:47.366814   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.366825   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:47.366832   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:47.366900   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:47.404048   62670 cri.go:89] found id: ""
	I0704 00:12:47.404089   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.404098   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:47.404106   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:47.404170   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:47.440298   62670 cri.go:89] found id: ""
	I0704 00:12:47.440329   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.440341   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:47.440348   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:47.440408   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:47.478297   62670 cri.go:89] found id: ""
	I0704 00:12:47.478329   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.478340   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:47.478347   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:47.478406   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:47.514114   62670 cri.go:89] found id: ""
	I0704 00:12:47.514143   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.514152   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:47.514158   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:47.514221   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:47.558404   62670 cri.go:89] found id: ""
	I0704 00:12:47.558437   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.558449   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:47.558456   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:47.558519   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:47.602782   62670 cri.go:89] found id: ""
	I0704 00:12:47.602824   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.602834   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:47.602845   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:47.602860   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:47.655514   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:47.655556   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:47.672807   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:47.672844   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:47.763562   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:47.763583   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:47.763596   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:47.852498   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:47.852542   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:44.822923   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:46.824707   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:48.162671   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:50.664606   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:48.247464   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:50.747846   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:50.400046   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:50.413559   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:50.413621   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:50.450898   62670 cri.go:89] found id: ""
	I0704 00:12:50.450927   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.450938   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:50.450948   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:50.451002   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:50.487786   62670 cri.go:89] found id: ""
	I0704 00:12:50.487822   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.487832   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:50.487838   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:50.487923   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:50.525298   62670 cri.go:89] found id: ""
	I0704 00:12:50.525324   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.525334   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:50.525343   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:50.525409   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:50.563742   62670 cri.go:89] found id: ""
	I0704 00:12:50.563767   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.563775   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:50.563782   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:50.563839   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:50.600977   62670 cri.go:89] found id: ""
	I0704 00:12:50.601011   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.601023   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:50.601031   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:50.601105   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:50.637489   62670 cri.go:89] found id: ""
	I0704 00:12:50.637517   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.637527   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:50.637534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:50.637594   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:50.684342   62670 cri.go:89] found id: ""
	I0704 00:12:50.684371   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.684381   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:50.684389   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:50.684572   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:50.743111   62670 cri.go:89] found id: ""
	I0704 00:12:50.743143   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.743153   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:50.743163   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:50.743177   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:50.806436   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:50.806482   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:50.823559   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:50.823594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:50.892600   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:50.892629   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:50.892642   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:50.969817   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:50.969851   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:49.323144   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:51.822264   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.824409   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.161649   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:55.163049   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.245597   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:55.746766   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.512548   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:53.525835   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:53.525903   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:53.563303   62670 cri.go:89] found id: ""
	I0704 00:12:53.563335   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.563349   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:53.563356   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:53.563410   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:53.602687   62670 cri.go:89] found id: ""
	I0704 00:12:53.602720   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.602731   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:53.602739   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:53.602797   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:53.638109   62670 cri.go:89] found id: ""
	I0704 00:12:53.638141   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.638150   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:53.638158   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:53.638220   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:53.678073   62670 cri.go:89] found id: ""
	I0704 00:12:53.678096   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.678106   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:53.678114   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:53.678172   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:53.713995   62670 cri.go:89] found id: ""
	I0704 00:12:53.714028   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.714041   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:53.714049   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:53.714108   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:53.751761   62670 cri.go:89] found id: ""
	I0704 00:12:53.751783   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.751790   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:53.751796   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:53.751856   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:53.792662   62670 cri.go:89] found id: ""
	I0704 00:12:53.792692   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.792703   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:53.792710   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:53.792769   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:53.833970   62670 cri.go:89] found id: ""
	I0704 00:12:53.833999   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.834010   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:53.834021   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:53.834040   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:53.918330   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:53.918363   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:53.918380   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:53.999491   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:53.999524   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:54.042415   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:54.042451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:54.096427   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:54.096466   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:56.611252   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:56.624364   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:56.624427   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:56.662953   62670 cri.go:89] found id: ""
	I0704 00:12:56.662971   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.662978   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:56.662983   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:56.663035   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:56.700093   62670 cri.go:89] found id: ""
	I0704 00:12:56.700125   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.700136   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:56.700144   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:56.700209   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:56.737358   62670 cri.go:89] found id: ""
	I0704 00:12:56.737395   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.737405   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:56.737412   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:56.737479   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:56.772625   62670 cri.go:89] found id: ""
	I0704 00:12:56.772652   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.772663   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:56.772671   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:56.772731   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:56.810693   62670 cri.go:89] found id: ""
	I0704 00:12:56.810722   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.810731   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:56.810736   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:56.810787   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:56.851646   62670 cri.go:89] found id: ""
	I0704 00:12:56.851671   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.851678   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:56.851684   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:56.851733   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:56.894196   62670 cri.go:89] found id: ""
	I0704 00:12:56.894230   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.894240   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:56.894246   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:56.894302   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:56.935029   62670 cri.go:89] found id: ""
	I0704 00:12:56.935054   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.935062   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:56.935072   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:56.935088   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:57.017630   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:57.017658   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:57.017675   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:57.103861   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:57.103916   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:57.147466   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:57.147497   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:57.199798   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:57.199836   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:56.325738   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:58.822885   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:57.166306   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:59.663207   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:58.245373   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:00.246495   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:59.716709   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:59.731778   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:59.731849   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:59.770210   62670 cri.go:89] found id: ""
	I0704 00:12:59.770241   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.770249   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:59.770259   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:59.770319   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:59.816446   62670 cri.go:89] found id: ""
	I0704 00:12:59.816473   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.816483   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:59.816490   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:59.816570   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:59.854879   62670 cri.go:89] found id: ""
	I0704 00:12:59.854910   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.854921   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:59.854928   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:59.854978   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:59.891370   62670 cri.go:89] found id: ""
	I0704 00:12:59.891394   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.891401   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:59.891407   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:59.891467   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:59.926067   62670 cri.go:89] found id: ""
	I0704 00:12:59.926089   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.926096   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:59.926102   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:59.926158   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:59.961646   62670 cri.go:89] found id: ""
	I0704 00:12:59.961674   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.961685   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:59.961692   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:59.961770   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:59.998290   62670 cri.go:89] found id: ""
	I0704 00:12:59.998322   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.998333   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:59.998342   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:59.998408   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:00.035410   62670 cri.go:89] found id: ""
	I0704 00:13:00.035438   62670 logs.go:276] 0 containers: []
	W0704 00:13:00.035446   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:00.035455   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:00.035471   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:00.090614   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:00.090655   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:00.105228   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:00.105265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:00.188082   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:00.188121   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:00.188139   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:00.275656   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:00.275702   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:02.823447   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:02.837684   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:02.837745   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:02.875275   62670 cri.go:89] found id: ""
	I0704 00:13:02.875314   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.875324   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:02.875339   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:02.875399   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:02.910681   62670 cri.go:89] found id: ""
	I0704 00:13:02.910715   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.910727   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:02.910735   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:02.910797   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:02.948937   62670 cri.go:89] found id: ""
	I0704 00:13:02.948963   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.948972   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:02.948979   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:02.949039   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:02.984232   62670 cri.go:89] found id: ""
	I0704 00:13:02.984259   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.984267   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:02.984271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:02.984321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:03.021493   62670 cri.go:89] found id: ""
	I0704 00:13:03.021517   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.021525   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:03.021534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:03.021583   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:03.058829   62670 cri.go:89] found id: ""
	I0704 00:13:03.058860   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.058870   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:03.058877   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:03.058944   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:03.104195   62670 cri.go:89] found id: ""
	I0704 00:13:03.104225   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.104234   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:03.104242   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:03.104303   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:03.140913   62670 cri.go:89] found id: ""
	I0704 00:13:03.140941   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.140951   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:03.140961   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:03.140976   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:03.194901   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:03.194945   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:03.209366   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:03.209395   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:13:01.322711   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:03.323610   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:02.161800   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:04.162195   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:02.746479   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:05.245132   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:07.245877   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	W0704 00:13:03.292892   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:03.292916   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:03.292934   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:03.369764   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:03.369800   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:05.917514   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:05.931529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:05.931592   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:05.976164   62670 cri.go:89] found id: ""
	I0704 00:13:05.976186   62670 logs.go:276] 0 containers: []
	W0704 00:13:05.976193   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:05.976199   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:05.976258   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:06.013568   62670 cri.go:89] found id: ""
	I0704 00:13:06.013593   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.013602   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:06.013609   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:06.013678   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:06.050848   62670 cri.go:89] found id: ""
	I0704 00:13:06.050886   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.050894   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:06.050900   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:06.050958   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:06.090919   62670 cri.go:89] found id: ""
	I0704 00:13:06.090945   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.090956   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:06.090967   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:06.091016   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:06.129210   62670 cri.go:89] found id: ""
	I0704 00:13:06.129237   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.129246   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:06.129252   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:06.129304   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:06.166777   62670 cri.go:89] found id: ""
	I0704 00:13:06.166801   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.166809   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:06.166817   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:06.166878   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:06.204900   62670 cri.go:89] found id: ""
	I0704 00:13:06.204929   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.204940   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:06.204947   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:06.205008   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:06.244196   62670 cri.go:89] found id: ""
	I0704 00:13:06.244274   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.244291   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:06.244301   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:06.244317   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:06.258834   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:06.258873   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:06.339126   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:06.339151   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:06.339165   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:06.416220   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:06.416265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:06.458188   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:06.458221   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:05.824313   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:08.323361   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:06.162328   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:08.666333   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:09.248287   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:11.746215   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:09.014816   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:09.028957   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:09.029021   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:09.072427   62670 cri.go:89] found id: ""
	I0704 00:13:09.072455   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.072465   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:09.072472   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:09.072529   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:09.109630   62670 cri.go:89] found id: ""
	I0704 00:13:09.109660   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.109669   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:09.109675   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:09.109724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:09.152873   62670 cri.go:89] found id: ""
	I0704 00:13:09.152901   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.152911   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:09.152918   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:09.152976   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:09.189390   62670 cri.go:89] found id: ""
	I0704 00:13:09.189421   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.189431   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:09.189446   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:09.189515   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:09.227335   62670 cri.go:89] found id: ""
	I0704 00:13:09.227364   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.227375   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:09.227382   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:09.227444   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:09.269157   62670 cri.go:89] found id: ""
	I0704 00:13:09.269189   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.269201   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:09.269208   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:09.269259   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:09.317222   62670 cri.go:89] found id: ""
	I0704 00:13:09.317249   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.317257   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:09.317263   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:09.317324   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:09.355578   62670 cri.go:89] found id: ""
	I0704 00:13:09.355610   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.355618   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:09.355626   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:09.355637   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:09.396279   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:09.396316   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:09.451358   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:09.451398   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:09.466565   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:09.466599   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:09.545001   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:09.545043   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:09.545066   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:12.124211   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:12.139131   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:12.139229   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:12.178690   62670 cri.go:89] found id: ""
	I0704 00:13:12.178719   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.178726   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:12.178732   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:12.178783   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:12.215470   62670 cri.go:89] found id: ""
	I0704 00:13:12.215511   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.215524   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:12.215533   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:12.215620   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:12.256615   62670 cri.go:89] found id: ""
	I0704 00:13:12.256667   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.256682   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:12.256688   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:12.256740   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:12.298606   62670 cri.go:89] found id: ""
	I0704 00:13:12.298631   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.298643   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:12.298650   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:12.298730   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:12.338152   62670 cri.go:89] found id: ""
	I0704 00:13:12.338180   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.338192   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:12.338199   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:12.338260   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:12.377003   62670 cri.go:89] found id: ""
	I0704 00:13:12.377029   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.377040   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:12.377046   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:12.377095   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:12.412239   62670 cri.go:89] found id: ""
	I0704 00:13:12.412268   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.412278   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:12.412285   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:12.412361   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:12.451054   62670 cri.go:89] found id: ""
	I0704 00:13:12.451079   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.451086   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:12.451094   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:12.451111   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:12.506178   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:12.506216   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:12.520563   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:12.520594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:12.594417   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:12.594439   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:12.594455   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:12.671131   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:12.671179   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:10.323629   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:12.823056   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:11.161399   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:13.162943   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:15.661962   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:13.749962   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:16.247931   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:15.225840   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:15.239346   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:15.239420   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:15.276618   62670 cri.go:89] found id: ""
	I0704 00:13:15.276649   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.276661   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:15.276668   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:15.276751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:15.312585   62670 cri.go:89] found id: ""
	I0704 00:13:15.312615   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.312625   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:15.312632   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:15.312693   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:15.351354   62670 cri.go:89] found id: ""
	I0704 00:13:15.351382   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.351392   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:15.351399   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:15.351457   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:15.388660   62670 cri.go:89] found id: ""
	I0704 00:13:15.388690   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.388701   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:15.388708   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:15.388769   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:15.427524   62670 cri.go:89] found id: ""
	I0704 00:13:15.427553   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.427564   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:15.427572   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:15.427636   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:15.463703   62670 cri.go:89] found id: ""
	I0704 00:13:15.463737   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.463752   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:15.463761   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:15.463825   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:15.498640   62670 cri.go:89] found id: ""
	I0704 00:13:15.498664   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.498672   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:15.498676   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:15.498727   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:15.534655   62670 cri.go:89] found id: ""
	I0704 00:13:15.534679   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.534690   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:15.534700   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:15.534715   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:15.586051   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:15.586083   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:15.600930   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:15.600958   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:15.670393   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:15.670420   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:15.670435   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:15.749644   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:15.749678   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:15.324591   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:17.822616   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:17.662630   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:20.162230   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:18.746045   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:20.746946   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:18.298689   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:18.312408   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:18.312475   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:18.353509   62670 cri.go:89] found id: ""
	I0704 00:13:18.353538   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.353549   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:18.353557   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:18.353642   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:18.394463   62670 cri.go:89] found id: ""
	I0704 00:13:18.394486   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.394493   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:18.394498   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:18.394550   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:18.433254   62670 cri.go:89] found id: ""
	I0704 00:13:18.433288   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.433297   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:18.433303   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:18.433350   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:18.473369   62670 cri.go:89] found id: ""
	I0704 00:13:18.473395   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.473404   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:18.473414   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:18.473464   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:18.513401   62670 cri.go:89] found id: ""
	I0704 00:13:18.513436   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.513444   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:18.513450   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:18.513499   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:18.552462   62670 cri.go:89] found id: ""
	I0704 00:13:18.552493   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.552502   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:18.552511   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:18.552569   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:18.591368   62670 cri.go:89] found id: ""
	I0704 00:13:18.591389   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.591398   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:18.591406   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:18.591471   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:18.630381   62670 cri.go:89] found id: ""
	I0704 00:13:18.630413   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.630424   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:18.630435   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:18.630451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:18.684868   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:18.684902   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:18.700897   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:18.700921   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:18.794507   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:18.794524   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:18.794535   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:18.879415   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:18.879457   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:21.429432   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:21.443906   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:21.443978   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:21.482487   62670 cri.go:89] found id: ""
	I0704 00:13:21.482516   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.482528   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:21.482535   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:21.482583   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:21.519170   62670 cri.go:89] found id: ""
	I0704 00:13:21.519206   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.519214   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:21.519219   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:21.519265   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:21.558340   62670 cri.go:89] found id: ""
	I0704 00:13:21.558367   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.558390   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:21.558397   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:21.558465   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:21.595347   62670 cri.go:89] found id: ""
	I0704 00:13:21.595372   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.595382   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:21.595390   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:21.595464   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:21.634524   62670 cri.go:89] found id: ""
	I0704 00:13:21.634547   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.634555   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:21.634560   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:21.634622   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:21.672529   62670 cri.go:89] found id: ""
	I0704 00:13:21.672558   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.672566   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:21.672571   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:21.672617   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:21.711114   62670 cri.go:89] found id: ""
	I0704 00:13:21.711145   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.711156   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:21.711163   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:21.711248   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:21.747087   62670 cri.go:89] found id: ""
	I0704 00:13:21.747126   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.747135   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:21.747145   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:21.747162   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:21.832897   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:21.832919   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:21.832935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:21.915969   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:21.916008   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:21.957922   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:21.957950   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:22.009881   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:22.009925   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:19.823109   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:22.322313   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:22.163190   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:24.664612   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:22.747918   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:25.245707   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:24.526106   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:24.548431   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:24.548493   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:24.582887   62670 cri.go:89] found id: ""
	I0704 00:13:24.582925   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.582935   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:24.582940   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:24.582992   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:24.621339   62670 cri.go:89] found id: ""
	I0704 00:13:24.621365   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.621375   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:24.621380   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:24.621433   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:24.658124   62670 cri.go:89] found id: ""
	I0704 00:13:24.658152   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.658163   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:24.658170   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:24.658239   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:24.697509   62670 cri.go:89] found id: ""
	I0704 00:13:24.697539   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.697546   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:24.697552   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:24.697599   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:24.734523   62670 cri.go:89] found id: ""
	I0704 00:13:24.734547   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.734554   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:24.734560   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:24.734608   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:24.773351   62670 cri.go:89] found id: ""
	I0704 00:13:24.773375   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.773383   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:24.773389   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:24.773439   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:24.810855   62670 cri.go:89] found id: ""
	I0704 00:13:24.810888   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.810898   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:24.810905   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:24.810962   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:24.849989   62670 cri.go:89] found id: ""
	I0704 00:13:24.850017   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.850027   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:24.850039   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:24.850053   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:24.904308   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:24.904344   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:24.920143   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:24.920234   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:24.995138   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:24.995163   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:24.995177   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:25.070407   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:25.070449   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:27.611749   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:27.625292   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:27.625349   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:27.663239   62670 cri.go:89] found id: ""
	I0704 00:13:27.663263   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.663274   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:27.663281   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:27.663337   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:27.704354   62670 cri.go:89] found id: ""
	I0704 00:13:27.704378   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.704392   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:27.704399   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:27.704473   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:27.742585   62670 cri.go:89] found id: ""
	I0704 00:13:27.742619   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.742630   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:27.742637   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:27.742695   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:27.791650   62670 cri.go:89] found id: ""
	I0704 00:13:27.791678   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.791686   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:27.791691   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:27.791751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:27.832724   62670 cri.go:89] found id: ""
	I0704 00:13:27.832757   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.832770   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:27.832778   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:27.832865   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:27.875054   62670 cri.go:89] found id: ""
	I0704 00:13:27.875081   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.875089   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:27.875095   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:27.875142   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:27.909819   62670 cri.go:89] found id: ""
	I0704 00:13:27.909844   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.909851   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:27.909856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:27.909903   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:27.944882   62670 cri.go:89] found id: ""
	I0704 00:13:27.944907   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.944916   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:27.944923   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:27.944936   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:28.004233   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:28.004271   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:28.020800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:28.020834   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:28.096186   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:28.096213   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:28.096231   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:28.178611   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:28.178648   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:24.322656   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:26.323972   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:28.821944   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:27.161806   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:29.661580   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:27.748284   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:30.246840   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:30.729354   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:30.744298   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:30.744361   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:30.783053   62670 cri.go:89] found id: ""
	I0704 00:13:30.783081   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.783089   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:30.783095   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:30.783151   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:30.820728   62670 cri.go:89] found id: ""
	I0704 00:13:30.820756   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.820765   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:30.820770   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:30.820834   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:30.858188   62670 cri.go:89] found id: ""
	I0704 00:13:30.858221   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.858234   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:30.858242   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:30.858307   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:30.899024   62670 cri.go:89] found id: ""
	I0704 00:13:30.899049   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.899056   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:30.899062   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:30.899109   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:30.942431   62670 cri.go:89] found id: ""
	I0704 00:13:30.942461   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.942471   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:30.942479   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:30.942534   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:30.995371   62670 cri.go:89] found id: ""
	I0704 00:13:30.995402   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.995417   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:30.995425   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:30.995486   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:31.043485   62670 cri.go:89] found id: ""
	I0704 00:13:31.043516   62670 logs.go:276] 0 containers: []
	W0704 00:13:31.043524   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:31.043529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:31.043576   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:31.082408   62670 cri.go:89] found id: ""
	I0704 00:13:31.082440   62670 logs.go:276] 0 containers: []
	W0704 00:13:31.082451   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:31.082463   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:31.082477   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:31.096800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:31.096824   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:31.169116   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:31.169142   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:31.169168   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:31.250199   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:31.250230   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:31.293706   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:31.293737   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:30.822968   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:33.322607   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:31.661811   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:33.661872   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:35.662906   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:32.746786   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:35.246989   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:33.845361   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:33.859495   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:33.859586   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:33.900578   62670 cri.go:89] found id: ""
	I0704 00:13:33.900608   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.900616   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:33.900621   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:33.900668   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:33.934659   62670 cri.go:89] found id: ""
	I0704 00:13:33.934681   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.934688   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:33.934699   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:33.934745   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:33.977141   62670 cri.go:89] found id: ""
	I0704 00:13:33.977166   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.977174   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:33.977179   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:33.977230   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:34.013515   62670 cri.go:89] found id: ""
	I0704 00:13:34.013540   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.013548   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:34.013553   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:34.013600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:34.059663   62670 cri.go:89] found id: ""
	I0704 00:13:34.059690   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.059698   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:34.059703   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:34.059765   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:34.094002   62670 cri.go:89] found id: ""
	I0704 00:13:34.094030   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.094038   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:34.094044   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:34.094090   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:34.130278   62670 cri.go:89] found id: ""
	I0704 00:13:34.130310   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.130322   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:34.130330   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:34.130401   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:34.173531   62670 cri.go:89] found id: ""
	I0704 00:13:34.173557   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.173563   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:34.173570   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:34.173582   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:34.229273   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:34.229334   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:34.247043   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:34.247073   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:34.322892   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:34.322920   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:34.322935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:34.409230   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:34.409271   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:36.950627   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:36.969997   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:36.970063   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:37.027934   62670 cri.go:89] found id: ""
	I0704 00:13:37.027964   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.027975   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:37.027982   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:37.028069   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:37.067668   62670 cri.go:89] found id: ""
	I0704 00:13:37.067696   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.067706   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:37.067713   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:37.067774   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:37.104762   62670 cri.go:89] found id: ""
	I0704 00:13:37.104798   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.104806   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:37.104812   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:37.104882   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:37.143887   62670 cri.go:89] found id: ""
	I0704 00:13:37.143913   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.143921   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:37.143936   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:37.143999   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:37.182605   62670 cri.go:89] found id: ""
	I0704 00:13:37.182629   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.182636   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:37.182641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:37.182697   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:37.219884   62670 cri.go:89] found id: ""
	I0704 00:13:37.219914   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.219924   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:37.219931   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:37.219996   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:37.259122   62670 cri.go:89] found id: ""
	I0704 00:13:37.259146   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.259154   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:37.259159   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:37.259205   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:37.296218   62670 cri.go:89] found id: ""
	I0704 00:13:37.296255   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.296262   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:37.296270   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:37.296282   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:37.349495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:37.349540   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:37.364224   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:37.364255   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:37.437604   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:37.437627   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:37.437644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:37.524096   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:37.524150   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:35.823323   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:38.323653   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:38.164076   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:40.662318   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:37.745470   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:39.746119   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:41.747887   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:40.067394   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:40.081728   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:40.081787   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:40.119102   62670 cri.go:89] found id: ""
	I0704 00:13:40.119129   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.119137   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:40.119142   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:40.119195   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:40.161432   62670 cri.go:89] found id: ""
	I0704 00:13:40.161468   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.161477   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:40.161483   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:40.161542   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:40.196487   62670 cri.go:89] found id: ""
	I0704 00:13:40.196526   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.196534   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:40.196540   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:40.196591   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:40.232218   62670 cri.go:89] found id: ""
	I0704 00:13:40.232245   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.232253   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:40.232259   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:40.232306   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:40.272962   62670 cri.go:89] found id: ""
	I0704 00:13:40.272995   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.273007   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:40.273016   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:40.273079   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:40.311622   62670 cri.go:89] found id: ""
	I0704 00:13:40.311651   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.311662   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:40.311671   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:40.311737   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:40.353486   62670 cri.go:89] found id: ""
	I0704 00:13:40.353516   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.353524   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:40.353529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:40.353576   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:40.391269   62670 cri.go:89] found id: ""
	I0704 00:13:40.391299   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.391308   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:40.391318   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:40.391330   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:40.445011   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:40.445048   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:40.458982   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:40.459010   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:40.533102   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:40.533127   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:40.533140   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:40.618189   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:40.618228   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:43.162352   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:43.177336   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:43.177419   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:43.221099   62670 cri.go:89] found id: ""
	I0704 00:13:43.221127   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.221137   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:43.221144   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:43.221211   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:40.324554   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:42.822608   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:42.662723   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:45.162037   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:44.245991   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:46.746635   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:43.268528   62670 cri.go:89] found id: ""
	I0704 00:13:43.268557   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.268568   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:43.268575   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:43.268638   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:43.304343   62670 cri.go:89] found id: ""
	I0704 00:13:43.304373   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.304384   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:43.304391   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:43.304459   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:43.346128   62670 cri.go:89] found id: ""
	I0704 00:13:43.346163   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.346179   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:43.346187   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:43.346251   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:43.392622   62670 cri.go:89] found id: ""
	I0704 00:13:43.392652   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.392662   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:43.392673   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:43.392764   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:43.438725   62670 cri.go:89] found id: ""
	I0704 00:13:43.438751   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.438760   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:43.438766   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:43.438812   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:43.480356   62670 cri.go:89] found id: ""
	I0704 00:13:43.480378   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.480386   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:43.480391   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:43.480441   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:43.516551   62670 cri.go:89] found id: ""
	I0704 00:13:43.516576   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.516583   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:43.516591   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:43.516606   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:43.567568   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:43.567604   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:43.583140   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:43.583173   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:43.658841   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:43.658870   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:43.658885   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:43.737379   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:43.737419   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:46.281048   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:46.295088   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:46.295158   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:46.333107   62670 cri.go:89] found id: ""
	I0704 00:13:46.333135   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.333168   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:46.333177   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:46.333263   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:46.376375   62670 cri.go:89] found id: ""
	I0704 00:13:46.376405   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.376415   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:46.376423   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:46.376486   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:46.410809   62670 cri.go:89] found id: ""
	I0704 00:13:46.410838   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.410848   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:46.410855   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:46.410911   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:46.453114   62670 cri.go:89] found id: ""
	I0704 00:13:46.453143   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.453156   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:46.453164   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:46.453229   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:46.491218   62670 cri.go:89] found id: ""
	I0704 00:13:46.491246   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.491255   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:46.491261   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:46.491320   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:46.528669   62670 cri.go:89] found id: ""
	I0704 00:13:46.528695   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.528706   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:46.528713   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:46.528777   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:46.564289   62670 cri.go:89] found id: ""
	I0704 00:13:46.564317   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.564327   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:46.564333   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:46.564384   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:46.600821   62670 cri.go:89] found id: ""
	I0704 00:13:46.600854   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.600864   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:46.600875   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:46.600888   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:46.653816   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:46.653850   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:46.668899   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:46.668927   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:46.751414   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:46.751434   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:46.751455   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:46.831455   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:46.831489   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:44.823478   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:47.323726   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:47.663375   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:50.162358   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:49.245272   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:51.745945   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:49.378856   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:49.393930   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:49.393988   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:49.435332   62670 cri.go:89] found id: ""
	I0704 00:13:49.435355   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.435362   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:49.435368   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:49.435415   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:49.476780   62670 cri.go:89] found id: ""
	I0704 00:13:49.476807   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.476815   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:49.476820   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:49.476868   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:49.519347   62670 cri.go:89] found id: ""
	I0704 00:13:49.519379   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.519389   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:49.519396   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:49.519522   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:49.557125   62670 cri.go:89] found id: ""
	I0704 00:13:49.557150   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.557159   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:49.557166   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:49.557225   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:49.592843   62670 cri.go:89] found id: ""
	I0704 00:13:49.592883   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.592894   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:49.592901   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:49.592966   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:49.629542   62670 cri.go:89] found id: ""
	I0704 00:13:49.629565   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.629572   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:49.629578   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:49.629630   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:49.667805   62670 cri.go:89] found id: ""
	I0704 00:13:49.667833   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.667844   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:49.667851   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:49.667928   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:49.704446   62670 cri.go:89] found id: ""
	I0704 00:13:49.704472   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.704480   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:49.704494   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:49.704506   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:49.718379   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:49.718403   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:49.791293   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:49.791314   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:49.791329   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:49.870370   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:49.870408   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:49.910508   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:49.910545   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:52.463614   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:52.478642   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:52.478714   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:52.519490   62670 cri.go:89] found id: ""
	I0704 00:13:52.519519   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.519529   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:52.519535   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:52.519686   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:52.561591   62670 cri.go:89] found id: ""
	I0704 00:13:52.561622   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.561632   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:52.561639   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:52.561713   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:52.599169   62670 cri.go:89] found id: ""
	I0704 00:13:52.599196   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.599206   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:52.599212   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:52.599270   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:52.636778   62670 cri.go:89] found id: ""
	I0704 00:13:52.636811   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.636821   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:52.636828   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:52.636893   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:52.675929   62670 cri.go:89] found id: ""
	I0704 00:13:52.675965   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.675977   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:52.675985   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:52.676081   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:52.713425   62670 cri.go:89] found id: ""
	I0704 00:13:52.713455   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.713466   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:52.713474   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:52.713541   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:52.750242   62670 cri.go:89] found id: ""
	I0704 00:13:52.750267   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.750278   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:52.750286   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:52.750342   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:52.793247   62670 cri.go:89] found id: ""
	I0704 00:13:52.793277   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.793288   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:52.793298   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:52.793315   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:52.807818   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:52.807970   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:52.886856   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:52.886883   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:52.886903   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:52.973510   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:52.973551   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:53.021185   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:53.021213   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:49.825304   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:52.322850   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:52.662484   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:54.662645   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:54.246942   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:56.745800   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:55.576364   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:55.590796   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:55.590858   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:55.628753   62670 cri.go:89] found id: ""
	I0704 00:13:55.628783   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.628793   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:55.628809   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:55.628870   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:55.667344   62670 cri.go:89] found id: ""
	I0704 00:13:55.667398   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.667411   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:55.667426   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:55.667496   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:55.705826   62670 cri.go:89] found id: ""
	I0704 00:13:55.705859   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.705870   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:55.705878   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:55.705942   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:55.743204   62670 cri.go:89] found id: ""
	I0704 00:13:55.743231   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.743238   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:55.743244   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:55.743304   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:55.784945   62670 cri.go:89] found id: ""
	I0704 00:13:55.784978   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.784987   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:55.784993   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:55.785044   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:55.825266   62670 cri.go:89] found id: ""
	I0704 00:13:55.825293   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.825304   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:55.825322   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:55.825385   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:55.862235   62670 cri.go:89] found id: ""
	I0704 00:13:55.862269   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.862276   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:55.862282   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:55.862337   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:55.901698   62670 cri.go:89] found id: ""
	I0704 00:13:55.901726   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.901736   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:55.901747   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:55.901762   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:55.955322   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:55.955361   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:55.973650   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:55.973689   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:56.049600   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:56.049624   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:56.049640   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:56.133690   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:56.133731   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:54.323716   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:56.324427   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:58.823837   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:56.663246   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:59.161652   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:59.249339   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:01.747759   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:58.678014   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:58.692780   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:58.692846   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:58.730628   62670 cri.go:89] found id: ""
	I0704 00:13:58.730654   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.730664   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:58.730671   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:58.730732   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:58.772761   62670 cri.go:89] found id: ""
	I0704 00:13:58.772789   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.772800   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:58.772806   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:58.772871   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:58.809591   62670 cri.go:89] found id: ""
	I0704 00:13:58.809623   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.809637   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:58.809644   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:58.809708   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:58.848596   62670 cri.go:89] found id: ""
	I0704 00:13:58.848627   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.848638   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:58.848646   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:58.848705   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:58.888285   62670 cri.go:89] found id: ""
	I0704 00:13:58.888311   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.888318   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:58.888323   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:58.888373   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:58.924042   62670 cri.go:89] found id: ""
	I0704 00:13:58.924065   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.924073   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:58.924079   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:58.924132   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:58.963473   62670 cri.go:89] found id: ""
	I0704 00:13:58.963500   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.963510   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:58.963516   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:58.963581   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:58.998757   62670 cri.go:89] found id: ""
	I0704 00:13:58.998788   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.998798   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:58.998808   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:58.998822   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:59.013844   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:59.013871   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:59.085847   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:59.085869   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:59.085882   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:59.174056   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:59.174087   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:59.219984   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:59.220011   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:01.774436   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:01.790044   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:01.790103   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:01.830337   62670 cri.go:89] found id: ""
	I0704 00:14:01.830366   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.830376   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:01.830383   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:01.830452   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:01.866704   62670 cri.go:89] found id: ""
	I0704 00:14:01.866731   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.866740   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:01.866746   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:01.866796   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:01.906702   62670 cri.go:89] found id: ""
	I0704 00:14:01.906737   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.906748   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:01.906756   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:01.906812   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:01.943348   62670 cri.go:89] found id: ""
	I0704 00:14:01.943381   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.943392   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:01.943400   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:01.943461   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:01.984096   62670 cri.go:89] found id: ""
	I0704 00:14:01.984123   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.984131   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:01.984136   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:01.984182   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:02.021618   62670 cri.go:89] found id: ""
	I0704 00:14:02.021649   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.021659   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:02.021666   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:02.021726   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:02.058976   62670 cri.go:89] found id: ""
	I0704 00:14:02.059000   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.059008   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:02.059013   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:02.059064   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:02.097222   62670 cri.go:89] found id: ""
	I0704 00:14:02.097251   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.097258   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:02.097278   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:02.097302   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:02.183349   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:02.183391   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:02.226898   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:02.226928   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:02.286978   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:02.287016   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:02.301361   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:02.301393   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:02.375663   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:01.322516   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:03.822514   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:01.662003   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:03.665021   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:04.245713   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:06.246308   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:04.876515   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:04.891254   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:04.891324   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:04.931465   62670 cri.go:89] found id: ""
	I0704 00:14:04.931488   62670 logs.go:276] 0 containers: []
	W0704 00:14:04.931496   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:04.931501   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:04.931549   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:04.969027   62670 cri.go:89] found id: ""
	I0704 00:14:04.969055   62670 logs.go:276] 0 containers: []
	W0704 00:14:04.969063   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:04.969068   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:04.969122   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:05.006380   62670 cri.go:89] found id: ""
	I0704 00:14:05.006407   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.006423   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:05.006430   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:05.006494   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:05.043050   62670 cri.go:89] found id: ""
	I0704 00:14:05.043090   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.043105   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:05.043113   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:05.043195   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:05.081549   62670 cri.go:89] found id: ""
	I0704 00:14:05.081575   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.081583   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:05.081588   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:05.081664   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:05.126673   62670 cri.go:89] found id: ""
	I0704 00:14:05.126693   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.126700   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:05.126706   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:05.126751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:05.166832   62670 cri.go:89] found id: ""
	I0704 00:14:05.166856   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.166864   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:05.166872   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:05.166920   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:05.205906   62670 cri.go:89] found id: ""
	I0704 00:14:05.205934   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.205946   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:05.205957   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:05.205973   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:05.260955   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:05.260998   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:05.295937   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:05.295965   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:05.383161   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:05.383188   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:05.383202   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:05.465055   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:05.465100   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:08.007745   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:08.021065   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:08.021134   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:08.061808   62670 cri.go:89] found id: ""
	I0704 00:14:08.061838   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.061848   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:08.061854   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:08.061914   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:08.100542   62670 cri.go:89] found id: ""
	I0704 00:14:08.100573   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.100584   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:08.100592   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:08.100657   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:08.137335   62670 cri.go:89] found id: ""
	I0704 00:14:08.137369   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.137379   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:08.137385   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:08.137455   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:08.177087   62670 cri.go:89] found id: ""
	I0704 00:14:08.177116   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.177124   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:08.177129   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:08.177191   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:08.212652   62670 cri.go:89] found id: ""
	I0704 00:14:08.212686   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.212695   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:08.212701   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:08.212751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:08.247717   62670 cri.go:89] found id: ""
	I0704 00:14:08.247737   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.247745   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:08.247750   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:08.247805   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:05.824730   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.323006   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:06.160967   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.162407   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:10.163649   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.247565   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:10.745585   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.285525   62670 cri.go:89] found id: ""
	I0704 00:14:08.285556   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.285568   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:08.285576   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:08.285637   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:08.325978   62670 cri.go:89] found id: ""
	I0704 00:14:08.326007   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.326017   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:08.326027   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:08.326042   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:08.382407   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:08.382440   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:08.397945   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:08.397979   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:08.468650   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:08.468676   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:08.468691   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:08.543581   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:08.543615   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:11.085683   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:11.102003   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:11.102093   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:11.142561   62670 cri.go:89] found id: ""
	I0704 00:14:11.142589   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.142597   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:11.142602   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:11.142671   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:11.180087   62670 cri.go:89] found id: ""
	I0704 00:14:11.180110   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.180118   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:11.180123   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:11.180202   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:11.220123   62670 cri.go:89] found id: ""
	I0704 00:14:11.220147   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.220173   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:11.220182   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:11.220239   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:11.260418   62670 cri.go:89] found id: ""
	I0704 00:14:11.260445   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.260455   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:11.260462   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:11.260521   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:11.297923   62670 cri.go:89] found id: ""
	I0704 00:14:11.297976   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.297989   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:11.297999   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:11.298083   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:11.335903   62670 cri.go:89] found id: ""
	I0704 00:14:11.335934   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.335945   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:11.335954   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:11.336020   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:11.371965   62670 cri.go:89] found id: ""
	I0704 00:14:11.371997   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.372007   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:11.372013   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:11.372075   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:11.409129   62670 cri.go:89] found id: ""
	I0704 00:14:11.409159   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.409170   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:11.409181   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:11.409194   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:11.464994   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:11.465032   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:11.480084   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:11.480112   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:11.564533   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:11.564560   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:11.564574   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:11.645033   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:11.645068   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:10.323124   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:12.323251   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:12.663774   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:15.161542   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:12.746307   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:15.246158   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:14.195211   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:14.209606   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:14.209660   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:14.252041   62670 cri.go:89] found id: ""
	I0704 00:14:14.252066   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.252081   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:14.252089   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:14.252149   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:14.290619   62670 cri.go:89] found id: ""
	I0704 00:14:14.290647   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.290655   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:14.290660   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:14.290717   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:14.328731   62670 cri.go:89] found id: ""
	I0704 00:14:14.328762   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.328773   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:14.328780   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:14.328842   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:14.370794   62670 cri.go:89] found id: ""
	I0704 00:14:14.370825   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.370835   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:14.370842   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:14.370904   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:14.406474   62670 cri.go:89] found id: ""
	I0704 00:14:14.406505   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.406516   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:14.406523   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:14.406582   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:14.442515   62670 cri.go:89] found id: ""
	I0704 00:14:14.442547   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.442558   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:14.442566   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:14.442624   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:14.480798   62670 cri.go:89] found id: ""
	I0704 00:14:14.480827   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.480838   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:14.480844   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:14.480904   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:14.518187   62670 cri.go:89] found id: ""
	I0704 00:14:14.518210   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.518217   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:14.518225   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:14.518236   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:14.572028   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:14.572060   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:14.586614   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:14.586641   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:14.659339   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:14.659362   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:14.659375   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:14.743802   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:14.743839   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:17.288666   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:17.304531   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:17.304600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:17.348705   62670 cri.go:89] found id: ""
	I0704 00:14:17.348730   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.348738   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:17.348749   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:17.348798   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:17.387821   62670 cri.go:89] found id: ""
	I0704 00:14:17.387844   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.387852   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:17.387858   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:17.387934   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:17.425442   62670 cri.go:89] found id: ""
	I0704 00:14:17.425470   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.425480   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:17.425487   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:17.425545   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:17.471216   62670 cri.go:89] found id: ""
	I0704 00:14:17.471243   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.471255   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:17.471262   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:17.471321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:17.520905   62670 cri.go:89] found id: ""
	I0704 00:14:17.520935   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.520942   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:17.520947   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:17.520997   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:17.577627   62670 cri.go:89] found id: ""
	I0704 00:14:17.577648   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.577655   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:17.577661   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:17.577715   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:17.619018   62670 cri.go:89] found id: ""
	I0704 00:14:17.619046   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.619054   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:17.619061   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:17.619124   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:17.664993   62670 cri.go:89] found id: ""
	I0704 00:14:17.665020   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.665029   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:17.665037   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:17.665049   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:17.743823   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:17.743845   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:17.743857   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:17.821339   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:17.821371   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:17.866189   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:17.866226   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:17.919854   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:17.919903   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:14.823677   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:16.825187   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:17.662772   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:20.161988   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:17.748067   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:20.245022   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:22.246620   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:20.435448   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:20.450556   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:20.450617   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:20.491980   62670 cri.go:89] found id: ""
	I0704 00:14:20.492010   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.492018   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:20.492023   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:20.492071   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:20.532791   62670 cri.go:89] found id: ""
	I0704 00:14:20.532820   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.532829   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:20.532836   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:20.532892   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:20.569604   62670 cri.go:89] found id: ""
	I0704 00:14:20.569628   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.569635   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:20.569641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:20.569688   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:20.610852   62670 cri.go:89] found id: ""
	I0704 00:14:20.610879   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.610887   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:20.610893   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:20.610950   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:20.648891   62670 cri.go:89] found id: ""
	I0704 00:14:20.648912   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.648920   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:20.648925   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:20.648984   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:20.690273   62670 cri.go:89] found id: ""
	I0704 00:14:20.690304   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.690315   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:20.690323   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:20.690381   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:20.725365   62670 cri.go:89] found id: ""
	I0704 00:14:20.725390   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.725398   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:20.725403   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:20.725478   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:20.768530   62670 cri.go:89] found id: ""
	I0704 00:14:20.768559   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.768569   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:20.768579   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:20.768593   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:20.822896   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:20.822932   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:20.838881   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:20.838912   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:20.921516   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:20.921546   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:20.921560   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:20.999517   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:20.999553   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:19.324790   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:21.822737   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:23.823039   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:22.162348   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:24.162631   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:24.745842   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:27.245280   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:23.545947   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:23.560315   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:23.560397   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:23.602540   62670 cri.go:89] found id: ""
	I0704 00:14:23.602583   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.602596   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:23.602604   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:23.602664   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:23.639529   62670 cri.go:89] found id: ""
	I0704 00:14:23.639560   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.639571   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:23.639579   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:23.639644   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:23.687334   62670 cri.go:89] found id: ""
	I0704 00:14:23.687363   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.687374   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:23.687381   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:23.687450   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:23.728388   62670 cri.go:89] found id: ""
	I0704 00:14:23.728419   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.728427   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:23.728434   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:23.728484   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:23.769903   62670 cri.go:89] found id: ""
	I0704 00:14:23.769933   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.769944   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:23.769956   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:23.770016   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:23.810485   62670 cri.go:89] found id: ""
	I0704 00:14:23.810518   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.810529   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:23.810536   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:23.810621   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:23.854534   62670 cri.go:89] found id: ""
	I0704 00:14:23.854571   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.854582   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:23.854589   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:23.854647   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:23.892229   62670 cri.go:89] found id: ""
	I0704 00:14:23.892257   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.892266   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:23.892278   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:23.892291   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:23.944758   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:23.944793   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:23.959115   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:23.959152   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:24.035480   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:24.035501   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:24.035513   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:24.113401   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:24.113447   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:26.655506   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:26.669883   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:26.669964   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:26.705899   62670 cri.go:89] found id: ""
	I0704 00:14:26.705926   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.705934   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:26.705940   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:26.705997   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:26.742991   62670 cri.go:89] found id: ""
	I0704 00:14:26.743016   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.743025   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:26.743031   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:26.743090   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:26.781650   62670 cri.go:89] found id: ""
	I0704 00:14:26.781678   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.781693   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:26.781700   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:26.781760   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:26.816879   62670 cri.go:89] found id: ""
	I0704 00:14:26.816902   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.816909   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:26.816914   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:26.816957   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:26.854271   62670 cri.go:89] found id: ""
	I0704 00:14:26.854301   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.854316   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:26.854324   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:26.854384   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:26.892771   62670 cri.go:89] found id: ""
	I0704 00:14:26.892802   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.892813   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:26.892821   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:26.892880   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:26.931820   62670 cri.go:89] found id: ""
	I0704 00:14:26.931849   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.931859   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:26.931865   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:26.931947   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:26.967633   62670 cri.go:89] found id: ""
	I0704 00:14:26.967659   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.967669   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:26.967679   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:26.967700   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:26.983916   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:26.983951   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:27.063412   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:27.063436   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:27.063451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:27.147005   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:27.147044   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:27.189732   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:27.189759   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:25.824267   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:27.826810   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:26.662688   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:28.663384   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:29.248447   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:31.745919   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:29.747294   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:29.762194   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:29.762272   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:29.799103   62670 cri.go:89] found id: ""
	I0704 00:14:29.799132   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.799142   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:29.799149   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:29.799215   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:29.843373   62670 cri.go:89] found id: ""
	I0704 00:14:29.843399   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.843407   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:29.843412   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:29.843474   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:29.880622   62670 cri.go:89] found id: ""
	I0704 00:14:29.880650   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.880660   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:29.880667   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:29.880724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:29.917560   62670 cri.go:89] found id: ""
	I0704 00:14:29.917590   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.917599   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:29.917605   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:29.917656   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:29.954983   62670 cri.go:89] found id: ""
	I0704 00:14:29.955006   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.955013   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:29.955018   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:29.955068   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:29.991784   62670 cri.go:89] found id: ""
	I0704 00:14:29.991811   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.991819   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:29.991824   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:29.991870   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:30.031174   62670 cri.go:89] found id: ""
	I0704 00:14:30.031203   62670 logs.go:276] 0 containers: []
	W0704 00:14:30.031210   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:30.031218   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:30.031268   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:30.069502   62670 cri.go:89] found id: ""
	I0704 00:14:30.069533   62670 logs.go:276] 0 containers: []
	W0704 00:14:30.069542   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:30.069552   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:30.069567   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:30.111185   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:30.111213   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:30.167419   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:30.167456   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:30.181876   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:30.181908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:30.255378   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:30.255407   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:30.255426   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:32.837655   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:32.853085   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:32.853150   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:32.898490   62670 cri.go:89] found id: ""
	I0704 00:14:32.898520   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.898531   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:32.898540   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:32.898626   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:32.946293   62670 cri.go:89] found id: ""
	I0704 00:14:32.946326   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.946336   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:32.946343   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:32.946402   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:32.983499   62670 cri.go:89] found id: ""
	I0704 00:14:32.983529   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.983540   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:32.983548   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:32.983610   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:33.022340   62670 cri.go:89] found id: ""
	I0704 00:14:33.022362   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.022370   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:33.022375   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:33.022420   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:33.066921   62670 cri.go:89] found id: ""
	I0704 00:14:33.066946   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.066956   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:33.066963   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:33.067024   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:33.116317   62670 cri.go:89] found id: ""
	I0704 00:14:33.116340   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.116348   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:33.116354   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:33.116416   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:33.153301   62670 cri.go:89] found id: ""
	I0704 00:14:33.153332   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.153343   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:33.153350   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:33.153411   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:33.190851   62670 cri.go:89] found id: ""
	I0704 00:14:33.190884   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.190896   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:33.190905   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:33.190917   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:33.248253   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:33.248288   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:30.323119   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:32.823348   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:31.161811   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:33.662270   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:34.246812   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:36.246992   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:33.263593   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:33.263620   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:33.339975   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:33.340000   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:33.340018   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:33.423768   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:33.423814   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:35.969547   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:35.984139   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:35.984219   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:36.028221   62670 cri.go:89] found id: ""
	I0704 00:14:36.028251   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.028263   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:36.028270   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:36.028330   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:36.067331   62670 cri.go:89] found id: ""
	I0704 00:14:36.067362   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.067370   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:36.067375   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:36.067437   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:36.105498   62670 cri.go:89] found id: ""
	I0704 00:14:36.105531   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.105543   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:36.105552   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:36.105618   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:36.144536   62670 cri.go:89] found id: ""
	I0704 00:14:36.144565   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.144576   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:36.144584   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:36.144652   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:36.184010   62670 cri.go:89] found id: ""
	I0704 00:14:36.184035   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.184048   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:36.184053   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:36.184099   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:36.221730   62670 cri.go:89] found id: ""
	I0704 00:14:36.221781   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.221790   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:36.221795   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:36.221843   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:36.261907   62670 cri.go:89] found id: ""
	I0704 00:14:36.261940   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.261952   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:36.261959   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:36.262009   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:36.296878   62670 cri.go:89] found id: ""
	I0704 00:14:36.296899   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.296906   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:36.296915   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:36.296926   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:36.350226   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:36.350265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:36.364632   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:36.364663   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:36.446351   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:36.446382   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:36.446400   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:36.535752   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:36.535802   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:35.322895   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:37.323357   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:36.166275   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:38.662345   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:38.745454   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:41.247351   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:39.079686   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:39.094225   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:39.094291   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:39.139521   62670 cri.go:89] found id: ""
	I0704 00:14:39.139551   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.139563   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:39.139572   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:39.139637   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:39.182411   62670 cri.go:89] found id: ""
	I0704 00:14:39.182439   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.182447   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:39.182453   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:39.182505   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:39.224135   62670 cri.go:89] found id: ""
	I0704 00:14:39.224158   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.224170   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:39.224175   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:39.224237   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:39.264800   62670 cri.go:89] found id: ""
	I0704 00:14:39.264829   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.264839   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:39.264847   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:39.264910   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:39.309072   62670 cri.go:89] found id: ""
	I0704 00:14:39.309102   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.309113   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:39.309121   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:39.309181   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:39.349790   62670 cri.go:89] found id: ""
	I0704 00:14:39.349818   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.349828   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:39.349835   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:39.349895   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:39.387062   62670 cri.go:89] found id: ""
	I0704 00:14:39.387093   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.387105   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:39.387112   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:39.387164   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:39.427503   62670 cri.go:89] found id: ""
	I0704 00:14:39.427530   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.427538   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:39.427546   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:39.427558   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:39.442049   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:39.442076   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:39.525799   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:39.525824   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:39.525840   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:39.602646   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:39.602679   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:39.645739   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:39.645772   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:42.201986   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:42.216166   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:42.216236   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:42.253124   62670 cri.go:89] found id: ""
	I0704 00:14:42.253152   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.253167   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:42.253174   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:42.253231   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:42.293398   62670 cri.go:89] found id: ""
	I0704 00:14:42.293422   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.293430   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:42.293436   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:42.293488   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:42.334382   62670 cri.go:89] found id: ""
	I0704 00:14:42.334412   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.334423   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:42.334430   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:42.334488   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:42.374792   62670 cri.go:89] found id: ""
	I0704 00:14:42.374820   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.374832   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:42.374838   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:42.374889   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:42.416220   62670 cri.go:89] found id: ""
	I0704 00:14:42.416251   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.416263   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:42.416271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:42.416331   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:42.462923   62670 cri.go:89] found id: ""
	I0704 00:14:42.462955   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.462966   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:42.462974   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:42.463043   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:42.503410   62670 cri.go:89] found id: ""
	I0704 00:14:42.503442   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.503452   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:42.503460   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:42.503528   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:42.542599   62670 cri.go:89] found id: ""
	I0704 00:14:42.542623   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.542632   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:42.542639   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:42.542652   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:42.622303   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:42.622328   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:42.622347   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:42.703629   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:42.703666   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:42.747762   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:42.747793   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:42.803506   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:42.803549   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:39.826275   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:42.323764   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:41.163336   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:43.662061   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:45.664452   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:43.745575   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:46.250310   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:45.320238   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:45.334630   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:45.334692   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:45.376760   62670 cri.go:89] found id: ""
	I0704 00:14:45.376785   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.376793   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:45.376797   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:45.376882   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:45.414165   62670 cri.go:89] found id: ""
	I0704 00:14:45.414197   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.414208   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:45.414216   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:45.414278   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:45.451469   62670 cri.go:89] found id: ""
	I0704 00:14:45.451496   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.451504   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:45.451509   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:45.451558   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:45.487994   62670 cri.go:89] found id: ""
	I0704 00:14:45.488025   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.488037   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:45.488051   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:45.488110   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:45.529430   62670 cri.go:89] found id: ""
	I0704 00:14:45.529455   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.529463   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:45.529469   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:45.529520   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:45.571848   62670 cri.go:89] found id: ""
	I0704 00:14:45.571897   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.571909   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:45.571921   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:45.571994   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:45.607804   62670 cri.go:89] found id: ""
	I0704 00:14:45.607828   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.607835   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:45.607840   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:45.607908   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:45.644183   62670 cri.go:89] found id: ""
	I0704 00:14:45.644211   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.644219   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:45.644227   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:45.644240   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:45.727677   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:45.727717   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:45.767528   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:45.767554   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:45.835243   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:45.835285   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:45.849921   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:45.849957   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:45.928404   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
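	(The entries above repeat minikube's apiserver probe for the old-k8s-version node, pid 62670: it looks for a kube-apiserver process, lists CRI containers for each control-plane component, finds none, and falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status output; "describe nodes" fails because nothing answers on localhost:8443. A minimal shell sketch of one probe cycle, using only commands that appear verbatim in the log — the loop itself is driven from minikube's Go code, so treat this as an illustrative approximation:
	  # check whether an apiserver process and container exist
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  # when nothing is found, collect the fallback diagnostics seen above
	  sudo journalctl -u kubelet -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	  sudo journalctl -u crio -n 400
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	)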
	I0704 00:14:44.823177   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:46.821947   62327 pod_ready.go:81] duration metric: took 4m0.006234793s for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	E0704 00:14:46.821973   62327 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0704 00:14:46.821981   62327 pod_ready.go:38] duration metric: took 4m4.549820824s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:14:46.821996   62327 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:14:46.822029   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:46.822072   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:46.884166   62327 cri.go:89] found id: "2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:46.884208   62327 cri.go:89] found id: ""
	I0704 00:14:46.884217   62327 logs.go:276] 1 containers: [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d]
	I0704 00:14:46.884293   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:46.889964   62327 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:46.890048   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:46.929569   62327 cri.go:89] found id: "e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:46.929601   62327 cri.go:89] found id: ""
	I0704 00:14:46.929609   62327 logs.go:276] 1 containers: [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864]
	I0704 00:14:46.929653   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:46.934896   62327 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:46.934969   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:46.975093   62327 cri.go:89] found id: "ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:46.975116   62327 cri.go:89] found id: ""
	I0704 00:14:46.975125   62327 logs.go:276] 1 containers: [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3]
	I0704 00:14:46.975180   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:46.979604   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:46.979663   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:47.018423   62327 cri.go:89] found id: "bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:47.018442   62327 cri.go:89] found id: ""
	I0704 00:14:47.018449   62327 logs.go:276] 1 containers: [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0]
	I0704 00:14:47.018514   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.022963   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:47.023028   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:47.067573   62327 cri.go:89] found id: "0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:47.067599   62327 cri.go:89] found id: ""
	I0704 00:14:47.067608   62327 logs.go:276] 1 containers: [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78]
	I0704 00:14:47.067657   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.072342   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:47.072426   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:47.111485   62327 cri.go:89] found id: "49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:47.111514   62327 cri.go:89] found id: ""
	I0704 00:14:47.111524   62327 logs.go:276] 1 containers: [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d]
	I0704 00:14:47.111581   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.116173   62327 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:47.116256   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:47.166673   62327 cri.go:89] found id: ""
	I0704 00:14:47.166703   62327 logs.go:276] 0 containers: []
	W0704 00:14:47.166711   62327 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:47.166717   62327 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:14:47.166771   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:14:47.209591   62327 cri.go:89] found id: "5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:47.209626   62327 cri.go:89] found id: "0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:47.209632   62327 cri.go:89] found id: ""
	I0704 00:14:47.209642   62327 logs.go:276] 2 containers: [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085]
	I0704 00:14:47.209699   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.214409   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.218745   62327 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:47.218768   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:47.762248   62327 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:47.762293   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:47.819035   62327 logs.go:123] Gathering logs for kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] ...
	I0704 00:14:47.819077   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:47.874456   62327 logs.go:123] Gathering logs for coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] ...
	I0704 00:14:47.874499   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:47.931685   62327 logs.go:123] Gathering logs for kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] ...
	I0704 00:14:47.931714   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:47.969812   62327 logs.go:123] Gathering logs for kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] ...
	I0704 00:14:47.969842   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:48.023510   62327 logs.go:123] Gathering logs for storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] ...
	I0704 00:14:48.023547   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:48.067970   62327 logs.go:123] Gathering logs for container status ...
	I0704 00:14:48.068001   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:48.121578   62327 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:48.121609   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:48.139510   62327 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:48.139535   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:14:48.264544   62327 logs.go:123] Gathering logs for etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] ...
	I0704 00:14:48.264570   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:48.329270   62327 logs.go:123] Gathering logs for kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] ...
	I0704 00:14:48.329311   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:48.371067   62327 logs.go:123] Gathering logs for storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] ...
	I0704 00:14:48.371097   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:48.162755   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:50.661630   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:48.428750   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:48.442617   62670 kubeadm.go:591] duration metric: took 4m1.823242959s to restartPrimaryControlPlane
	W0704 00:14:48.442701   62670 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0704 00:14:48.442735   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:14:51.574916   62670 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.132142314s)
	I0704 00:14:51.575001   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:14:51.593744   62670 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:14:51.607429   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:14:51.620071   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:14:51.620097   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:14:51.620151   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:14:51.633472   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:14:51.633547   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:14:51.647551   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:14:51.658795   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:14:51.658871   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:14:51.671580   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:14:51.682217   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:14:51.682291   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:14:51.693874   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:14:51.705614   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:14:51.705697   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:14:51.720386   62670 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:14:51.810530   62670 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0704 00:14:51.810597   62670 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:14:51.968629   62670 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:14:51.968735   62670 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:14:51.968851   62670 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:14:52.188159   62670 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:14:48.745609   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:50.746068   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:52.190231   62670 out.go:204]   - Generating certificates and keys ...
	I0704 00:14:52.192011   62670 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:14:52.192101   62670 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:14:52.192206   62670 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:14:52.192311   62670 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:14:52.192412   62670 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:14:52.192488   62670 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:14:52.192573   62670 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:14:52.192648   62670 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:14:52.192747   62670 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:14:52.193086   62670 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:14:52.193249   62670 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:14:52.193335   62670 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:14:52.325727   62670 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:14:52.485153   62670 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:14:52.676389   62670 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:14:52.990595   62670 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:14:53.007051   62670 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:14:53.008346   62670 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:14:53.008434   62670 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:14:53.160272   62670 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:14:53.162449   62670 out.go:204]   - Booting up control plane ...
	I0704 00:14:53.162586   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:14:53.177983   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:14:53.179996   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:14:53.180911   62670 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:14:53.183085   62670 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0704 00:14:50.909242   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:50.926516   62327 api_server.go:72] duration metric: took 4m15.870455521s to wait for apiserver process to appear ...
	I0704 00:14:50.926548   62327 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:14:50.926594   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:50.926650   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:50.969608   62327 cri.go:89] found id: "2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:50.969636   62327 cri.go:89] found id: ""
	I0704 00:14:50.969646   62327 logs.go:276] 1 containers: [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d]
	I0704 00:14:50.969711   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:50.974011   62327 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:50.974081   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:51.016808   62327 cri.go:89] found id: "e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:51.016842   62327 cri.go:89] found id: ""
	I0704 00:14:51.016858   62327 logs.go:276] 1 containers: [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864]
	I0704 00:14:51.016916   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.021297   62327 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:51.021371   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:51.061674   62327 cri.go:89] found id: "ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:51.061699   62327 cri.go:89] found id: ""
	I0704 00:14:51.061707   62327 logs.go:276] 1 containers: [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3]
	I0704 00:14:51.061761   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.066197   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:51.066256   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:51.108727   62327 cri.go:89] found id: "bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:51.108750   62327 cri.go:89] found id: ""
	I0704 00:14:51.108759   62327 logs.go:276] 1 containers: [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0]
	I0704 00:14:51.108805   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.113366   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:51.113425   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:51.156701   62327 cri.go:89] found id: "0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:51.156728   62327 cri.go:89] found id: ""
	I0704 00:14:51.156738   62327 logs.go:276] 1 containers: [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78]
	I0704 00:14:51.156803   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.162817   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:51.162891   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:51.208586   62327 cri.go:89] found id: "49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:51.208609   62327 cri.go:89] found id: ""
	I0704 00:14:51.208618   62327 logs.go:276] 1 containers: [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d]
	I0704 00:14:51.208678   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.213344   62327 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:51.213418   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:51.258697   62327 cri.go:89] found id: ""
	I0704 00:14:51.258721   62327 logs.go:276] 0 containers: []
	W0704 00:14:51.258728   62327 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:51.258733   62327 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:14:51.258783   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:14:51.301317   62327 cri.go:89] found id: "5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:51.301341   62327 cri.go:89] found id: "0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:51.301347   62327 cri.go:89] found id: ""
	I0704 00:14:51.301355   62327 logs.go:276] 2 containers: [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085]
	I0704 00:14:51.301460   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.306678   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.310993   62327 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:51.311014   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:14:51.433280   62327 logs.go:123] Gathering logs for kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] ...
	I0704 00:14:51.433313   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:51.498289   62327 logs.go:123] Gathering logs for coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] ...
	I0704 00:14:51.498325   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:51.538414   62327 logs.go:123] Gathering logs for kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] ...
	I0704 00:14:51.538449   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:51.580194   62327 logs.go:123] Gathering logs for kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] ...
	I0704 00:14:51.580232   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:51.650010   62327 logs.go:123] Gathering logs for container status ...
	I0704 00:14:51.650055   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:51.710727   62327 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:51.710768   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:51.785923   62327 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:51.785963   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:51.803951   62327 logs.go:123] Gathering logs for storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] ...
	I0704 00:14:51.803982   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:51.873020   62327 logs.go:123] Gathering logs for storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] ...
	I0704 00:14:51.873058   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:51.916694   62327 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:51.916725   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:52.378056   62327 logs.go:123] Gathering logs for etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] ...
	I0704 00:14:52.378103   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:52.436795   62327 logs.go:123] Gathering logs for kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] ...
	I0704 00:14:52.436835   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
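Each "Gathering logs for ..." pass above follows the same pattern: resolve the container ID with crictl ps, tail that container with crictl logs, and fall back to journalctl for the kubelet and CRI-O units. A hand-run equivalent inside the node (for example via "minikube ssh"), using only the commands already visible in this log:

	# Collect the same logs manually for one component (kube-apiserver here).
	ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	sudo /usr/bin/crictl logs --tail 400 "$ID"      # container logs
	sudo journalctl -u kubelet -n 400               # kubelet unit logs
	sudo journalctl -u crio -n 400                  # CRI-O unit logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400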
	I0704 00:14:52.662586   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:55.162992   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:52.746973   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:55.248126   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:54.977972   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:14:54.982697   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0704 00:14:54.983848   62327 api_server.go:141] control plane version: v1.30.2
	I0704 00:14:54.983868   62327 api_server.go:131] duration metric: took 4.057311938s to wait for apiserver health ...
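The healthz probe above queries the apiserver endpoint directly and treats an HTTP 200 with body "ok" as healthy. A manual equivalent against the address used in this run (the -k flag is an assumption, needed only because the apiserver serves cluster-internal certificates):

	# Assumed manual equivalent of the logged probe.
	curl -k https://192.168.39.213:8443/healthz
	# a healthy control plane answers: ok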
	I0704 00:14:54.983887   62327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:14:54.983920   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:54.983972   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:55.022812   62327 cri.go:89] found id: "2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:55.022839   62327 cri.go:89] found id: ""
	I0704 00:14:55.022849   62327 logs.go:276] 1 containers: [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d]
	I0704 00:14:55.022906   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.027419   62327 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:55.027508   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:55.070889   62327 cri.go:89] found id: "e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:55.070914   62327 cri.go:89] found id: ""
	I0704 00:14:55.070924   62327 logs.go:276] 1 containers: [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864]
	I0704 00:14:55.070979   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.075970   62327 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:55.076036   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:55.121555   62327 cri.go:89] found id: "ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:55.121575   62327 cri.go:89] found id: ""
	I0704 00:14:55.121583   62327 logs.go:276] 1 containers: [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3]
	I0704 00:14:55.121627   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.126320   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:55.126378   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:55.168032   62327 cri.go:89] found id: "bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:55.168062   62327 cri.go:89] found id: ""
	I0704 00:14:55.168070   62327 logs.go:276] 1 containers: [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0]
	I0704 00:14:55.168134   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.172992   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:55.173069   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:55.215593   62327 cri.go:89] found id: "0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:55.215614   62327 cri.go:89] found id: ""
	I0704 00:14:55.215621   62327 logs.go:276] 1 containers: [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78]
	I0704 00:14:55.215668   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.220129   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:55.220203   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:55.266429   62327 cri.go:89] found id: "49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:55.266458   62327 cri.go:89] found id: ""
	I0704 00:14:55.266467   62327 logs.go:276] 1 containers: [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d]
	I0704 00:14:55.266525   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.275640   62327 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:55.275706   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:55.316569   62327 cri.go:89] found id: ""
	I0704 00:14:55.316603   62327 logs.go:276] 0 containers: []
	W0704 00:14:55.316615   62327 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:55.316622   62327 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:14:55.316682   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:14:55.354222   62327 cri.go:89] found id: "5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:55.354248   62327 cri.go:89] found id: "0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:55.354252   62327 cri.go:89] found id: ""
	I0704 00:14:55.354259   62327 logs.go:276] 2 containers: [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085]
	I0704 00:14:55.354305   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.359060   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.363522   62327 logs.go:123] Gathering logs for storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] ...
	I0704 00:14:55.363545   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:55.402950   62327 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:55.402975   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:55.826071   62327 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:55.826108   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:55.882804   62327 logs.go:123] Gathering logs for storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] ...
	I0704 00:14:55.882836   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:55.924690   62327 logs.go:123] Gathering logs for kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] ...
	I0704 00:14:55.924726   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:55.981466   62327 logs.go:123] Gathering logs for etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] ...
	I0704 00:14:55.981500   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:56.043846   62327 logs.go:123] Gathering logs for coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] ...
	I0704 00:14:56.043914   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:56.085096   62327 logs.go:123] Gathering logs for kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] ...
	I0704 00:14:56.085122   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:56.127568   62327 logs.go:123] Gathering logs for kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] ...
	I0704 00:14:56.127601   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:56.169457   62327 logs.go:123] Gathering logs for kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] ...
	I0704 00:14:56.169492   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:56.224005   62327 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:56.224039   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:56.240031   62327 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:56.240059   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:14:56.366718   62327 logs.go:123] Gathering logs for container status ...
	I0704 00:14:56.366759   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:58.924300   62327 system_pods.go:59] 8 kube-system pods found
	I0704 00:14:58.924332   62327 system_pods.go:61] "coredns-7db6d8ff4d-2bn7d" [e6d756a8-df4e-414b-b44c-32fb728c6feb] Running
	I0704 00:14:58.924339   62327 system_pods.go:61] "etcd-embed-certs-687975" [72f47af0-57ca-4529-b6e4-92f543f8ada9] Running
	I0704 00:14:58.924344   62327 system_pods.go:61] "kube-apiserver-embed-certs-687975" [e58beb1b-1984-4a9f-beb1-597cca55a0c0] Running
	I0704 00:14:58.924351   62327 system_pods.go:61] "kube-controller-manager-embed-certs-687975" [bdae6f32-b454-4a66-a3d1-a22c2a64073d] Running
	I0704 00:14:58.924355   62327 system_pods.go:61] "kube-proxy-9phtm" [6b5a4c0e-632d-4c1c-bfa7-f53448618efb] Running
	I0704 00:14:58.924360   62327 system_pods.go:61] "kube-scheduler-embed-certs-687975" [72640f73-25f5-47ec-8da0-396fc31fa653] Running
	I0704 00:14:58.924369   62327 system_pods.go:61] "metrics-server-569cc877fc-jpmsg" [e2561edc-d580-461c-acae-218e6b7a2f67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:14:58.924376   62327 system_pods.go:61] "storage-provisioner" [1ac6edec-3e4e-42bd-8848-1388594611e1] Running
	I0704 00:14:58.924384   62327 system_pods.go:74] duration metric: took 3.940490235s to wait for pod list to return data ...
	I0704 00:14:58.924392   62327 default_sa.go:34] waiting for default service account to be created ...
	I0704 00:14:58.926911   62327 default_sa.go:45] found service account: "default"
	I0704 00:14:58.926930   62327 default_sa.go:55] duration metric: took 2.52887ms for default service account to be created ...
	I0704 00:14:58.926938   62327 system_pods.go:116] waiting for k8s-apps to be running ...
	I0704 00:14:58.933142   62327 system_pods.go:86] 8 kube-system pods found
	I0704 00:14:58.933173   62327 system_pods.go:89] "coredns-7db6d8ff4d-2bn7d" [e6d756a8-df4e-414b-b44c-32fb728c6feb] Running
	I0704 00:14:58.933181   62327 system_pods.go:89] "etcd-embed-certs-687975" [72f47af0-57ca-4529-b6e4-92f543f8ada9] Running
	I0704 00:14:58.933188   62327 system_pods.go:89] "kube-apiserver-embed-certs-687975" [e58beb1b-1984-4a9f-beb1-597cca55a0c0] Running
	I0704 00:14:58.933200   62327 system_pods.go:89] "kube-controller-manager-embed-certs-687975" [bdae6f32-b454-4a66-a3d1-a22c2a64073d] Running
	I0704 00:14:58.933207   62327 system_pods.go:89] "kube-proxy-9phtm" [6b5a4c0e-632d-4c1c-bfa7-f53448618efb] Running
	I0704 00:14:58.933213   62327 system_pods.go:89] "kube-scheduler-embed-certs-687975" [72640f73-25f5-47ec-8da0-396fc31fa653] Running
	I0704 00:14:58.933225   62327 system_pods.go:89] "metrics-server-569cc877fc-jpmsg" [e2561edc-d580-461c-acae-218e6b7a2f67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:14:58.933234   62327 system_pods.go:89] "storage-provisioner" [1ac6edec-3e4e-42bd-8848-1388594611e1] Running
	I0704 00:14:58.933245   62327 system_pods.go:126] duration metric: took 6.300951ms to wait for k8s-apps to be running ...
	I0704 00:14:58.933257   62327 system_svc.go:44] waiting for kubelet service to be running ....
	I0704 00:14:58.933302   62327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:14:58.948861   62327 system_svc.go:56] duration metric: took 15.596446ms WaitForService to wait for kubelet
	I0704 00:14:58.948885   62327 kubeadm.go:576] duration metric: took 4m23.892830394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:14:58.948905   62327 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:14:58.951958   62327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:14:58.951981   62327 node_conditions.go:123] node cpu capacity is 2
	I0704 00:14:58.951991   62327 node_conditions.go:105] duration metric: took 3.081821ms to run NodePressure ...
	I0704 00:14:58.952003   62327 start.go:240] waiting for startup goroutines ...
	I0704 00:14:58.952012   62327 start.go:245] waiting for cluster config update ...
	I0704 00:14:58.952026   62327 start.go:254] writing updated cluster config ...
	I0704 00:14:58.952305   62327 ssh_runner.go:195] Run: rm -f paused
	I0704 00:14:59.001106   62327 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0704 00:14:59.003224   62327 out.go:177] * Done! kubectl is now configured to use "embed-certs-687975" cluster and "default" namespace by default
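Once minikube prints this, the kubeconfig context named after the profile is the active one; an illustrative follow-up check (the resource choice is arbitrary):

	# Illustrative: the context name is the profile reported in the log above.
	kubectl --context embed-certs-687975 get pods -A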
	I0704 00:14:57.163117   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:59.662680   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:57.746248   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:00.247122   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:02.161384   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:04.162095   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:02.745649   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:04.745980   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:07.245583   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:06.662618   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:08.665863   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:09.246591   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:11.745135   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:11.162596   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:13.163740   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:15.662576   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:13.745872   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:15.746141   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:18.161591   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:20.162965   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:18.245285   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:20.247546   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:22.662152   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:24.662781   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:22.745066   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:24.746068   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:25.247225   62905 pod_ready.go:81] duration metric: took 4m0.008398676s for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	E0704 00:15:25.247253   62905 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0704 00:15:25.247263   62905 pod_ready.go:38] duration metric: took 4m1.998567833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
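This is the 4m0s WaitExtra timeout: the metrics-server pod never reported Ready within the window, so the wait gives up and the run continues. The pod can be inspected directly with the names from this log; these commands are illustrative and were not part of the test run:

	# Illustrative inspection of the pod the wait above gave up on; the context
	# name is assumed to match the minikube profile for this cluster.
	kubectl --context default-k8s-diff-port-995404 -n kube-system get pod metrics-server-569cc877fc-v8qw2
	kubectl --context default-k8s-diff-port-995404 -n kube-system describe pod metrics-server-569cc877fc-v8qw2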
	I0704 00:15:25.247295   62905 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:15:25.247337   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:15:25.247393   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:15:25.305703   62905 cri.go:89] found id: "f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:25.305731   62905 cri.go:89] found id: ""
	I0704 00:15:25.305741   62905 logs.go:276] 1 containers: [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658]
	I0704 00:15:25.305811   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.311662   62905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:15:25.311740   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:15:25.359066   62905 cri.go:89] found id: "5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:25.359091   62905 cri.go:89] found id: ""
	I0704 00:15:25.359100   62905 logs.go:276] 1 containers: [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9]
	I0704 00:15:25.359157   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.364430   62905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:15:25.364512   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:15:25.411897   62905 cri.go:89] found id: "7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:25.411923   62905 cri.go:89] found id: ""
	I0704 00:15:25.411935   62905 logs.go:276] 1 containers: [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a]
	I0704 00:15:25.411991   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.416560   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:15:25.416629   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:15:25.457817   62905 cri.go:89] found id: "06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:25.457844   62905 cri.go:89] found id: ""
	I0704 00:15:25.457853   62905 logs.go:276] 1 containers: [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8]
	I0704 00:15:25.457904   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.462323   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:15:25.462392   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:15:25.502180   62905 cri.go:89] found id: "54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:25.502204   62905 cri.go:89] found id: ""
	I0704 00:15:25.502212   62905 logs.go:276] 1 containers: [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d]
	I0704 00:15:25.502256   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.506759   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:15:25.506817   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:15:25.546268   62905 cri.go:89] found id: "13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:25.546292   62905 cri.go:89] found id: ""
	I0704 00:15:25.546306   62905 logs.go:276] 1 containers: [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e]
	I0704 00:15:25.546365   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.550998   62905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:15:25.551076   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:15:25.588722   62905 cri.go:89] found id: ""
	I0704 00:15:25.588752   62905 logs.go:276] 0 containers: []
	W0704 00:15:25.588762   62905 logs.go:278] No container was found matching "kindnet"
	I0704 00:15:25.588771   62905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:15:25.588832   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:15:25.628294   62905 cri.go:89] found id: "916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:25.628328   62905 cri.go:89] found id: "ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:25.628333   62905 cri.go:89] found id: ""
	I0704 00:15:25.628339   62905 logs.go:276] 2 containers: [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2]
	I0704 00:15:25.628406   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.633517   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.639383   62905 logs.go:123] Gathering logs for kubelet ...
	I0704 00:15:25.639409   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:15:25.701468   62905 logs.go:123] Gathering logs for dmesg ...
	I0704 00:15:25.701507   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:15:25.717059   62905 logs.go:123] Gathering logs for coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] ...
	I0704 00:15:25.717089   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:25.757597   62905 logs.go:123] Gathering logs for kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] ...
	I0704 00:15:25.757624   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:25.798648   62905 logs.go:123] Gathering logs for storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] ...
	I0704 00:15:25.798679   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:25.843607   62905 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:15:25.843644   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:15:26.352356   62905 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:15:26.352403   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:15:26.510039   62905 logs.go:123] Gathering logs for kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] ...
	I0704 00:15:26.510073   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:26.563036   62905 logs.go:123] Gathering logs for etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] ...
	I0704 00:15:26.563102   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:26.606221   62905 logs.go:123] Gathering logs for kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] ...
	I0704 00:15:26.606251   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:26.650488   62905 logs.go:123] Gathering logs for kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] ...
	I0704 00:15:26.650531   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:26.704905   62905 logs.go:123] Gathering logs for storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] ...
	I0704 00:15:26.704937   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:26.743843   62905 logs.go:123] Gathering logs for container status ...
	I0704 00:15:26.743907   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:15:26.664421   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:29.160718   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:29.289651   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:15:29.313028   62905 api_server.go:72] duration metric: took 4m13.798223752s to wait for apiserver process to appear ...
	I0704 00:15:29.313062   62905 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:15:29.313101   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:15:29.313178   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:15:29.359867   62905 cri.go:89] found id: "f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:29.359900   62905 cri.go:89] found id: ""
	I0704 00:15:29.359910   62905 logs.go:276] 1 containers: [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658]
	I0704 00:15:29.359965   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.364602   62905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:15:29.364661   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:15:29.406662   62905 cri.go:89] found id: "5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:29.406690   62905 cri.go:89] found id: ""
	I0704 00:15:29.406697   62905 logs.go:276] 1 containers: [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9]
	I0704 00:15:29.406744   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.413217   62905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:15:29.413305   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:15:29.450066   62905 cri.go:89] found id: "7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:29.450093   62905 cri.go:89] found id: ""
	I0704 00:15:29.450102   62905 logs.go:276] 1 containers: [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a]
	I0704 00:15:29.450163   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.454966   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:15:29.455025   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:15:29.496445   62905 cri.go:89] found id: "06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:29.496465   62905 cri.go:89] found id: ""
	I0704 00:15:29.496471   62905 logs.go:276] 1 containers: [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8]
	I0704 00:15:29.496515   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.501125   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:15:29.501198   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:15:29.543841   62905 cri.go:89] found id: "54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:29.543864   62905 cri.go:89] found id: ""
	I0704 00:15:29.543884   62905 logs.go:276] 1 containers: [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d]
	I0704 00:15:29.543940   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.548613   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:15:29.548673   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:15:29.588709   62905 cri.go:89] found id: "13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:29.588729   62905 cri.go:89] found id: ""
	I0704 00:15:29.588735   62905 logs.go:276] 1 containers: [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e]
	I0704 00:15:29.588780   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.593039   62905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:15:29.593098   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:15:29.631751   62905 cri.go:89] found id: ""
	I0704 00:15:29.631775   62905 logs.go:276] 0 containers: []
	W0704 00:15:29.631782   62905 logs.go:278] No container was found matching "kindnet"
	I0704 00:15:29.631787   62905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:15:29.631841   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:15:29.674894   62905 cri.go:89] found id: "916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:29.674918   62905 cri.go:89] found id: "ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:29.674922   62905 cri.go:89] found id: ""
	I0704 00:15:29.674929   62905 logs.go:276] 2 containers: [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2]
	I0704 00:15:29.674983   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.679600   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.683770   62905 logs.go:123] Gathering logs for kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] ...
	I0704 00:15:29.683788   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:29.731148   62905 logs.go:123] Gathering logs for etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] ...
	I0704 00:15:29.731182   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:29.772172   62905 logs.go:123] Gathering logs for storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] ...
	I0704 00:15:29.772204   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:29.816299   62905 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:15:29.816332   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:15:30.222578   62905 logs.go:123] Gathering logs for kubelet ...
	I0704 00:15:30.222622   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:15:30.284120   62905 logs.go:123] Gathering logs for dmesg ...
	I0704 00:15:30.284169   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:15:30.300219   62905 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:15:30.300260   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:15:30.423779   62905 logs.go:123] Gathering logs for kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] ...
	I0704 00:15:30.423851   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:30.480952   62905 logs.go:123] Gathering logs for storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] ...
	I0704 00:15:30.480993   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:30.526318   62905 logs.go:123] Gathering logs for container status ...
	I0704 00:15:30.526352   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:15:30.574984   62905 logs.go:123] Gathering logs for coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] ...
	I0704 00:15:30.575012   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:30.618244   62905 logs.go:123] Gathering logs for kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] ...
	I0704 00:15:30.618275   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:30.657625   62905 logs.go:123] Gathering logs for kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] ...
	I0704 00:15:30.657649   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:33.184160   62670 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0704 00:15:33.184894   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:33.185105   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
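kubeadm's kubelet-check is exactly the curl probe quoted in the message; a connection refused on 127.0.0.1:10248 means the kubelet never came up (or crashed) while kubeadm was waiting for the static pods. A sketch of the checks to run on the node in that situation, combining the quoted probe with the unit inspection used elsewhere in this run (the systemctl call assumes a systemd-managed kubelet, as in the minikube VM):

	# Re-run kubeadm's probe and look at the kubelet unit directly.
	curl -sSL http://localhost:10248/healthz        # the probe quoted above
	sudo systemctl is-active kubelet                # unit state
	sudo journalctl -u kubelet -n 400               # recent kubelet logs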
	I0704 00:15:31.162060   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:33.162393   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:35.164111   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:33.197007   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:15:33.201786   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 200:
	ok
	I0704 00:15:33.202719   62905 api_server.go:141] control plane version: v1.30.2
	I0704 00:15:33.202738   62905 api_server.go:131] duration metric: took 3.889668496s to wait for apiserver health ...
	I0704 00:15:33.202745   62905 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:15:33.202772   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:15:33.202825   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:15:33.246224   62905 cri.go:89] found id: "f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:33.246259   62905 cri.go:89] found id: ""
	I0704 00:15:33.246272   62905 logs.go:276] 1 containers: [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658]
	I0704 00:15:33.246343   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.256081   62905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:15:33.256160   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:15:33.296808   62905 cri.go:89] found id: "5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:33.296835   62905 cri.go:89] found id: ""
	I0704 00:15:33.296845   62905 logs.go:276] 1 containers: [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9]
	I0704 00:15:33.296902   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.301658   62905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:15:33.301729   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:15:33.353348   62905 cri.go:89] found id: "7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:33.353370   62905 cri.go:89] found id: ""
	I0704 00:15:33.353377   62905 logs.go:276] 1 containers: [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a]
	I0704 00:15:33.353428   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.358334   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:15:33.358413   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:15:33.402593   62905 cri.go:89] found id: "06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:33.402621   62905 cri.go:89] found id: ""
	I0704 00:15:33.402630   62905 logs.go:276] 1 containers: [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8]
	I0704 00:15:33.402696   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.407413   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:15:33.407482   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:15:33.461567   62905 cri.go:89] found id: "54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:33.461591   62905 cri.go:89] found id: ""
	I0704 00:15:33.461599   62905 logs.go:276] 1 containers: [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d]
	I0704 00:15:33.461663   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.467039   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:15:33.467115   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:15:33.510115   62905 cri.go:89] found id: "13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:33.510146   62905 cri.go:89] found id: ""
	I0704 00:15:33.510155   62905 logs.go:276] 1 containers: [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e]
	I0704 00:15:33.510215   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.515217   62905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:15:33.515281   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:15:33.554690   62905 cri.go:89] found id: ""
	I0704 00:15:33.554719   62905 logs.go:276] 0 containers: []
	W0704 00:15:33.554729   62905 logs.go:278] No container was found matching "kindnet"
	I0704 00:15:33.554737   62905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:15:33.554790   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:15:33.601911   62905 cri.go:89] found id: "916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:33.601937   62905 cri.go:89] found id: "ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:33.601944   62905 cri.go:89] found id: ""
	I0704 00:15:33.601952   62905 logs.go:276] 2 containers: [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2]
	I0704 00:15:33.602016   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.606884   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.611328   62905 logs.go:123] Gathering logs for etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] ...
	I0704 00:15:33.611356   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:33.657445   62905 logs.go:123] Gathering logs for coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] ...
	I0704 00:15:33.657484   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:33.698153   62905 logs.go:123] Gathering logs for kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] ...
	I0704 00:15:33.698185   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:33.740393   62905 logs.go:123] Gathering logs for kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] ...
	I0704 00:15:33.740425   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:33.781017   62905 logs.go:123] Gathering logs for kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] ...
	I0704 00:15:33.781048   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:33.844822   62905 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:15:33.844857   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:15:33.966652   62905 logs.go:123] Gathering logs for kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] ...
	I0704 00:15:33.966689   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:34.022085   62905 logs.go:123] Gathering logs for storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] ...
	I0704 00:15:34.022123   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:34.063492   62905 logs.go:123] Gathering logs for storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] ...
	I0704 00:15:34.063515   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:34.102349   62905 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:15:34.102379   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:15:34.472244   62905 logs.go:123] Gathering logs for container status ...
	I0704 00:15:34.472282   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:15:34.525394   62905 logs.go:123] Gathering logs for kubelet ...
	I0704 00:15:34.525427   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:15:34.581994   62905 logs.go:123] Gathering logs for dmesg ...
	I0704 00:15:34.582040   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
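Note (illustrative, not output captured in this run): the logs.go entries above show minikube collecting per-component logs over SSH with crictl and journalctl. The same collection can be reproduced by hand against the node; the profile name is the one reported later in this log, and the container ID placeholder comes from 'crictl ps -a'.

# list containers to find the IDs used in the "Gathering logs for ..." lines above
minikube -p default-k8s-diff-port-995404 ssh "sudo crictl ps -a"
# per-container logs, matching the 'crictl logs --tail 400 <id>' calls above
minikube -p default-k8s-diff-port-995404 ssh "sudo crictl logs --tail 400 <CONTAINER_ID>"
# kubelet unit logs, matching the journalctl call above
minikube -p default-k8s-diff-port-995404 ssh "sudo journalctl -u kubelet -n 400"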
	I0704 00:15:37.108663   62905 system_pods.go:59] 8 kube-system pods found
	I0704 00:15:37.108698   62905 system_pods.go:61] "coredns-7db6d8ff4d-jmq4s" [f9725f92-7635-4111-bf63-66dbef0155b2] Running
	I0704 00:15:37.108705   62905 system_pods.go:61] "etcd-default-k8s-diff-port-995404" [d5065c53-cda8-4c79-9d88-de10341356a8] Running
	I0704 00:15:37.108710   62905 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-995404" [81395c0d-d19c-4d3d-8935-c35a9507abdd] Running
	I0704 00:15:37.108716   62905 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-995404" [36828d21-843b-409c-a0b3-60293cb50c27] Running
	I0704 00:15:37.108723   62905 system_pods.go:61] "kube-proxy-pplqq" [3b74a8c2-1e91-449d-9be9-8891459dccbc] Running
	I0704 00:15:37.108728   62905 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-995404" [e87cc576-7a8d-43dd-9778-b13d751976be] Running
	I0704 00:15:37.108734   62905 system_pods.go:61] "metrics-server-569cc877fc-v8qw2" [d6a67fb7-5004-4c93-9023-fc470f786ae9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:15:37.108739   62905 system_pods.go:61] "storage-provisioner" [3adc3ff6-282f-4f53-879f-c73d71c76b74] Running
	I0704 00:15:37.108746   62905 system_pods.go:74] duration metric: took 3.905995932s to wait for pod list to return data ...
	I0704 00:15:37.108756   62905 default_sa.go:34] waiting for default service account to be created ...
	I0704 00:15:37.112853   62905 default_sa.go:45] found service account: "default"
	I0704 00:15:37.112885   62905 default_sa.go:55] duration metric: took 4.115587ms for default service account to be created ...
	I0704 00:15:37.112897   62905 system_pods.go:116] waiting for k8s-apps to be running ...
	I0704 00:15:37.119709   62905 system_pods.go:86] 8 kube-system pods found
	I0704 00:15:37.119743   62905 system_pods.go:89] "coredns-7db6d8ff4d-jmq4s" [f9725f92-7635-4111-bf63-66dbef0155b2] Running
	I0704 00:15:37.119749   62905 system_pods.go:89] "etcd-default-k8s-diff-port-995404" [d5065c53-cda8-4c79-9d88-de10341356a8] Running
	I0704 00:15:37.119754   62905 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-995404" [81395c0d-d19c-4d3d-8935-c35a9507abdd] Running
	I0704 00:15:37.119759   62905 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-995404" [36828d21-843b-409c-a0b3-60293cb50c27] Running
	I0704 00:15:37.119765   62905 system_pods.go:89] "kube-proxy-pplqq" [3b74a8c2-1e91-449d-9be9-8891459dccbc] Running
	I0704 00:15:37.119769   62905 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-995404" [e87cc576-7a8d-43dd-9778-b13d751976be] Running
	I0704 00:15:37.119776   62905 system_pods.go:89] "metrics-server-569cc877fc-v8qw2" [d6a67fb7-5004-4c93-9023-fc470f786ae9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:15:37.119782   62905 system_pods.go:89] "storage-provisioner" [3adc3ff6-282f-4f53-879f-c73d71c76b74] Running
	I0704 00:15:37.119791   62905 system_pods.go:126] duration metric: took 6.888276ms to wait for k8s-apps to be running ...
	I0704 00:15:37.119798   62905 system_svc.go:44] waiting for kubelet service to be running ....
	I0704 00:15:37.119855   62905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:15:37.138387   62905 system_svc.go:56] duration metric: took 18.578212ms WaitForService to wait for kubelet
	I0704 00:15:37.138430   62905 kubeadm.go:576] duration metric: took 4m21.623631424s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:15:37.138450   62905 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:15:37.141610   62905 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:15:37.141632   62905 node_conditions.go:123] node cpu capacity is 2
	I0704 00:15:37.141642   62905 node_conditions.go:105] duration metric: took 3.187777ms to run NodePressure ...
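Note (illustrative): the node_conditions.go check above reads CPU and ephemeral-storage capacity from the node object; the same fields can be inspected directly from the host.

# prints the node capacity map (cpu, memory, ephemeral-storage, ...)
kubectl --context default-k8s-diff-port-995404 get nodes -o jsonpath='{.items[0].status.capacity}'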
	I0704 00:15:37.141654   62905 start.go:240] waiting for startup goroutines ...
	I0704 00:15:37.141662   62905 start.go:245] waiting for cluster config update ...
	I0704 00:15:37.141675   62905 start.go:254] writing updated cluster config ...
	I0704 00:15:37.141954   62905 ssh_runner.go:195] Run: rm -f paused
	I0704 00:15:37.193685   62905 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0704 00:15:37.196118   62905 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-995404" cluster and "default" namespace by default
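Note (illustrative, not part of the captured run): after the "Done!" message the kubeconfig context has been switched, so a quick sanity check from the host looks like this.

kubectl config current-context        # expected: default-k8s-diff-port-995404
kubectl get pods -n kube-system       # the pods listed above should show Running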
	I0704 00:15:38.185821   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:38.186070   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:15:37.662971   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:40.161724   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:42.162761   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:44.661578   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:48.186610   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:48.186866   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:15:46.661793   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:48.662395   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:51.161671   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:53.161831   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:55.162342   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:57.162917   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:58.655566   62043 pod_ready.go:81] duration metric: took 4m0.000513164s for pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace to be "Ready" ...
	E0704 00:15:58.655607   62043 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0704 00:15:58.655629   62043 pod_ready.go:38] duration metric: took 4m12.325655973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:15:58.655653   62043 kubeadm.go:591] duration metric: took 4m19.340193897s to restartPrimaryControlPlane
	W0704 00:15:58.655707   62043 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0704 00:15:58.655731   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:16:08.187652   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:16:08.187954   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:16:30.729510   62043 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.073753748s)
	I0704 00:16:30.729594   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:16:30.747332   62043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:16:30.758903   62043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:16:30.769754   62043 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:16:30.769782   62043 kubeadm.go:156] found existing configuration files:
	
	I0704 00:16:30.769834   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:16:30.783216   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:16:30.783292   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:16:30.794254   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:16:30.804395   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:16:30.804456   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:16:30.816148   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:16:30.826591   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:16:30.826658   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:16:30.837473   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:16:30.847334   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:16:30.847423   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
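Note (assumption: a rough shell equivalent of the kubeadm.go:154-162 behaviour above, not minikube's actual code): each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; otherwise it is removed so that the following 'kubeadm init' regenerates it.

endpoint="https://control-plane.minikube.internal:8443"
for f in admin kubelet controller-manager scheduler; do
  cfg="/etc/kubernetes/${f}.conf"
  # missing or stale config: remove it so 'kubeadm init' writes a fresh one
  sudo grep -q "$endpoint" "$cfg" 2>/dev/null || sudo rm -f "$cfg"
done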
	I0704 00:16:30.859291   62043 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:16:31.068598   62043 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:16:39.927189   62043 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0704 00:16:39.927297   62043 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:16:39.927381   62043 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:16:39.927496   62043 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:16:39.927641   62043 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:16:39.927747   62043 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:16:39.929258   62043 out.go:204]   - Generating certificates and keys ...
	I0704 00:16:39.929332   62043 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:16:39.929422   62043 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:16:39.929546   62043 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:16:39.929631   62043 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:16:39.929715   62043 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:16:39.929781   62043 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:16:39.929883   62043 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:16:39.929983   62043 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:16:39.930088   62043 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:16:39.930191   62043 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:16:39.930258   62043 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:16:39.930346   62043 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:16:39.930428   62043 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:16:39.930521   62043 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0704 00:16:39.930604   62043 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:16:39.930691   62043 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:16:39.930784   62043 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:16:39.930889   62043 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:16:39.930980   62043 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:16:39.933368   62043 out.go:204]   - Booting up control plane ...
	I0704 00:16:39.933482   62043 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:16:39.933577   62043 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:16:39.933657   62043 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:16:39.933769   62043 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:16:39.933857   62043 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:16:39.933920   62043 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:16:39.934046   62043 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0704 00:16:39.934156   62043 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0704 00:16:39.934219   62043 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.004952327s
	I0704 00:16:39.934310   62043 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0704 00:16:39.934393   62043 kubeadm.go:309] [api-check] The API server is healthy after 5.002935516s
	I0704 00:16:39.934509   62043 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0704 00:16:39.934646   62043 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0704 00:16:39.934725   62043 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0704 00:16:39.934894   62043 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-317739 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0704 00:16:39.934979   62043 kubeadm.go:309] [bootstrap-token] Using token: 6e60zb.ppocm8st59m5ngyp
	I0704 00:16:39.936353   62043 out.go:204]   - Configuring RBAC rules ...
	I0704 00:16:39.936457   62043 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0704 00:16:39.936546   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0704 00:16:39.936726   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0704 00:16:39.936866   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0704 00:16:39.936999   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0704 00:16:39.937127   62043 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0704 00:16:39.937268   62043 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0704 00:16:39.937339   62043 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0704 00:16:39.937398   62043 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0704 00:16:39.937407   62043 kubeadm.go:309] 
	I0704 00:16:39.937486   62043 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0704 00:16:39.937500   62043 kubeadm.go:309] 
	I0704 00:16:39.937589   62043 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0704 00:16:39.937598   62043 kubeadm.go:309] 
	I0704 00:16:39.937628   62043 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0704 00:16:39.937704   62043 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0704 00:16:39.937770   62043 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0704 00:16:39.937779   62043 kubeadm.go:309] 
	I0704 00:16:39.937870   62043 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0704 00:16:39.937884   62043 kubeadm.go:309] 
	I0704 00:16:39.937953   62043 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0704 00:16:39.937966   62043 kubeadm.go:309] 
	I0704 00:16:39.938045   62043 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0704 00:16:39.938151   62043 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0704 00:16:39.938248   62043 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0704 00:16:39.938257   62043 kubeadm.go:309] 
	I0704 00:16:39.938373   62043 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0704 00:16:39.938469   62043 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0704 00:16:39.938483   62043 kubeadm.go:309] 
	I0704 00:16:39.938602   62043 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6e60zb.ppocm8st59m5ngyp \
	I0704 00:16:39.938721   62043 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 \
	I0704 00:16:39.938740   62043 kubeadm.go:309] 	--control-plane 
	I0704 00:16:39.938746   62043 kubeadm.go:309] 
	I0704 00:16:39.938820   62043 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0704 00:16:39.938829   62043 kubeadm.go:309] 
	I0704 00:16:39.938898   62043 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6e60zb.ppocm8st59m5ngyp \
	I0704 00:16:39.939042   62043 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 
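Note (illustrative): the join commands printed above embed a bootstrap token with a default TTL of 24h; if it has expired by the time another node joins, a fresh command can be generated on the control plane.

sudo kubeadm token create --print-join-command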
	I0704 00:16:39.939066   62043 cni.go:84] Creating CNI manager for ""
	I0704 00:16:39.939074   62043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:16:39.940769   62043 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:16:39.941987   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:16:39.956586   62043 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
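Note (assumption: the 496-byte conflist content is not shown in the log; the sketch below is a generic bridge CNI configuration of the same shape, with an illustrative subnet and plugin list, not necessarily the exact file minikube writes).

# illustrative only: a minimal bridge CNI conflist of the kind written above
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF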
	I0704 00:16:39.980480   62043 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:16:39.980534   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:39.980553   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-317739 minikube.k8s.io/updated_at=2024_07_04T00_16_39_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e minikube.k8s.io/name=no-preload-317739 minikube.k8s.io/primary=true
	I0704 00:16:40.010512   62043 ops.go:34] apiserver oom_adj: -16
	I0704 00:16:40.194381   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:40.695349   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:41.195310   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:41.695082   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:42.194751   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:42.694568   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:43.195382   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:43.695072   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:44.195353   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:44.695020   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:45.195396   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:45.695273   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:48.189618   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:16:48.189879   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:16:48.189893   62670 kubeadm.go:309] 
	I0704 00:16:48.189956   62670 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0704 00:16:48.190000   62670 kubeadm.go:309] 		timed out waiting for the condition
	I0704 00:16:48.190006   62670 kubeadm.go:309] 
	I0704 00:16:48.190074   62670 kubeadm.go:309] 	This error is likely caused by:
	I0704 00:16:48.190142   62670 kubeadm.go:309] 		- The kubelet is not running
	I0704 00:16:48.190322   62670 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0704 00:16:48.190356   62670 kubeadm.go:309] 
	I0704 00:16:48.190487   62670 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0704 00:16:48.190540   62670 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0704 00:16:48.190594   62670 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0704 00:16:48.190603   62670 kubeadm.go:309] 
	I0704 00:16:48.190729   62670 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0704 00:16:48.190826   62670 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0704 00:16:48.190837   62670 kubeadm.go:309] 
	I0704 00:16:48.190930   62670 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0704 00:16:48.191004   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0704 00:16:48.191088   62670 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0704 00:16:48.191183   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0704 00:16:48.191195   62670 kubeadm.go:309] 
	I0704 00:16:48.192106   62670 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:16:48.192225   62670 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0704 00:16:48.192330   62670 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0704 00:16:48.192450   62670 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0704 00:16:48.192496   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:16:48.668935   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:16:48.685425   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:16:48.697089   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:16:48.697111   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:16:48.697182   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:16:48.708605   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:16:48.708681   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:16:48.720739   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:16:48.733032   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:16:48.733106   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:16:48.745632   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:16:48.756211   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:16:48.756285   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:16:48.768006   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:16:48.779384   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:16:48.779455   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:16:48.791913   62670 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:16:48.873701   62670 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0704 00:16:48.873789   62670 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:16:49.029961   62670 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:16:49.030077   62670 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:16:49.030191   62670 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:16:49.228954   62670 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:16:49.231477   62670 out.go:204]   - Generating certificates and keys ...
	I0704 00:16:49.231594   62670 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:16:49.231678   62670 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:16:49.231783   62670 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:16:49.231855   62670 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:16:49.231990   62670 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:16:49.232082   62670 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:16:49.232167   62670 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:16:49.232930   62670 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:16:49.234476   62670 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:16:49.235558   62670 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:16:49.235951   62670 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:16:49.236048   62670 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:16:49.418256   62670 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:16:49.476591   62670 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:16:49.586596   62670 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:16:49.856731   62670 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:16:49.878852   62670 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:16:49.885877   62670 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:16:49.885948   62670 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:16:50.048252   62670 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:16:46.194714   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:46.695192   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:47.195476   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:47.694768   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:48.194497   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:48.695370   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:49.194707   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:49.695417   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:50.194404   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:50.694941   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:50.050273   62670 out.go:204]   - Booting up control plane ...
	I0704 00:16:50.050428   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:16:50.055514   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:16:50.056609   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:16:50.057448   62670 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:16:50.060021   62670 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0704 00:16:51.194471   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:51.695481   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:52.194406   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:52.695193   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:53.194613   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:53.695053   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:53.812778   62043 kubeadm.go:1107] duration metric: took 13.832294794s to wait for elevateKubeSystemPrivileges
	W0704 00:16:53.812817   62043 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0704 00:16:53.812828   62043 kubeadm.go:393] duration metric: took 5m14.556024253s to StartCluster
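Note (illustrative): the repeated 'kubectl get sa default' calls above poll for the namespace's default ServiceAccount; the ServiceAccount admission controller rejects new pods until it exists, so addon workloads cannot be created before this check passes. The same check from the host would be:

kubectl --context no-preload-317739 -n default get serviceaccount default   # succeeds once the controller has created it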
	I0704 00:16:53.812849   62043 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:16:53.812944   62043 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:16:53.815420   62043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:16:53.815750   62043 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.109 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:16:53.815862   62043 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:16:53.815956   62043 addons.go:69] Setting storage-provisioner=true in profile "no-preload-317739"
	I0704 00:16:53.815987   62043 addons.go:234] Setting addon storage-provisioner=true in "no-preload-317739"
	I0704 00:16:53.815990   62043 config.go:182] Loaded profile config "no-preload-317739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	W0704 00:16:53.815998   62043 addons.go:243] addon storage-provisioner should already be in state true
	I0704 00:16:53.816029   62043 host.go:66] Checking if "no-preload-317739" exists ...
	I0704 00:16:53.816023   62043 addons.go:69] Setting default-storageclass=true in profile "no-preload-317739"
	I0704 00:16:53.816052   62043 addons.go:69] Setting metrics-server=true in profile "no-preload-317739"
	I0704 00:16:53.816063   62043 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-317739"
	I0704 00:16:53.816091   62043 addons.go:234] Setting addon metrics-server=true in "no-preload-317739"
	W0704 00:16:53.816104   62043 addons.go:243] addon metrics-server should already be in state true
	I0704 00:16:53.816139   62043 host.go:66] Checking if "no-preload-317739" exists ...
	I0704 00:16:53.816491   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.816512   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.816491   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.816561   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.816590   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.816605   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.817558   62043 out.go:177] * Verifying Kubernetes components...
	I0704 00:16:53.818908   62043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:16:53.836028   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I0704 00:16:53.836591   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.837131   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.837162   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.837199   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37289
	I0704 00:16:53.837270   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39901
	I0704 00:16:53.837613   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.837621   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.837980   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.838004   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.838066   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.838265   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.838302   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.838330   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.838533   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.838555   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.838612   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.838911   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.839349   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.839374   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.842221   62043 addons.go:234] Setting addon default-storageclass=true in "no-preload-317739"
	W0704 00:16:53.842240   62043 addons.go:243] addon default-storageclass should already be in state true
	I0704 00:16:53.842267   62043 host.go:66] Checking if "no-preload-317739" exists ...
	I0704 00:16:53.842587   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.842606   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.854293   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36813
	I0704 00:16:53.855044   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.855658   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.855675   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.856226   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.856425   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.858286   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42143
	I0704 00:16:53.858484   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:16:53.858667   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.859270   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.859293   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.859815   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.860358   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.860380   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.860383   62043 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0704 00:16:53.861890   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0704 00:16:53.861914   62043 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0704 00:16:53.861937   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:16:53.864121   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36239
	I0704 00:16:53.864570   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.865343   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.865366   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.865859   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.866064   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.866282   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.866379   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:16:53.866407   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.866572   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:16:53.866780   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:16:53.866996   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:16:53.867166   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:16:53.868067   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:16:53.869898   62043 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:16:53.871321   62043 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:16:53.871339   62043 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:16:53.871355   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:16:53.874930   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.875361   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:16:53.875392   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.875623   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:16:53.875841   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:16:53.876024   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:16:53.876184   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:16:53.880965   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45769
	I0704 00:16:53.881655   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.882115   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.882130   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.882471   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.882659   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.884596   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:16:53.884855   62043 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:16:53.884866   62043 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:16:53.884879   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:16:53.887764   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.888336   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:16:53.888371   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.888411   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:16:53.888619   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:16:53.888749   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:16:53.888849   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:16:54.097387   62043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:16:54.122578   62043 node_ready.go:35] waiting up to 6m0s for node "no-preload-317739" to be "Ready" ...
	I0704 00:16:54.136010   62043 node_ready.go:49] node "no-preload-317739" has status "Ready":"True"
	I0704 00:16:54.136036   62043 node_ready.go:38] duration metric: took 13.422954ms for node "no-preload-317739" to be "Ready" ...
	I0704 00:16:54.136048   62043 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:16:54.141532   62043 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:54.200381   62043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:16:54.234350   62043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:16:54.284641   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0704 00:16:54.284664   62043 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0704 00:16:54.346056   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0704 00:16:54.346081   62043 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0704 00:16:54.424564   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:16:54.424593   62043 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0704 00:16:54.496088   62043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:16:54.977271   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977304   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977308   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977327   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977603   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:54.977640   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977640   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977647   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:54.977654   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:54.977657   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977663   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977665   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977710   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:54.977756   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977935   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977947   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:54.977959   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:54.977991   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977999   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.037104   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:55.037130   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:55.037591   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:55.037626   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:55.037639   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.331464   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:55.331492   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:55.331859   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:55.331895   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.331903   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:55.331911   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:55.331926   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:55.332178   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:55.332245   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:55.332262   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.332280   62043 addons.go:475] Verifying addon metrics-server=true in "no-preload-317739"
	I0704 00:16:55.334057   62043 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0704 00:16:55.335756   62043 addons.go:510] duration metric: took 1.519891021s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0704 00:16:56.152756   62043 pod_ready.go:102] pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace has status "Ready":"False"
	I0704 00:16:56.650840   62043 pod_ready.go:92] pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.650866   62043 pod_ready.go:81] duration metric: took 2.509305019s for pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.650876   62043 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qnrtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.656253   62043 pod_ready.go:92] pod "coredns-7db6d8ff4d-qnrtm" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.656276   62043 pod_ready.go:81] duration metric: took 5.391742ms for pod "coredns-7db6d8ff4d-qnrtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.656285   62043 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.661076   62043 pod_ready.go:92] pod "etcd-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.661097   62043 pod_ready.go:81] duration metric: took 4.806155ms for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.661105   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.666895   62043 pod_ready.go:92] pod "kube-apiserver-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.666923   62043 pod_ready.go:81] duration metric: took 5.809974ms for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.666936   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.671252   62043 pod_ready.go:92] pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.671277   62043 pod_ready.go:81] duration metric: took 4.332286ms for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.671289   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xxfrd" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.046037   62043 pod_ready.go:92] pod "kube-proxy-xxfrd" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:57.046062   62043 pod_ready.go:81] duration metric: took 374.766496ms for pod "kube-proxy-xxfrd" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.046072   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.446038   62043 pod_ready.go:92] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:57.446063   62043 pod_ready.go:81] duration metric: took 399.983632ms for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.446071   62043 pod_ready.go:38] duration metric: took 3.310013568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:16:57.446085   62043 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:16:57.446131   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:16:57.461033   62043 api_server.go:72] duration metric: took 3.645241569s to wait for apiserver process to appear ...
	I0704 00:16:57.461057   62043 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:16:57.461075   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:16:57.465509   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 200:
	ok
	I0704 00:16:57.466733   62043 api_server.go:141] control plane version: v1.30.2
	I0704 00:16:57.466755   62043 api_server.go:131] duration metric: took 5.690997ms to wait for apiserver health ...
	I0704 00:16:57.466764   62043 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:16:57.651488   62043 system_pods.go:59] 9 kube-system pods found
	I0704 00:16:57.651514   62043 system_pods.go:61] "coredns-7db6d8ff4d-cxq59" [a7d3a64b-8d7d-455c-990c-0e496f8cf461] Running
	I0704 00:16:57.651519   62043 system_pods.go:61] "coredns-7db6d8ff4d-qnrtm" [3ab51a52-571c-4533-86d2-7293368ac2ee] Running
	I0704 00:16:57.651522   62043 system_pods.go:61] "etcd-no-preload-317739" [d1cdc7d3-b6b9-4fbf-a9a4-94309df12aec] Running
	I0704 00:16:57.651525   62043 system_pods.go:61] "kube-apiserver-no-preload-317739" [6caeae1f-258b-4d8f-8922-73eb94eb92cb] Running
	I0704 00:16:57.651528   62043 system_pods.go:61] "kube-controller-manager-no-preload-317739" [28b1a875-8c1d-41a3-8b0b-6e1b7a584a03] Running
	I0704 00:16:57.651531   62043 system_pods.go:61] "kube-proxy-xxfrd" [29b1b3ed-9c18-4fae-bf43-5da22cf90f6b] Running
	I0704 00:16:57.651533   62043 system_pods.go:61] "kube-scheduler-no-preload-317739" [bdb0442c-9e42-4e09-93cc-0e8dc067eaff] Running
	I0704 00:16:57.651541   62043 system_pods.go:61] "metrics-server-569cc877fc-t28ff" [942f97bf-57cf-46fe-9a10-4a4171357239] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:16:57.651549   62043 system_pods.go:61] "storage-provisioner" [d2ab9324-5df0-4232-aef4-be29bfc4c082] Running
	I0704 00:16:57.651559   62043 system_pods.go:74] duration metric: took 184.788668ms to wait for pod list to return data ...
	I0704 00:16:57.651573   62043 default_sa.go:34] waiting for default service account to be created ...
	I0704 00:16:57.845632   62043 default_sa.go:45] found service account: "default"
	I0704 00:16:57.845665   62043 default_sa.go:55] duration metric: took 194.081328ms for default service account to be created ...
	I0704 00:16:57.845678   62043 system_pods.go:116] waiting for k8s-apps to be running ...
	I0704 00:16:58.050844   62043 system_pods.go:86] 9 kube-system pods found
	I0704 00:16:58.050873   62043 system_pods.go:89] "coredns-7db6d8ff4d-cxq59" [a7d3a64b-8d7d-455c-990c-0e496f8cf461] Running
	I0704 00:16:58.050878   62043 system_pods.go:89] "coredns-7db6d8ff4d-qnrtm" [3ab51a52-571c-4533-86d2-7293368ac2ee] Running
	I0704 00:16:58.050882   62043 system_pods.go:89] "etcd-no-preload-317739" [d1cdc7d3-b6b9-4fbf-a9a4-94309df12aec] Running
	I0704 00:16:58.050887   62043 system_pods.go:89] "kube-apiserver-no-preload-317739" [6caeae1f-258b-4d8f-8922-73eb94eb92cb] Running
	I0704 00:16:58.050891   62043 system_pods.go:89] "kube-controller-manager-no-preload-317739" [28b1a875-8c1d-41a3-8b0b-6e1b7a584a03] Running
	I0704 00:16:58.050896   62043 system_pods.go:89] "kube-proxy-xxfrd" [29b1b3ed-9c18-4fae-bf43-5da22cf90f6b] Running
	I0704 00:16:58.050900   62043 system_pods.go:89] "kube-scheduler-no-preload-317739" [bdb0442c-9e42-4e09-93cc-0e8dc067eaff] Running
	I0704 00:16:58.050906   62043 system_pods.go:89] "metrics-server-569cc877fc-t28ff" [942f97bf-57cf-46fe-9a10-4a4171357239] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:16:58.050911   62043 system_pods.go:89] "storage-provisioner" [d2ab9324-5df0-4232-aef4-be29bfc4c082] Running
	I0704 00:16:58.050918   62043 system_pods.go:126] duration metric: took 205.235998ms to wait for k8s-apps to be running ...
	I0704 00:16:58.050925   62043 system_svc.go:44] waiting for kubelet service to be running ....
	I0704 00:16:58.050969   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:16:58.066005   62043 system_svc.go:56] duration metric: took 15.072089ms WaitForService to wait for kubelet
	I0704 00:16:58.066036   62043 kubeadm.go:576] duration metric: took 4.250246725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:16:58.066060   62043 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:16:58.245974   62043 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:16:58.245998   62043 node_conditions.go:123] node cpu capacity is 2
	I0704 00:16:58.246009   62043 node_conditions.go:105] duration metric: took 179.943846ms to run NodePressure ...
	I0704 00:16:58.246020   62043 start.go:240] waiting for startup goroutines ...
	I0704 00:16:58.246026   62043 start.go:245] waiting for cluster config update ...
	I0704 00:16:58.246036   62043 start.go:254] writing updated cluster config ...
	I0704 00:16:58.246307   62043 ssh_runner.go:195] Run: rm -f paused
	I0704 00:16:58.298998   62043 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0704 00:16:58.301199   62043 out.go:177] * Done! kubectl is now configured to use "no-preload-317739" cluster and "default" namespace by default
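	[editor note] The pod_ready.go lines above poll each system-critical pod until its Ready condition is True, within a 6m0s budget. A minimal, hypothetical client-go sketch of that polling pattern follows; the pod name, namespace, kubeconfig path, and 2s poll interval are assumptions for illustration, not minikube's actual implementation.

	// podready_sketch.go — hedged illustration of a "wait for pod Ready" loop.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls the pod until its Ready condition is True or the timeout expires.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second) // assumed poll interval
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		// Assumes the default kubeconfig at ~/.kube/config points at the test cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-no-preload-317739", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}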
	I0704 00:17:30.062515   62670 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0704 00:17:30.062908   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:30.063105   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:17:35.063408   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:35.063668   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:17:45.064118   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:45.064391   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:05.065047   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:18:05.065263   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:45.064458   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:18:45.064676   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:45.064703   62670 kubeadm.go:309] 
	I0704 00:18:45.064756   62670 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0704 00:18:45.064825   62670 kubeadm.go:309] 		timed out waiting for the condition
	I0704 00:18:45.064842   62670 kubeadm.go:309] 
	I0704 00:18:45.064918   62670 kubeadm.go:309] 	This error is likely caused by:
	I0704 00:18:45.064954   62670 kubeadm.go:309] 		- The kubelet is not running
	I0704 00:18:45.065086   62670 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0704 00:18:45.065110   62670 kubeadm.go:309] 
	I0704 00:18:45.065271   62670 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0704 00:18:45.065326   62670 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0704 00:18:45.065392   62670 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0704 00:18:45.065401   62670 kubeadm.go:309] 
	I0704 00:18:45.065550   62670 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0704 00:18:45.065631   62670 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0704 00:18:45.065638   62670 kubeadm.go:309] 
	I0704 00:18:45.065734   62670 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0704 00:18:45.065807   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0704 00:18:45.065871   62670 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0704 00:18:45.065939   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0704 00:18:45.065947   62670 kubeadm.go:309] 
	I0704 00:18:45.066520   62670 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:18:45.066601   62670 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0704 00:18:45.066689   62670 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0704 00:18:45.066780   62670 kubeadm.go:393] duration metric: took 7m58.506286251s to StartCluster
	I0704 00:18:45.066839   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:18:45.066927   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:18:45.120297   62670 cri.go:89] found id: ""
	I0704 00:18:45.120326   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.120334   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:18:45.120339   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:18:45.120402   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:18:45.158038   62670 cri.go:89] found id: ""
	I0704 00:18:45.158064   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.158074   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:18:45.158082   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:18:45.158138   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:18:45.195937   62670 cri.go:89] found id: ""
	I0704 00:18:45.195967   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.195978   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:18:45.195985   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:18:45.196043   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:18:45.236822   62670 cri.go:89] found id: ""
	I0704 00:18:45.236842   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.236850   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:18:45.236856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:18:45.236901   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:18:45.277811   62670 cri.go:89] found id: ""
	I0704 00:18:45.277840   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.277848   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:18:45.277854   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:18:45.277915   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:18:45.318942   62670 cri.go:89] found id: ""
	I0704 00:18:45.318972   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.318984   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:18:45.318994   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:18:45.319058   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:18:45.360745   62670 cri.go:89] found id: ""
	I0704 00:18:45.360772   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.360780   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:18:45.360785   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:18:45.360843   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:18:45.405336   62670 cri.go:89] found id: ""
	I0704 00:18:45.405359   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.405370   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:18:45.405381   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:18:45.405400   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:18:45.514196   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:18:45.514237   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:18:45.560207   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:18:45.560235   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:18:45.615066   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:18:45.615113   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:18:45.630701   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:18:45.630731   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:18:45.725249   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0704 00:18:45.725281   62670 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0704 00:18:45.725315   62670 out.go:239] * 
	W0704 00:18:45.725360   62670 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0704 00:18:45.725383   62670 out.go:239] * 
	W0704 00:18:45.726603   62670 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0704 00:18:45.729981   62670 out.go:177] 
	W0704 00:18:45.731124   62670 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0704 00:18:45.731169   62670 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0704 00:18:45.731186   62670 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0704 00:18:45.732514   62670 out.go:177] 
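	[editor note] The repeated [kubelet-check] failures above come from probing the kubelet's local healthz endpoint until it answers. A small, hypothetical Go sketch of that probe follows; the URL http://localhost:10248/healthz is taken from the log, while the 5-second timeout is an assumption. While the kubelet is down it would print the same "connection refused" error seen above.

	// kubelet_healthz_sketch.go — hedged illustration of the kubelet healthz probe.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second} // assumed timeout
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// e.g. "dial tcp 127.0.0.1:10248: connect: connection refused" as in the log
			fmt.Println("kubelet not healthy:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, string(body))
	}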
	
	
	==> CRI-O <==
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.441911446Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b5aa2a4c02f49e31cfbcb984b3ebc151daa586fd95a1fc6960aa3edae5aea428,Metadata:&PodSandboxMetadata{Name:busybox,Uid:b335daca-ead5-42d5-96a3-245d38bd2d1a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051881145720459,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b335daca-ead5-42d5-96a3-245d38bd2d1a,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-04T00:11:13.250071018Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:92c3127464abdb66dbe78c45730fef3a6abd84ef59f3692ad1b8f6ba20def236,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-jmq4s,Uid:f9725f92-7635-4111-bf63-66dbef0155b2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:172005
1881142719643,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-jmq4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9725f92-7635-4111-bf63-66dbef0155b2,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-04T00:11:13.250072527Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:337b7ab9195774a213d82a06c320f8a973866c1e5672285f4319b7b4fe8f5987,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-v8qw2,Uid:d6a67fb7-5004-4c93-9023-fc470f786ae9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051879336731359,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-v8qw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a67fb7-5004-4c93-9023-fc470f786ae9,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-04
T00:11:13.250060317Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8a08aafe4e1f0db59f7c040ebb0518bd9d0742348c2f42c2819369733436eeab,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3adc3ff6-282f-4f53-879f-c73d71c76b74,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051873571365042,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adc3ff6-282f-4f53-879f-c73d71c76b74,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"g
cr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-04T00:11:13.250068866Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:53881affa9536be73514c9358585bfd7874283cf7c29c1cf7485180ea6e7f15c,Metadata:&PodSandboxMetadata{Name:kube-proxy-pplqq,Uid:3b74a8c2-1e91-449d-9be9-8891459dccbc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051873569436568,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pplqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b74a8c2-1e91-449d-9be9-8891459dccbc,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{ku
bernetes.io/config.seen: 2024-07-04T00:11:13.250073887Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:81234ffa8223556d0f25bb5ad0a6f4e8e9778d838408b74b738c6b3018ecf93b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-995404,Uid:eccd3511daaf18b1d48cae4d95632212,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051868782718855,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccd3511daaf18b1d48cae4d95632212,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: eccd3511daaf18b1d48cae4d95632212,kubernetes.io/config.seen: 2024-07-04T00:11:08.255483074Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:69ef20a0f5c69f72498d52dd5d07bbf205d42ea936f853ca20682d4ebcfbeb41,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-995404,Uid:
0d1f278de836ff491a91e8c80936294a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051868773795779,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1f278de836ff491a91e8c80936294a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.164:8444,kubernetes.io/config.hash: 0d1f278de836ff491a91e8c80936294a,kubernetes.io/config.seen: 2024-07-04T00:11:08.255476949Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:38443a29b6ba0e18c77feff5fffee3aa53c03f3b01fd1fe4147ea71583a8e520,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-995404,Uid:f74b1039c6d802b380d3b54865ba5da9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051868766053063,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.k
ubernetes.pod.name: etcd-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74b1039c6d802b380d3b54865ba5da9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.164:2379,kubernetes.io/config.hash: f74b1039c6d802b380d3b54865ba5da9,kubernetes.io/config.seen: 2024-07-04T00:11:08.296830252Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3f9060ae8aaf63933efd5048005576b9117160e64e8bb78c96d688667dca8060,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-995404,Uid:21f4f5d0c28792012b764ca566c3a613,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051868760998713,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f4f5d0c28792012b764ca566c3a613,tier: control-p
lane,},Annotations:map[string]string{kubernetes.io/config.hash: 21f4f5d0c28792012b764ca566c3a613,kubernetes.io/config.seen: 2024-07-04T00:11:08.255481905Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=bc5cf3f8-f681-447f-a17e-7b1a13a22429 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.442611058Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f48bf8b5-5eb8-483f-9124-1e447b5f0167 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.442662078Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f48bf8b5-5eb8-483f-9124-1e447b5f0167 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.442841758Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a,PodSandboxId:8a08aafe4e1f0db59f7c040ebb0518bd9d0742348c2f42c2819369733436eeab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720051904563717392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adc3ff6-282f-4f53-879f-c73d71c76b74,},Annotations:map[string]string{io.kubernetes.container.hash: 6198682d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d37d3ca0beb9fed8acfd33e6b37d6cf0e5febdf274ca3821f3fce785f41e74b,PodSandboxId:b5aa2a4c02f49e31cfbcb984b3ebc151daa586fd95a1fc6960aa3edae5aea428,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720051884382068388,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b335daca-ead5-42d5-96a3-245d38bd2d1a,},Annotations:map[string]string{io.kubernetes.container.hash: f2e926c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a,PodSandboxId:92c3127464abdb66dbe78c45730fef3a6abd84ef59f3692ad1b8f6ba20def236,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051881474647205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jmq4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9725f92-7635-4111-bf63-66dbef0155b2,},Annotations:map[string]string{io.kubernetes.container.hash: 1b9298ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2,PodSandboxId:8a08aafe4e1f0db59f7c040ebb0518bd9d0742348c2f42c2819369733436eeab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720051873798818647,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3adc3ff6-282f-4f53-879f-c73d71c76b74,},Annotations:map[string]string{io.kubernetes.container.hash: 6198682d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d,PodSandboxId:53881affa9536be73514c9358585bfd7874283cf7c29c1cf7485180ea6e7f15c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051873712935930,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pplqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b74a8c2-1e91-449d-9be9
-8891459dccbc,},Annotations:map[string]string{io.kubernetes.container.hash: 975ff7d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8,PodSandboxId:81234ffa8223556d0f25bb5ad0a6f4e8e9778d838408b74b738c6b3018ecf93b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051869083460209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccd3511daaf18b1d48ca
e4d95632212,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658,PodSandboxId:69ef20a0f5c69f72498d52dd5d07bbf205d42ea936f853ca20682d4ebcfbeb41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051869078377436,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1f278de836ff491a91e8c8
0936294a,},Annotations:map[string]string{io.kubernetes.container.hash: c3c43c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9,PodSandboxId:38443a29b6ba0e18c77feff5fffee3aa53c03f3b01fd1fe4147ea71583a8e520,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051868974216877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74b1039c6d802b380d3b54865ba5da9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 73cfec4a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e,PodSandboxId:3f9060ae8aaf63933efd5048005576b9117160e64e8bb78c96d688667dca8060,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051868999971961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f4f5d0c28792012b764ca566c3a61
3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f48bf8b5-5eb8-483f-9124-1e447b5f0167 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.481978643Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2b1f934-d9e7-43ea-b590-1c48531f7e25 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.482050585Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2b1f934-d9e7-43ea-b590-1c48531f7e25 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.483270398Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe54f2a1-3888-4c64-b07f-ffabcdfa0aa0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.483659987Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052679483637870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe54f2a1-3888-4c64-b07f-ffabcdfa0aa0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.484773886Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10914be6-7041-41f9-af47-2a1d7d6aee7e name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.484827525Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10914be6-7041-41f9-af47-2a1d7d6aee7e name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.485488657Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a,PodSandboxId:8a08aafe4e1f0db59f7c040ebb0518bd9d0742348c2f42c2819369733436eeab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720051904563717392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adc3ff6-282f-4f53-879f-c73d71c76b74,},Annotations:map[string]string{io.kubernetes.container.hash: 6198682d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d37d3ca0beb9fed8acfd33e6b37d6cf0e5febdf274ca3821f3fce785f41e74b,PodSandboxId:b5aa2a4c02f49e31cfbcb984b3ebc151daa586fd95a1fc6960aa3edae5aea428,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720051884382068388,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b335daca-ead5-42d5-96a3-245d38bd2d1a,},Annotations:map[string]string{io.kubernetes.container.hash: f2e926c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a,PodSandboxId:92c3127464abdb66dbe78c45730fef3a6abd84ef59f3692ad1b8f6ba20def236,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051881474647205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jmq4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9725f92-7635-4111-bf63-66dbef0155b2,},Annotations:map[string]string{io.kubernetes.container.hash: 1b9298ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2,PodSandboxId:8a08aafe4e1f0db59f7c040ebb0518bd9d0742348c2f42c2819369733436eeab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720051873798818647,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3adc3ff6-282f-4f53-879f-c73d71c76b74,},Annotations:map[string]string{io.kubernetes.container.hash: 6198682d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d,PodSandboxId:53881affa9536be73514c9358585bfd7874283cf7c29c1cf7485180ea6e7f15c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051873712935930,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pplqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b74a8c2-1e91-449d-9be9
-8891459dccbc,},Annotations:map[string]string{io.kubernetes.container.hash: 975ff7d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8,PodSandboxId:81234ffa8223556d0f25bb5ad0a6f4e8e9778d838408b74b738c6b3018ecf93b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051869083460209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccd3511daaf18b1d48ca
e4d95632212,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658,PodSandboxId:69ef20a0f5c69f72498d52dd5d07bbf205d42ea936f853ca20682d4ebcfbeb41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051869078377436,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1f278de836ff491a91e8c8
0936294a,},Annotations:map[string]string{io.kubernetes.container.hash: c3c43c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9,PodSandboxId:38443a29b6ba0e18c77feff5fffee3aa53c03f3b01fd1fe4147ea71583a8e520,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051868974216877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74b1039c6d802b380d3b54865ba5da9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 73cfec4a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e,PodSandboxId:3f9060ae8aaf63933efd5048005576b9117160e64e8bb78c96d688667dca8060,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051868999971961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f4f5d0c28792012b764ca566c3a61
3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10914be6-7041-41f9-af47-2a1d7d6aee7e name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.525174611Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98ae553c-968b-4955-aa86-f8b799240e71 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.525367919Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98ae553c-968b-4955-aa86-f8b799240e71 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.526518122Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93257561-77c6-488a-a6e4-cdd34a8609c4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.527062995Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052679527033628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93257561-77c6-488a-a6e4-cdd34a8609c4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.527809474Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f4a108b-a19a-4200-b9c3-bd98a4efa02a name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.527862993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f4a108b-a19a-4200-b9c3-bd98a4efa02a name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.528187831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a,PodSandboxId:8a08aafe4e1f0db59f7c040ebb0518bd9d0742348c2f42c2819369733436eeab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720051904563717392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adc3ff6-282f-4f53-879f-c73d71c76b74,},Annotations:map[string]string{io.kubernetes.container.hash: 6198682d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d37d3ca0beb9fed8acfd33e6b37d6cf0e5febdf274ca3821f3fce785f41e74b,PodSandboxId:b5aa2a4c02f49e31cfbcb984b3ebc151daa586fd95a1fc6960aa3edae5aea428,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720051884382068388,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b335daca-ead5-42d5-96a3-245d38bd2d1a,},Annotations:map[string]string{io.kubernetes.container.hash: f2e926c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a,PodSandboxId:92c3127464abdb66dbe78c45730fef3a6abd84ef59f3692ad1b8f6ba20def236,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051881474647205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jmq4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9725f92-7635-4111-bf63-66dbef0155b2,},Annotations:map[string]string{io.kubernetes.container.hash: 1b9298ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2,PodSandboxId:8a08aafe4e1f0db59f7c040ebb0518bd9d0742348c2f42c2819369733436eeab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720051873798818647,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3adc3ff6-282f-4f53-879f-c73d71c76b74,},Annotations:map[string]string{io.kubernetes.container.hash: 6198682d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d,PodSandboxId:53881affa9536be73514c9358585bfd7874283cf7c29c1cf7485180ea6e7f15c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051873712935930,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pplqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b74a8c2-1e91-449d-9be9
-8891459dccbc,},Annotations:map[string]string{io.kubernetes.container.hash: 975ff7d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8,PodSandboxId:81234ffa8223556d0f25bb5ad0a6f4e8e9778d838408b74b738c6b3018ecf93b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051869083460209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccd3511daaf18b1d48ca
e4d95632212,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658,PodSandboxId:69ef20a0f5c69f72498d52dd5d07bbf205d42ea936f853ca20682d4ebcfbeb41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051869078377436,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1f278de836ff491a91e8c8
0936294a,},Annotations:map[string]string{io.kubernetes.container.hash: c3c43c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9,PodSandboxId:38443a29b6ba0e18c77feff5fffee3aa53c03f3b01fd1fe4147ea71583a8e520,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051868974216877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74b1039c6d802b380d3b54865ba5da9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 73cfec4a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e,PodSandboxId:3f9060ae8aaf63933efd5048005576b9117160e64e8bb78c96d688667dca8060,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051868999971961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f4f5d0c28792012b764ca566c3a61
3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6f4a108b-a19a-4200-b9c3-bd98a4efa02a name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.566590398Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d7233d75-6cda-4e98-9a81-487185363b8b name=/runtime.v1.RuntimeService/Version
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.566665532Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d7233d75-6cda-4e98-9a81-487185363b8b name=/runtime.v1.RuntimeService/Version
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.568544220Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6989409e-b094-4586-be08-06040bd09df4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.569575109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052679569544181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6989409e-b094-4586-be08-06040bd09df4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.570676380Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb6dba40-3638-426d-9057-e68bd14fff9b name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.570732562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb6dba40-3638-426d-9057-e68bd14fff9b name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:24:39 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:24:39.570918313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a,PodSandboxId:8a08aafe4e1f0db59f7c040ebb0518bd9d0742348c2f42c2819369733436eeab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720051904563717392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adc3ff6-282f-4f53-879f-c73d71c76b74,},Annotations:map[string]string{io.kubernetes.container.hash: 6198682d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d37d3ca0beb9fed8acfd33e6b37d6cf0e5febdf274ca3821f3fce785f41e74b,PodSandboxId:b5aa2a4c02f49e31cfbcb984b3ebc151daa586fd95a1fc6960aa3edae5aea428,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720051884382068388,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b335daca-ead5-42d5-96a3-245d38bd2d1a,},Annotations:map[string]string{io.kubernetes.container.hash: f2e926c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a,PodSandboxId:92c3127464abdb66dbe78c45730fef3a6abd84ef59f3692ad1b8f6ba20def236,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051881474647205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jmq4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9725f92-7635-4111-bf63-66dbef0155b2,},Annotations:map[string]string{io.kubernetes.container.hash: 1b9298ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2,PodSandboxId:8a08aafe4e1f0db59f7c040ebb0518bd9d0742348c2f42c2819369733436eeab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720051873798818647,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3adc3ff6-282f-4f53-879f-c73d71c76b74,},Annotations:map[string]string{io.kubernetes.container.hash: 6198682d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d,PodSandboxId:53881affa9536be73514c9358585bfd7874283cf7c29c1cf7485180ea6e7f15c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051873712935930,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pplqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b74a8c2-1e91-449d-9be9
-8891459dccbc,},Annotations:map[string]string{io.kubernetes.container.hash: 975ff7d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8,PodSandboxId:81234ffa8223556d0f25bb5ad0a6f4e8e9778d838408b74b738c6b3018ecf93b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051869083460209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccd3511daaf18b1d48ca
e4d95632212,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658,PodSandboxId:69ef20a0f5c69f72498d52dd5d07bbf205d42ea936f853ca20682d4ebcfbeb41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051869078377436,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1f278de836ff491a91e8c8
0936294a,},Annotations:map[string]string{io.kubernetes.container.hash: c3c43c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9,PodSandboxId:38443a29b6ba0e18c77feff5fffee3aa53c03f3b01fd1fe4147ea71583a8e520,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051868974216877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74b1039c6d802b380d3b54865ba5da9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 73cfec4a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e,PodSandboxId:3f9060ae8aaf63933efd5048005576b9117160e64e8bb78c96d688667dca8060,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051868999971961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f4f5d0c28792012b764ca566c3a61
3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb6dba40-3638-426d-9057-e68bd14fff9b name=/runtime.v1.RuntimeService/ListContainers
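	The Version and ListContainers entries above are CRI-O's debug-level logging of ordinary CRI gRPC calls made against its runtime socket roughly every 40ms. As an aside, the following is a minimal Go sketch of an equivalent client poll; it assumes the standard k8s.io/cri-api v1 client and the unix:///var/run/crio/crio.sock endpoint advertised in the node's cri-socket annotation further down, and is illustrative only, not the exact code path that produced these log lines.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Endpoint taken from the node's cri-socket annotation (assumption: run locally on the node).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Mirrors the /runtime.v1.RuntimeService/Version request/response pairs in the log.
		ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(ver.RuntimeName, ver.RuntimeVersion) // e.g. cri-o 1.29.1

		// An empty filter is what makes CRI-O log "No filters were applied,
		// returning full container list" before each ListContainersResponse.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}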
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	916f2ecfce3c5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   8a08aafe4e1f0       storage-provisioner
	4d37d3ca0beb9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   b5aa2a4c02f49       busybox
	7dc19c0e5a3a0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   92c3127464abd       coredns-7db6d8ff4d-jmq4s
	ee9747ce58de5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   8a08aafe4e1f0       storage-provisioner
	54ecbdc0a4753       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      13 minutes ago      Running             kube-proxy                1                   53881affa9536       kube-proxy-pplqq
	06f36aa92a09f       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      13 minutes ago      Running             kube-scheduler            1                   81234ffa82235       kube-scheduler-default-k8s-diff-port-995404
	f69caa2d9d0a4       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      13 minutes ago      Running             kube-apiserver            1                   69ef20a0f5c69       kube-apiserver-default-k8s-diff-port-995404
	13a8615c20433       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      13 minutes ago      Running             kube-controller-manager   1                   3f9060ae8aaf6       kube-controller-manager-default-k8s-diff-port-995404
	5629c8085daeb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   38443a29b6ba0       etcd-default-k8s-diff-port-995404
	
	
	==> coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46260 - 34624 "HINFO IN 4350776552710244963.6388471656172094076. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010614342s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-995404
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-995404
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=default-k8s-diff-port-995404
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_04T00_03_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Jul 2024 00:03:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-995404
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Jul 2024 00:24:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Jul 2024 00:21:57 +0000   Thu, 04 Jul 2024 00:03:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Jul 2024 00:21:57 +0000   Thu, 04 Jul 2024 00:03:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Jul 2024 00:21:57 +0000   Thu, 04 Jul 2024 00:03:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Jul 2024 00:21:57 +0000   Thu, 04 Jul 2024 00:11:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.164
	  Hostname:    default-k8s-diff-port-995404
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f65ad4585c17430b8e254e05e9233a59
	  System UUID:                f65ad458-5c17-430b-8e25-4e05e9233a59
	  Boot ID:                    ce7ef7a0-7835-4022-9e53-76168d47dc81
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7db6d8ff4d-jmq4s                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-995404                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-995404             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-995404    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-pplqq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-995404             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-569cc877fc-v8qw2                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
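	For reference, the request percentages above follow directly from the node's allocatable resources: 850m of CPU requested against 2 allocatable CPUs is 850/2000 ≈ 42%, and 370Mi of memory requested against 2164184Ki allocatable is 378880/2164184 ≈ 17% (the 170Mi limit is ≈ 8%).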
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-995404 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-995404 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-995404 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-995404 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-995404 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-995404 status is now: NodeHasSufficientPID
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-995404 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-995404 event: Registered Node default-k8s-diff-port-995404 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-995404 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-995404 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-995404 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-995404 event: Registered Node default-k8s-diff-port-995404 in Controller
	
	
	==> dmesg <==
	[Jul 4 00:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053998] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046087] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.011315] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.511832] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.629618] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.197135] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.059204] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051096] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.222028] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.130959] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[Jul 4 00:11] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +4.845430] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.073494] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.227420] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +5.610717] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.973525] systemd-fstab-generator[1553]: Ignoring "noauto" option for root device
	[  +3.752827] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.088814] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] <==
	{"level":"info","ts":"2024-07-04T00:11:09.473681Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d41e51b80202c3fb","local-member-id":"80a63a57d726c697","added-peer-id":"80a63a57d726c697","added-peer-peer-urls":["https://192.168.50.164:2380"]}
	{"level":"info","ts":"2024-07-04T00:11:09.473854Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d41e51b80202c3fb","local-member-id":"80a63a57d726c697","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-04T00:11:09.473908Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-04T00:11:09.483127Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.164:2380"}
	{"level":"info","ts":"2024-07-04T00:11:09.483164Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.164:2380"}
	{"level":"info","ts":"2024-07-04T00:11:09.483074Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-04T00:11:09.493334Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"80a63a57d726c697","initial-advertise-peer-urls":["https://192.168.50.164:2380"],"listen-peer-urls":["https://192.168.50.164:2380"],"advertise-client-urls":["https://192.168.50.164:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.164:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-04T00:11:09.493395Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-04T00:11:10.851496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-04T00:11:10.851569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-04T00:11:10.851608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 received MsgPreVoteResp from 80a63a57d726c697 at term 2"}
	{"level":"info","ts":"2024-07-04T00:11:10.851623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 became candidate at term 3"}
	{"level":"info","ts":"2024-07-04T00:11:10.851631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 received MsgVoteResp from 80a63a57d726c697 at term 3"}
	{"level":"info","ts":"2024-07-04T00:11:10.851642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 became leader at term 3"}
	{"level":"info","ts":"2024-07-04T00:11:10.851656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 80a63a57d726c697 elected leader 80a63a57d726c697 at term 3"}
	{"level":"info","ts":"2024-07-04T00:11:10.85356Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-04T00:11:10.855623Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-04T00:11:10.853512Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"80a63a57d726c697","local-member-attributes":"{Name:default-k8s-diff-port-995404 ClientURLs:[https://192.168.50.164:2379]}","request-path":"/0/members/80a63a57d726c697/attributes","cluster-id":"d41e51b80202c3fb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-04T00:11:10.859362Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-04T00:11:10.867705Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.164:2379"}
	{"level":"info","ts":"2024-07-04T00:11:10.867795Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-04T00:11:10.867805Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-04T00:21:10.904615Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":882}
	{"level":"info","ts":"2024-07-04T00:21:10.915902Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":882,"took":"10.949977ms","hash":3630692785,"current-db-size-bytes":2826240,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2826240,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-07-04T00:21:10.915997Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3630692785,"revision":882,"compact-revision":-1}
	
	
	==> kernel <==
	 00:24:39 up 13 min,  0 users,  load average: 0.34, 0.23, 0.13
	Linux default-k8s-diff-port-995404 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] <==
	I0704 00:19:13.408911       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:21:12.409494       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:21:12.409609       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0704 00:21:13.410394       1 handler_proxy.go:93] no RequestInfo found in the context
	W0704 00:21:13.410425       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:21:13.410639       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0704 00:21:13.410667       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0704 00:21:13.410732       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0704 00:21:13.411965       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:22:13.411769       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:22:13.412179       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0704 00:22:13.412219       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:22:13.412268       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:22:13.412367       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0704 00:22:13.414141       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:24:13.413342       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:24:13.413439       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0704 00:24:13.413452       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:24:13.414255       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:24:13.414342       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0704 00:24:13.414642       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] <==
	I0704 00:18:56.141706       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:19:25.671799       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:19:26.149625       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:19:55.676823       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:19:56.159782       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:20:25.682689       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:20:26.168280       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:20:55.687552       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:20:56.176479       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:21:25.692845       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:21:26.184300       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:21:55.697992       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:21:56.193884       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:22:25.706135       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:22:26.202912       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0704 00:22:31.351608       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="2.579612ms"
	I0704 00:22:43.354032       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="111.135µs"
	E0704 00:22:55.710761       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:22:56.211910       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:23:25.715475       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:23:26.220661       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:23:55.722681       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:23:56.232439       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:24:25.729532       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:24:26.243219       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] <==
	I0704 00:11:13.979690       1 server_linux.go:69] "Using iptables proxy"
	I0704 00:11:13.997406       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.164"]
	I0704 00:11:14.065614       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0704 00:11:14.065658       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0704 00:11:14.065680       1 server_linux.go:165] "Using iptables Proxier"
	I0704 00:11:14.077726       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0704 00:11:14.078165       1 server.go:872] "Version info" version="v1.30.2"
	I0704 00:11:14.078381       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0704 00:11:14.079715       1 config.go:192] "Starting service config controller"
	I0704 00:11:14.079850       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0704 00:11:14.079938       1 config.go:101] "Starting endpoint slice config controller"
	I0704 00:11:14.080072       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0704 00:11:14.080758       1 config.go:319] "Starting node config controller"
	I0704 00:11:14.082560       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0704 00:11:14.180236       1 shared_informer.go:320] Caches are synced for service config
	I0704 00:11:14.180365       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0704 00:11:14.183245       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] <==
	I0704 00:11:10.366932       1 serving.go:380] Generated self-signed cert in-memory
	W0704 00:11:12.397921       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0704 00:11:12.397969       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0704 00:11:12.397983       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0704 00:11:12.397990       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0704 00:11:12.431506       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0704 00:11:12.431554       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0704 00:11:12.437030       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0704 00:11:12.437214       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0704 00:11:12.437213       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0704 00:11:12.437473       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0704 00:11:12.539500       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 04 00:22:08 default-k8s-diff-port-995404 kubelet[945]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 04 00:22:16 default-k8s-diff-port-995404 kubelet[945]: E0704 00:22:16.355289     945 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 04 00:22:16 default-k8s-diff-port-995404 kubelet[945]: E0704 00:22:16.355382     945 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 04 00:22:16 default-k8s-diff-port-995404 kubelet[945]: E0704 00:22:16.355608     945 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bd2s7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdin
Once:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-v8qw2_kube-system(d6a67fb7-5004-4c93-9023-fc470f786ae9): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 04 00:22:16 default-k8s-diff-port-995404 kubelet[945]: E0704 00:22:16.355651     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:22:31 default-k8s-diff-port-995404 kubelet[945]: E0704 00:22:31.332032     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:22:43 default-k8s-diff-port-995404 kubelet[945]: E0704 00:22:43.330871     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:22:55 default-k8s-diff-port-995404 kubelet[945]: E0704 00:22:55.331424     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:23:08 default-k8s-diff-port-995404 kubelet[945]: E0704 00:23:08.351343     945 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 04 00:23:08 default-k8s-diff-port-995404 kubelet[945]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 04 00:23:08 default-k8s-diff-port-995404 kubelet[945]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 04 00:23:08 default-k8s-diff-port-995404 kubelet[945]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 04 00:23:08 default-k8s-diff-port-995404 kubelet[945]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 04 00:23:09 default-k8s-diff-port-995404 kubelet[945]: E0704 00:23:09.334360     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:23:22 default-k8s-diff-port-995404 kubelet[945]: E0704 00:23:22.331328     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:23:34 default-k8s-diff-port-995404 kubelet[945]: E0704 00:23:34.331468     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:23:47 default-k8s-diff-port-995404 kubelet[945]: E0704 00:23:47.331681     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:23:58 default-k8s-diff-port-995404 kubelet[945]: E0704 00:23:58.332207     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:24:08 default-k8s-diff-port-995404 kubelet[945]: E0704 00:24:08.352243     945 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 04 00:24:08 default-k8s-diff-port-995404 kubelet[945]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 04 00:24:08 default-k8s-diff-port-995404 kubelet[945]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 04 00:24:08 default-k8s-diff-port-995404 kubelet[945]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 04 00:24:08 default-k8s-diff-port-995404 kubelet[945]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 04 00:24:13 default-k8s-diff-port-995404 kubelet[945]: E0704 00:24:13.331236     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:24:28 default-k8s-diff-port-995404 kubelet[945]: E0704 00:24:28.332545     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	
	
	==> storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] <==
	I0704 00:11:44.671386       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0704 00:11:44.689427       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0704 00:11:44.689523       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0704 00:12:02.101222       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0704 00:12:02.101613       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-995404_735c64ce-9626-410a-8e95-4f9e2636bed0!
	I0704 00:12:02.101700       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b744336-4986-4a58-8c08-ba78b534b80d", APIVersion:"v1", ResourceVersion:"662", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-995404_735c64ce-9626-410a-8e95-4f9e2636bed0 became leader
	I0704 00:12:02.202653       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-995404_735c64ce-9626-410a-8e95-4f9e2636bed0!
	
	
	==> storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] <==
	I0704 00:11:13.970856       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0704 00:11:43.973553       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-995404 -n default-k8s-diff-port-995404
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-995404 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-v8qw2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-995404 describe pod metrics-server-569cc877fc-v8qw2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-995404 describe pod metrics-server-569cc877fc-v8qw2: exit status 1 (68.478164ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-v8qw2" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-995404 describe pod metrics-server-569cc877fc-v8qw2: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.46s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.53s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-317739 -n no-preload-317739
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-04 00:25:58.841688543 +0000 UTC m=+5951.774926306
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-317739 -n no-preload-317739
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-317739 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-317739 logs -n 25: (2.351310301s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-768841 -- sudo                         | cert-options-768841          | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:00 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-768841                                 | cert-options-768841          | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:00 UTC |
	| start   | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-652205                           | kubernetes-upgrade-652205    | jenkins | v1.33.1 | 04 Jul 24 00:01 UTC | 04 Jul 24 00:01 UTC |
	| start   | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:01 UTC | 04 Jul 24 00:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-979438                              | cert-expiration-979438       | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-979438                              | cert-expiration-979438       | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-029653 | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | disable-driver-mounts-029653                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:04 UTC |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-317739             | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-687975            | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:03 UTC | 04 Jul 24 00:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-995404  | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC | 04 Jul 24 00:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC |                     |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-979033        | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-317739                  | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC | 04 Jul 24 00:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-687975                 | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC | 04 Jul 24 00:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-979033                              | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC | 04 Jul 24 00:06 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-979033             | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC | 04 Jul 24 00:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-979033                              | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-995404       | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:07 UTC | 04 Jul 24 00:15 UTC |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/04 00:07:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0704 00:07:02.474140   62905 out.go:291] Setting OutFile to fd 1 ...
	I0704 00:07:02.474416   62905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:07:02.474427   62905 out.go:304] Setting ErrFile to fd 2...
	I0704 00:07:02.474431   62905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:07:02.474642   62905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0704 00:07:02.475219   62905 out.go:298] Setting JSON to false
	I0704 00:07:02.476307   62905 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6562,"bootTime":1720045060,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0704 00:07:02.476381   62905 start.go:139] virtualization: kvm guest
	I0704 00:07:02.478637   62905 out.go:177] * [default-k8s-diff-port-995404] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0704 00:07:02.480018   62905 notify.go:220] Checking for updates...
	I0704 00:07:02.480039   62905 out.go:177]   - MINIKUBE_LOCATION=18998
	I0704 00:07:02.481260   62905 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0704 00:07:02.482587   62905 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:07:02.483820   62905 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0704 00:07:02.484969   62905 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0704 00:07:02.486122   62905 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0704 00:07:02.487811   62905 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:07:02.488453   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:07:02.488538   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:07:02.503924   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0704 00:07:02.504316   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:07:02.504904   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:07:02.504924   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:07:02.505253   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:07:02.505457   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:07:02.505724   62905 driver.go:392] Setting default libvirt URI to qemu:///system
	I0704 00:07:02.506039   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:07:02.506081   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:07:02.521645   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0704 00:07:02.522115   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:07:02.522596   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:07:02.522618   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:07:02.522945   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:07:02.523144   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:07:02.557351   62905 out.go:177] * Using the kvm2 driver based on existing profile
	I0704 00:07:02.558600   62905 start.go:297] selected driver: kvm2
	I0704 00:07:02.558620   62905 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:07:02.558762   62905 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0704 00:07:02.559468   62905 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:07:02.559562   62905 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0704 00:07:02.575228   62905 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0704 00:07:02.575603   62905 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:07:02.575680   62905 cni.go:84] Creating CNI manager for ""
	I0704 00:07:02.575697   62905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:07:02.575749   62905 start.go:340] cluster config:
	{Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:07:02.575887   62905 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:07:02.577884   62905 out.go:177] * Starting "default-k8s-diff-port-995404" primary control-plane node in "default-k8s-diff-port-995404" cluster
	I0704 00:07:01.500168   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:02.579179   62905 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:07:02.579227   62905 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0704 00:07:02.579238   62905 cache.go:56] Caching tarball of preloaded images
	I0704 00:07:02.579331   62905 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0704 00:07:02.579344   62905 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0704 00:07:02.579446   62905 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/config.json ...
	I0704 00:07:02.579752   62905 start.go:360] acquireMachinesLock for default-k8s-diff-port-995404: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0704 00:07:07.580107   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:10.652249   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:16.732106   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:19.804162   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:25.884146   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:28.956241   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:35.036158   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:38.108118   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:44.188129   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:47.260270   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:53.340147   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:56.412123   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:02.492156   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:05.564174   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:11.644195   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:14.716226   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:20.796193   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:23.868215   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:29.948219   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:33.020164   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:39.100138   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:42.172138   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:48.252157   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:51.324205   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:57.404167   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:00.476183   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:06.556184   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:09.628167   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:15.708158   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:18.780202   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:24.860209   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:27.932273   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:34.012145   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:37.084155   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:43.164171   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:46.236155   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:52.316187   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:55.388138   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:58.392192   62327 start.go:364] duration metric: took 4m4.42362175s to acquireMachinesLock for "embed-certs-687975"
	I0704 00:09:58.392250   62327 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:09:58.392266   62327 fix.go:54] fixHost starting: 
	I0704 00:09:58.392607   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:09:58.392633   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:09:58.408783   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45683
	I0704 00:09:58.409328   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:09:58.409898   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:09:58.409918   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:09:58.410234   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:09:58.410438   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:09:58.410602   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:09:58.412175   62327 fix.go:112] recreateIfNeeded on embed-certs-687975: state=Stopped err=<nil>
	I0704 00:09:58.412200   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	W0704 00:09:58.412361   62327 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:09:58.414467   62327 out.go:177] * Restarting existing kvm2 VM for "embed-certs-687975" ...
	I0704 00:09:58.415958   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Start
	I0704 00:09:58.416159   62327 main.go:141] libmachine: (embed-certs-687975) Ensuring networks are active...
	I0704 00:09:58.417105   62327 main.go:141] libmachine: (embed-certs-687975) Ensuring network default is active
	I0704 00:09:58.417440   62327 main.go:141] libmachine: (embed-certs-687975) Ensuring network mk-embed-certs-687975 is active
	I0704 00:09:58.417879   62327 main.go:141] libmachine: (embed-certs-687975) Getting domain xml...
	I0704 00:09:58.418765   62327 main.go:141] libmachine: (embed-certs-687975) Creating domain...
	I0704 00:09:58.389743   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:09:58.389787   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:09:58.390105   62043 buildroot.go:166] provisioning hostname "no-preload-317739"
	I0704 00:09:58.390132   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:09:58.390388   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:09:58.392051   62043 machine.go:97] duration metric: took 4m37.421604249s to provisionDockerMachine
	I0704 00:09:58.392103   62043 fix.go:56] duration metric: took 4m37.444018711s for fixHost
	I0704 00:09:58.392111   62043 start.go:83] releasing machines lock for "no-preload-317739", held for 4m37.444044667s
	W0704 00:09:58.392131   62043 start.go:713] error starting host: provision: host is not running
	W0704 00:09:58.392245   62043 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0704 00:09:58.392263   62043 start.go:728] Will try again in 5 seconds ...
	I0704 00:09:59.657066   62327 main.go:141] libmachine: (embed-certs-687975) Waiting to get IP...
	I0704 00:09:59.657930   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:09:59.658398   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:09:59.658456   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:09:59.658368   63531 retry.go:31] will retry after 267.829987ms: waiting for machine to come up
	I0704 00:09:59.928142   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:09:59.928694   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:09:59.928720   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:09:59.928646   63531 retry.go:31] will retry after 240.308314ms: waiting for machine to come up
	I0704 00:10:00.170098   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:00.170541   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:00.170571   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:00.170481   63531 retry.go:31] will retry after 424.462623ms: waiting for machine to come up
	I0704 00:10:00.596288   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:00.596726   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:00.596755   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:00.596671   63531 retry.go:31] will retry after 450.228437ms: waiting for machine to come up
	I0704 00:10:01.048174   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:01.048731   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:01.048758   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:01.048689   63531 retry.go:31] will retry after 583.591642ms: waiting for machine to come up
	I0704 00:10:01.633432   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:01.633773   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:01.633806   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:01.633721   63531 retry.go:31] will retry after 789.480552ms: waiting for machine to come up
	I0704 00:10:02.424987   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:02.425388   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:02.425424   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:02.425329   63531 retry.go:31] will retry after 764.760669ms: waiting for machine to come up
	I0704 00:10:03.191570   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:03.191924   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:03.191953   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:03.191859   63531 retry.go:31] will retry after 1.415422425s: waiting for machine to come up
	I0704 00:10:03.392486   62043 start.go:360] acquireMachinesLock for no-preload-317739: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0704 00:10:04.608804   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:04.609306   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:04.609336   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:04.609244   63531 retry.go:31] will retry after 1.426962337s: waiting for machine to come up
	I0704 00:10:06.038152   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:06.038630   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:06.038685   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:06.038604   63531 retry.go:31] will retry after 1.511071665s: waiting for machine to come up
	I0704 00:10:07.551435   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:07.551977   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:07.552000   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:07.551934   63531 retry.go:31] will retry after 2.275490025s: waiting for machine to come up
	I0704 00:10:09.829070   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:09.829545   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:09.829577   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:09.829480   63531 retry.go:31] will retry after 3.272884116s: waiting for machine to come up
	I0704 00:10:13.103857   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:13.104320   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:13.104356   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:13.104267   63531 retry.go:31] will retry after 4.532823906s: waiting for machine to come up
	I0704 00:10:17.642356   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.642900   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has current primary IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.642923   62327 main.go:141] libmachine: (embed-certs-687975) Found IP for machine: 192.168.39.213
	I0704 00:10:17.642935   62327 main.go:141] libmachine: (embed-certs-687975) Reserving static IP address...
	I0704 00:10:17.643368   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "embed-certs-687975", mac: "52:54:00:ee:64:73", ip: "192.168.39.213"} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.643397   62327 main.go:141] libmachine: (embed-certs-687975) DBG | skip adding static IP to network mk-embed-certs-687975 - found existing host DHCP lease matching {name: "embed-certs-687975", mac: "52:54:00:ee:64:73", ip: "192.168.39.213"}
	I0704 00:10:17.643408   62327 main.go:141] libmachine: (embed-certs-687975) Reserved static IP address: 192.168.39.213
	I0704 00:10:17.643421   62327 main.go:141] libmachine: (embed-certs-687975) Waiting for SSH to be available...
	I0704 00:10:17.643433   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Getting to WaitForSSH function...
	I0704 00:10:17.645723   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.646019   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.646047   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.646176   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Using SSH client type: external
	I0704 00:10:17.646199   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa (-rw-------)
	I0704 00:10:17.646264   62327 main.go:141] libmachine: (embed-certs-687975) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:10:17.646288   62327 main.go:141] libmachine: (embed-certs-687975) DBG | About to run SSH command:
	I0704 00:10:17.646306   62327 main.go:141] libmachine: (embed-certs-687975) DBG | exit 0
	I0704 00:10:17.772683   62327 main.go:141] libmachine: (embed-certs-687975) DBG | SSH cmd err, output: <nil>: 
	I0704 00:10:17.773080   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetConfigRaw
	I0704 00:10:17.773695   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:17.776766   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.777155   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.777197   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.777469   62327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/config.json ...
	I0704 00:10:17.777698   62327 machine.go:94] provisionDockerMachine start ...
	I0704 00:10:17.777721   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:17.777970   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:17.780304   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.780636   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.780667   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.780800   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:17.780985   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.781136   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.781354   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:17.781533   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:17.781729   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:17.781740   62327 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:10:17.884677   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:10:17.884711   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetMachineName
	I0704 00:10:17.884940   62327 buildroot.go:166] provisioning hostname "embed-certs-687975"
	I0704 00:10:17.884967   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetMachineName
	I0704 00:10:17.885180   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:17.887980   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.888394   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.888417   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.888502   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:17.888758   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.888960   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.889102   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:17.889335   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:17.889538   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:17.889557   62327 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-687975 && echo "embed-certs-687975" | sudo tee /etc/hostname
	I0704 00:10:18.006597   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-687975
	
	I0704 00:10:18.006624   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.009477   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.009772   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.009805   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.009942   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.010148   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.010315   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.010485   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.010664   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:18.010821   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:18.010836   62327 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-687975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-687975/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-687975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:10:18.121310   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:10:18.121350   62327 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:10:18.121374   62327 buildroot.go:174] setting up certificates
	I0704 00:10:18.121395   62327 provision.go:84] configureAuth start
	I0704 00:10:18.121411   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetMachineName
	I0704 00:10:18.121701   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:18.124118   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.124499   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.124528   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.124646   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.126489   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.126778   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.126802   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.126913   62327 provision.go:143] copyHostCerts
	I0704 00:10:18.126987   62327 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:10:18.127002   62327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:10:18.127090   62327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:10:18.127222   62327 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:10:18.127232   62327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:10:18.127272   62327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:10:18.127348   62327 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:10:18.127357   62327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:10:18.127388   62327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:10:18.127461   62327 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.embed-certs-687975 san=[127.0.0.1 192.168.39.213 embed-certs-687975 localhost minikube]
	I0704 00:10:18.451857   62327 provision.go:177] copyRemoteCerts
	I0704 00:10:18.451947   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:10:18.451980   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.454696   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.455051   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.455076   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.455301   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.455512   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.455675   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.455798   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:18.540053   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:10:18.566392   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0704 00:10:18.593268   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0704 00:10:18.619051   62327 provision.go:87] duration metric: took 497.642815ms to configureAuth
	I0704 00:10:18.619081   62327 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:10:18.619299   62327 config.go:182] Loaded profile config "embed-certs-687975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:10:18.619386   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.621773   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.622057   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.622087   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.622249   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.622475   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.622632   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.622760   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.622971   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:18.623143   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:18.623160   62327 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:10:19.141009   62670 start.go:364] duration metric: took 3m45.774576164s to acquireMachinesLock for "old-k8s-version-979033"
	I0704 00:10:19.141068   62670 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:10:19.141115   62670 fix.go:54] fixHost starting: 
	I0704 00:10:19.141561   62670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:19.141591   62670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:19.159844   62670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43373
	I0704 00:10:19.160353   62670 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:19.160945   62670 main.go:141] libmachine: Using API Version  1
	I0704 00:10:19.160971   62670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:19.161347   62670 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:19.161640   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:19.161799   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetState
	I0704 00:10:19.163575   62670 fix.go:112] recreateIfNeeded on old-k8s-version-979033: state=Stopped err=<nil>
	I0704 00:10:19.163597   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	W0704 00:10:19.163753   62670 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:10:19.165906   62670 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-979033" ...
	I0704 00:10:18.904225   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:10:18.904256   62327 machine.go:97] duration metric: took 1.126543823s to provisionDockerMachine
	I0704 00:10:18.904269   62327 start.go:293] postStartSetup for "embed-certs-687975" (driver="kvm2")
	I0704 00:10:18.904283   62327 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:10:18.904304   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:18.904626   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:10:18.904652   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.907391   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.907864   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.907915   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.908206   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.908453   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.908623   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.908768   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:18.991583   62327 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:10:18.996145   62327 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:10:18.996187   62327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:10:18.996255   62327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:10:18.996341   62327 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:10:18.996443   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:10:19.006978   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:19.033605   62327 start.go:296] duration metric: took 129.322677ms for postStartSetup
	I0704 00:10:19.033643   62327 fix.go:56] duration metric: took 20.641387402s for fixHost
	I0704 00:10:19.033663   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:19.036302   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.036813   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.036877   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.036919   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:19.037115   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.037307   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.037488   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:19.037687   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:19.037888   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:19.037905   62327 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:10:19.140855   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051819.116387913
	
	I0704 00:10:19.140878   62327 fix.go:216] guest clock: 1720051819.116387913
	I0704 00:10:19.140885   62327 fix.go:229] Guest: 2024-07-04 00:10:19.116387913 +0000 UTC Remote: 2024-07-04 00:10:19.033646932 +0000 UTC m=+265.206951926 (delta=82.740981ms)
	I0704 00:10:19.140914   62327 fix.go:200] guest clock delta is within tolerance: 82.740981ms
	I0704 00:10:19.140920   62327 start.go:83] releasing machines lock for "embed-certs-687975", held for 20.748686488s
	I0704 00:10:19.140951   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.141280   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:19.144343   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.144774   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.144802   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.144975   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.145590   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.145810   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.145896   62327 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:10:19.145941   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:19.146048   62327 ssh_runner.go:195] Run: cat /version.json
	I0704 00:10:19.146074   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:19.148955   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.148977   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.149312   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.149339   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.149470   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.149493   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.149555   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:19.149755   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.149831   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:19.149921   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:19.150094   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.150096   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:19.150293   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:19.150459   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:19.250910   62327 ssh_runner.go:195] Run: systemctl --version
	I0704 00:10:19.257541   62327 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:10:19.413446   62327 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:10:19.419871   62327 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:10:19.419985   62327 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:10:19.439141   62327 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:10:19.439171   62327 start.go:494] detecting cgroup driver to use...
	I0704 00:10:19.439253   62327 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:10:19.457474   62327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:10:19.479279   62327 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:10:19.479353   62327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:10:19.498771   62327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:10:19.513968   62327 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:10:19.640950   62327 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:10:19.817181   62327 docker.go:233] disabling docker service ...
	I0704 00:10:19.817248   62327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:10:19.838524   62327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:10:19.855479   62327 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:10:19.976564   62327 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:10:20.106140   62327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:10:20.121152   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:10:20.143893   62327 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:10:20.143965   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.156806   62327 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:10:20.156892   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.168660   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.180592   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.192151   62327 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:10:20.204202   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.215502   62327 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.235355   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.246834   62327 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:10:20.264718   62327 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:10:20.264786   62327 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:10:20.280133   62327 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:10:20.291521   62327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:20.416530   62327 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:10:20.567852   62327 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:10:20.567952   62327 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:10:20.572992   62327 start.go:562] Will wait 60s for crictl version
	I0704 00:10:20.573052   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:10:20.577295   62327 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:10:20.617746   62327 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:10:20.617840   62327 ssh_runner.go:195] Run: crio --version
	I0704 00:10:20.648158   62327 ssh_runner.go:195] Run: crio --version
	I0704 00:10:20.682039   62327 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:10:19.167360   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .Start
	I0704 00:10:19.167575   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring networks are active...
	I0704 00:10:19.168591   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring network default is active
	I0704 00:10:19.169064   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring network mk-old-k8s-version-979033 is active
	I0704 00:10:19.169488   62670 main.go:141] libmachine: (old-k8s-version-979033) Getting domain xml...
	I0704 00:10:19.170309   62670 main.go:141] libmachine: (old-k8s-version-979033) Creating domain...
	I0704 00:10:20.487278   62670 main.go:141] libmachine: (old-k8s-version-979033) Waiting to get IP...
	I0704 00:10:20.488195   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.488679   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.488751   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.488643   63677 retry.go:31] will retry after 227.362639ms: waiting for machine to come up
	I0704 00:10:20.718322   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.718794   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.718820   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.718766   63677 retry.go:31] will retry after 266.291784ms: waiting for machine to come up
	I0704 00:10:20.986238   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.986779   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.986805   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.986726   63677 retry.go:31] will retry after 308.137887ms: waiting for machine to come up
	I0704 00:10:21.296450   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:21.297052   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:21.297085   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:21.297001   63677 retry.go:31] will retry after 400.976495ms: waiting for machine to come up
	I0704 00:10:21.699758   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:21.700266   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:21.700299   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:21.700227   63677 retry.go:31] will retry after 464.329709ms: waiting for machine to come up
	I0704 00:10:22.165905   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:22.166452   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:22.166482   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:22.166393   63677 retry.go:31] will retry after 652.357119ms: waiting for machine to come up
	I0704 00:10:22.820302   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:22.820777   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:22.820800   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:22.820725   63677 retry.go:31] will retry after 835.974316ms: waiting for machine to come up
	I0704 00:10:20.683820   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:20.686663   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:20.687040   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:20.687070   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:20.687312   62327 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0704 00:10:20.691953   62327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:20.705149   62327 kubeadm.go:877] updating cluster {Name:embed-certs-687975 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-687975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:10:20.705368   62327 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:10:20.705433   62327 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:20.748549   62327 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:10:20.748613   62327 ssh_runner.go:195] Run: which lz4
	I0704 00:10:20.752991   62327 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0704 00:10:20.757764   62327 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:10:20.757810   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0704 00:10:22.395918   62327 crio.go:462] duration metric: took 1.642974021s to copy over tarball
	I0704 00:10:22.396029   62327 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:10:23.658976   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:23.659482   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:23.659509   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:23.659432   63677 retry.go:31] will retry after 1.244693887s: waiting for machine to come up
	I0704 00:10:24.906359   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:24.906769   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:24.906801   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:24.906733   63677 retry.go:31] will retry after 1.212336933s: waiting for machine to come up
	I0704 00:10:26.121130   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:26.121655   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:26.121684   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:26.121599   63677 retry.go:31] will retry after 1.622791006s: waiting for machine to come up
	I0704 00:10:27.745848   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:27.746399   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:27.746427   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:27.746349   63677 retry.go:31] will retry after 2.596558781s: waiting for machine to come up
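The retry.go lines above show the libmachine KVM driver polling libvirt's DHCP leases for old-k8s-version-979033 with a growing, jittered delay until the domain reports an IP. A minimal sketch of that wait-with-backoff pattern in Go (the function names and backoff constants are illustrative, not minikube's actual code):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff polls fn until it succeeds, sleeping a little longer
	// (plus some jitter) after each failed attempt -- the same shape as the
	// retry.go "will retry after ..." lines in the log above.
	func retryWithBackoff(fn func() error, maxAttempts int) error {
		delay := 500 * time.Millisecond
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err := fn(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay / 2)))
			fmt.Printf("attempt %d failed, will retry after %v\n", attempt, delay+jitter)
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return errors.New("gave up waiting for the machine to come up")
	}

	func main() {
		tries := 0
		// Stand-in for "look up the domain's IP in the libvirt DHCP leases".
		lookupIP := func() error {
			tries++
			if tries < 3 {
				return errors.New("no IP yet")
			}
			return nil
		}
		if err := retryWithBackoff(lookupIP, 10); err != nil {
			fmt.Println(err)
		}
	}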
	I0704 00:10:24.757599   62327 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.3615352s)
	I0704 00:10:24.757639   62327 crio.go:469] duration metric: took 2.361688123s to extract the tarball
	I0704 00:10:24.757650   62327 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:10:24.796023   62327 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:24.842665   62327 crio.go:514] all images are preloaded for cri-o runtime.
	I0704 00:10:24.842691   62327 cache_images.go:84] Images are preloaded, skipping loading
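crio.go decides whether the preload tarball needs to be copied by listing images through crictl and looking for the expected kube-apiserver tag (the "couldn't find preloaded image" line earlier, and the "all images are preloaded" line here after extraction). A rough sketch of that check, assuming the simplification of a plain substring search instead of minikube's real JSON parsing:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imagesPreloaded shells out to crictl and checks whether the expected
	// kube-apiserver image tag shows up in the output. This is a simplified
	// stand-in for the real check in crio.go.
	func imagesPreloaded(kubernetesVersion string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").CombinedOutput()
		if err != nil {
			return false, fmt.Errorf("crictl images: %v: %s", err, out)
		}
		want := "registry.k8s.io/kube-apiserver:" + kubernetesVersion
		return strings.Contains(string(out), want), nil
	}

	func main() {
		ok, err := imagesPreloaded("v1.30.2")
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		if ok {
			fmt.Println("all images are preloaded for cri-o runtime")
		} else {
			fmt.Println("assuming images are not preloaded; will copy the preload tarball")
		}
	}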
	I0704 00:10:24.842699   62327 kubeadm.go:928] updating node { 192.168.39.213 8443 v1.30.2 crio true true} ...
	I0704 00:10:24.842805   62327 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-687975 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-687975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:10:24.842891   62327 ssh_runner.go:195] Run: crio config
	I0704 00:10:24.892918   62327 cni.go:84] Creating CNI manager for ""
	I0704 00:10:24.892952   62327 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:10:24.892979   62327 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:10:24.893021   62327 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.213 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-687975 NodeName:embed-certs-687975 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:10:24.893288   62327 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-687975"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:10:24.893372   62327 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:10:24.905019   62327 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:10:24.905092   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:10:24.919465   62327 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0704 00:10:24.942754   62327 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:10:24.965089   62327 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0704 00:10:24.988121   62327 ssh_runner.go:195] Run: grep 192.168.39.213	control-plane.minikube.internal$ /etc/hosts
	I0704 00:10:24.993425   62327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:25.006830   62327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:25.145124   62327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:10:25.164000   62327 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975 for IP: 192.168.39.213
	I0704 00:10:25.164021   62327 certs.go:194] generating shared ca certs ...
	I0704 00:10:25.164036   62327 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:25.164285   62327 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:10:25.164361   62327 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:10:25.164375   62327 certs.go:256] generating profile certs ...
	I0704 00:10:25.164522   62327 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/client.key
	I0704 00:10:25.164598   62327 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/apiserver.key.c5f2d6ca
	I0704 00:10:25.164657   62327 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/proxy-client.key
	I0704 00:10:25.164816   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:10:25.164875   62327 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:10:25.164889   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:10:25.164918   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:10:25.164949   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:10:25.164983   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:10:25.165049   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:25.165801   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:10:25.203822   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:10:25.240795   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:10:25.273743   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:10:25.312678   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0704 00:10:25.339172   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:10:25.365805   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:10:25.392155   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0704 00:10:25.417662   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:10:25.445025   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:10:25.472697   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:10:25.505204   62327 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:10:25.536867   62327 ssh_runner.go:195] Run: openssl version
	I0704 00:10:25.543487   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:10:25.555550   62327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:25.560599   62327 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:25.560678   62327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:25.566757   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:10:25.578244   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:10:25.590271   62327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:10:25.595409   62327 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:10:25.595475   62327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:10:25.601755   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:10:25.614572   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:10:25.627445   62327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:10:25.632631   62327 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:10:25.632688   62327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:10:25.639047   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:10:25.651199   62327 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:10:25.656829   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:10:25.663869   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:10:25.670993   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:10:25.678309   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:10:25.685282   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:10:25.692383   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
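Each of the openssl x509 ... -checkend 86400 runs above asks whether a certificate expires within the next 86400 seconds (24 hours). The same check can be done with Go's crypto/x509; the path below is just one of the certs from the log and the helper name is made up:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// mirroring what "openssl x509 -noout -in <path> -checkend 86400" tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		if soon {
			fmt.Println("certificate expires within 24h; it should be regenerated")
		} else {
			fmt.Println("certificate is valid for at least another 24h")
		}
	}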
	I0704 00:10:25.699625   62327 kubeadm.go:391] StartCluster: {Name:embed-certs-687975 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:embed-certs-687975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:10:25.700176   62327 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:10:25.700240   62327 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:25.744248   62327 cri.go:89] found id: ""
	I0704 00:10:25.744323   62327 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:10:25.755623   62327 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:10:25.755643   62327 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:10:25.755648   62327 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:10:25.755697   62327 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:10:25.766631   62327 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:10:25.767627   62327 kubeconfig.go:125] found "embed-certs-687975" server: "https://192.168.39.213:8443"
	I0704 00:10:25.769625   62327 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:10:25.781667   62327 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.213
	I0704 00:10:25.781710   62327 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:10:25.781723   62327 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:10:25.781774   62327 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:25.829584   62327 cri.go:89] found id: ""
	I0704 00:10:25.829669   62327 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:10:25.847738   62327 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:10:25.859825   62327 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:10:25.859864   62327 kubeadm.go:156] found existing configuration files:
	
	I0704 00:10:25.859931   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:10:25.869666   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:10:25.869722   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:10:25.879997   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:10:25.889905   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:10:25.889982   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:10:25.900023   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:10:25.909669   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:10:25.909733   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:10:25.919933   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:10:25.929422   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:10:25.929499   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:10:25.939577   62327 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:10:25.949669   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:26.088494   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.367443   62327 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.278903285s)
	I0704 00:10:27.367492   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.626929   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.739721   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.860860   62327 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:10:27.860938   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:28.361670   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:30.344595   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:30.345134   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:30.345157   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:30.345089   63677 retry.go:31] will retry after 2.372913839s: waiting for machine to come up
	I0704 00:10:32.719441   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:32.719866   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:32.719910   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:32.719827   63677 retry.go:31] will retry after 3.651406896s: waiting for machine to come up
	I0704 00:10:28.861698   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:28.883024   62327 api_server.go:72] duration metric: took 1.02216952s to wait for apiserver process to appear ...
	I0704 00:10:28.883057   62327 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:10:28.883083   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:28.883625   62327 api_server.go:269] stopped: https://192.168.39.213:8443/healthz: Get "https://192.168.39.213:8443/healthz": dial tcp 192.168.39.213:8443: connect: connection refused
	I0704 00:10:29.383561   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:31.679543   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:10:31.679578   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:10:31.679594   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:31.754659   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:31.754696   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:31.883935   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:31.927087   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:31.927130   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:32.383560   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:32.389095   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:32.389129   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:32.883827   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:32.890357   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:32.890385   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:33.383944   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:33.388951   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0704 00:10:33.396092   62327 api_server.go:141] control plane version: v1.30.2
	I0704 00:10:33.396119   62327 api_server.go:131] duration metric: took 4.513054882s to wait for apiserver health ...
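The healthz wait above tolerates "connection refused", 403 from the anonymous user (RBAC not bootstrapped yet), and 500 while poststarthooks are still failing, and only stops once /healthz returns 200 "ok". A hedged sketch of such a polling loop; unlike this sketch, minikube's client verifies the apiserver against the cluster CA rather than skipping TLS verification:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the deadline passes. Connection errors, 403 and 500 are treated as
	// "not ready yet", matching the log above. InsecureSkipVerify is only for
	// this sketch.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			} else {
				fmt.Println("healthz not reachable yet:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.213:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}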
	I0704 00:10:33.396130   62327 cni.go:84] Creating CNI manager for ""
	I0704 00:10:33.396136   62327 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:10:33.398181   62327 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:10:33.399682   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:10:33.411938   62327 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0704 00:10:33.436710   62327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:10:33.447604   62327 system_pods.go:59] 8 kube-system pods found
	I0704 00:10:33.447639   62327 system_pods.go:61] "coredns-7db6d8ff4d-2bn7d" [e6d756a8-df4e-414b-b44c-32fb728c6feb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0704 00:10:33.447649   62327 system_pods.go:61] "etcd-embed-certs-687975" [72f47af0-57ca-4529-b6e4-92f543f8ada9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0704 00:10:33.447658   62327 system_pods.go:61] "kube-apiserver-embed-certs-687975" [e58beb1b-1984-4a9f-beb1-597cca55a0c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0704 00:10:33.447663   62327 system_pods.go:61] "kube-controller-manager-embed-certs-687975" [bdae6f32-b454-4a66-a3d1-a22c2a64073d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0704 00:10:33.447668   62327 system_pods.go:61] "kube-proxy-9phtm" [6b5a4c0e-632d-4c1c-bfa7-f53448618efb] Running
	I0704 00:10:33.447673   62327 system_pods.go:61] "kube-scheduler-embed-certs-687975" [72640f73-25f5-47ec-8da0-396fc31fa653] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0704 00:10:33.447678   62327 system_pods.go:61] "metrics-server-569cc877fc-jpmsg" [e2561edc-d580-461c-acae-218e6b7a2f67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:10:33.447682   62327 system_pods.go:61] "storage-provisioner" [1ac6edec-3e4e-42bd-8848-1388594611e1] Running
	I0704 00:10:33.447688   62327 system_pods.go:74] duration metric: took 10.954745ms to wait for pod list to return data ...
	I0704 00:10:33.447696   62327 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:10:33.452408   62327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:10:33.452448   62327 node_conditions.go:123] node cpu capacity is 2
	I0704 00:10:33.452460   62327 node_conditions.go:105] duration metric: took 4.757567ms to run NodePressure ...
	I0704 00:10:33.452476   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:33.724052   62327 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0704 00:10:33.732188   62327 kubeadm.go:733] kubelet initialised
	I0704 00:10:33.732211   62327 kubeadm.go:734] duration metric: took 8.128083ms waiting for restarted kubelet to initialise ...
	I0704 00:10:33.732220   62327 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:10:33.739344   62327 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.746483   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.746509   62327 pod_ready.go:81] duration metric: took 7.141056ms for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.746519   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.746526   62327 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.755457   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "etcd-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.755489   62327 pod_ready.go:81] duration metric: took 8.954479ms for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.755502   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "etcd-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.755512   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.762439   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.762476   62327 pod_ready.go:81] duration metric: took 6.95216ms for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.762489   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.762501   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.842246   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.842281   62327 pod_ready.go:81] duration metric: took 79.767249ms for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.842294   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.842303   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:34.240034   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-proxy-9phtm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.240061   62327 pod_ready.go:81] duration metric: took 397.745361ms for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:34.240070   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-proxy-9phtm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.240076   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:34.640781   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.640808   62327 pod_ready.go:81] duration metric: took 400.726608ms for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:34.640818   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.640823   62327 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:35.040614   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:35.040646   62327 pod_ready.go:81] duration metric: took 399.813017ms for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:35.040656   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:35.040662   62327 pod_ready.go:38] duration metric: took 1.308435069s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
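pod_ready.go treats a pod as "Ready" only when its PodReady condition is True, and skips the extra wait while the node itself is not Ready. A small client-go sketch of the per-pod check (the kubeconfig path and pod name are taken from the log purely for illustration):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady mirrors the pod_ready.go checks above: a pod counts as Ready
	// only when its PodReady condition is True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18998-9396/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-2bn7d", metav1.GetOptions{})
			if err == nil && podIsReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}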
	I0704 00:10:35.040678   62327 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:10:35.053971   62327 ops.go:34] apiserver oom_adj: -16
	I0704 00:10:35.053997   62327 kubeadm.go:591] duration metric: took 9.298343033s to restartPrimaryControlPlane
	I0704 00:10:35.054008   62327 kubeadm.go:393] duration metric: took 9.354393795s to StartCluster
	I0704 00:10:35.054028   62327 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:35.054114   62327 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:10:35.055656   62327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:35.056019   62327 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:10:35.056104   62327 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:10:35.056189   62327 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-687975"
	I0704 00:10:35.056217   62327 config.go:182] Loaded profile config "embed-certs-687975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:10:35.056226   62327 addons.go:69] Setting default-storageclass=true in profile "embed-certs-687975"
	I0704 00:10:35.056234   62327 addons.go:69] Setting metrics-server=true in profile "embed-certs-687975"
	I0704 00:10:35.056256   62327 addons.go:234] Setting addon metrics-server=true in "embed-certs-687975"
	I0704 00:10:35.056257   62327 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-687975"
	W0704 00:10:35.056268   62327 addons.go:243] addon metrics-server should already be in state true
	I0704 00:10:35.056302   62327 host.go:66] Checking if "embed-certs-687975" exists ...
	I0704 00:10:35.056229   62327 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-687975"
	W0704 00:10:35.056354   62327 addons.go:243] addon storage-provisioner should already be in state true
	I0704 00:10:35.056383   62327 host.go:66] Checking if "embed-certs-687975" exists ...
	I0704 00:10:35.056630   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.056653   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.056661   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.056689   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.056702   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.056729   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.058101   62327 out.go:177] * Verifying Kubernetes components...
	I0704 00:10:35.059927   62327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:35.072266   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43485
	I0704 00:10:35.072542   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0704 00:10:35.072699   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.072965   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.073191   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.073229   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.073455   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.073479   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.073608   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.073799   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.073838   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.074311   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.074344   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.076024   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44145
	I0704 00:10:35.076434   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.076866   62327 addons.go:234] Setting addon default-storageclass=true in "embed-certs-687975"
	W0704 00:10:35.076884   62327 addons.go:243] addon default-storageclass should already be in state true
	I0704 00:10:35.076905   62327 host.go:66] Checking if "embed-certs-687975" exists ...
	I0704 00:10:35.076965   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.076997   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.077241   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.077273   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.077376   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.077901   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.077951   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.091096   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I0704 00:10:35.091624   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.092231   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.092260   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.092643   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.092738   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0704 00:10:35.092820   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.093059   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.093555   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.093577   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.093913   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.094537   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:35.094743   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.094764   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.096976   62327 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:35.098487   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I0704 00:10:35.098597   62327 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:10:35.098614   62327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:10:35.098632   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:35.098888   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.099368   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.099386   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.099749   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.100200   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.102539   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:35.103028   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.103608   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:35.103637   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.103791   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:35.104008   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:35.104177   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:35.104316   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:35.104776   62327 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0704 00:10:35.106239   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0704 00:10:35.106260   62327 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0704 00:10:35.106313   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:35.109978   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.110458   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:35.110491   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.110684   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:35.110925   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:35.111025   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45051
	I0704 00:10:35.111091   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:35.111227   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:35.111488   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.111977   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.112005   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.112295   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.112482   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.113980   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:35.114185   62327 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:10:35.114203   62327 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:10:35.114222   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:35.117197   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.117777   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:35.117823   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.118056   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:35.118258   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:35.118426   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:35.118562   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:35.242007   62327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:10:35.267240   62327 node_ready.go:35] waiting up to 6m0s for node "embed-certs-687975" to be "Ready" ...
	I0704 00:10:35.326233   62327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:10:35.329804   62327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:10:35.431863   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0704 00:10:35.431908   62327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0704 00:10:35.490138   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0704 00:10:35.490165   62327 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0704 00:10:35.547996   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:10:35.548021   62327 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0704 00:10:35.578762   62327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:10:36.321372   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321411   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.321432   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321448   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.321794   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.321808   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.321812   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.321823   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.321825   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.321834   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321833   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.321841   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321854   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.321842   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.322111   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.322142   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.322153   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.322155   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.322182   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.322191   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.329094   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.329117   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.329531   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.329608   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.329625   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.424191   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.424216   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.424645   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.424676   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.424692   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.424707   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.424719   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.424987   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.425000   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.425012   62327 addons.go:475] Verifying addon metrics-server=true in "embed-certs-687975"
	I0704 00:10:36.427165   62327 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0704 00:10:37.761464   62905 start.go:364] duration metric: took 3m35.181652384s to acquireMachinesLock for "default-k8s-diff-port-995404"
	I0704 00:10:37.761548   62905 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:10:37.761575   62905 fix.go:54] fixHost starting: 
	I0704 00:10:37.761919   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:37.761952   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:37.779708   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0704 00:10:37.780347   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:37.780870   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:10:37.780895   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:37.781249   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:37.781513   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:37.781688   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:10:37.783447   62905 fix.go:112] recreateIfNeeded on default-k8s-diff-port-995404: state=Stopped err=<nil>
	I0704 00:10:37.783495   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	W0704 00:10:37.783674   62905 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:10:37.785628   62905 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-995404" ...
	I0704 00:10:36.373099   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.373583   62670 main.go:141] libmachine: (old-k8s-version-979033) Found IP for machine: 192.168.72.59
	I0704 00:10:36.373615   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has current primary IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.373628   62670 main.go:141] libmachine: (old-k8s-version-979033) Reserving static IP address...
	I0704 00:10:36.374030   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "old-k8s-version-979033", mac: "52:54:00:af:98:c8", ip: "192.168.72.59"} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.374068   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | skip adding static IP to network mk-old-k8s-version-979033 - found existing host DHCP lease matching {name: "old-k8s-version-979033", mac: "52:54:00:af:98:c8", ip: "192.168.72.59"}
	I0704 00:10:36.374082   62670 main.go:141] libmachine: (old-k8s-version-979033) Reserved static IP address: 192.168.72.59
	I0704 00:10:36.374113   62670 main.go:141] libmachine: (old-k8s-version-979033) Waiting for SSH to be available...
	I0704 00:10:36.374130   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Getting to WaitForSSH function...
	I0704 00:10:36.376363   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.376711   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.376747   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.376945   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using SSH client type: external
	I0704 00:10:36.376975   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa (-rw-------)
	I0704 00:10:36.377011   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:10:36.377024   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | About to run SSH command:
	I0704 00:10:36.377062   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | exit 0
	I0704 00:10:36.504300   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | SSH cmd err, output: <nil>: 
	I0704 00:10:36.504681   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetConfigRaw
	I0704 00:10:36.505301   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:36.507826   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.508270   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.508297   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.508605   62670 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/config.json ...
	I0704 00:10:36.508844   62670 machine.go:94] provisionDockerMachine start ...
	I0704 00:10:36.508865   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:36.509148   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.511475   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.511792   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.511815   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.512017   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.512205   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.512360   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.512502   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.512667   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.512836   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.512846   62670 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:10:36.616643   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:10:36.616673   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.616962   62670 buildroot.go:166] provisioning hostname "old-k8s-version-979033"
	I0704 00:10:36.616992   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.617185   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.620028   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.620368   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.620387   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.620727   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.620923   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.621106   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.621240   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.621435   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.621601   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.621613   62670 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-979033 && echo "old-k8s-version-979033" | sudo tee /etc/hostname
	I0704 00:10:36.739589   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-979033
	
	I0704 00:10:36.739611   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.742386   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.742840   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.742867   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.743119   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.743348   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.743578   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.743745   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.743925   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.744142   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.744169   62670 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-979033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-979033/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-979033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:10:36.861561   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:10:36.861592   62670 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:10:36.861621   62670 buildroot.go:174] setting up certificates
	I0704 00:10:36.861632   62670 provision.go:84] configureAuth start
	I0704 00:10:36.861644   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.861928   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:36.864490   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.864975   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.865039   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.865137   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.867752   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.868268   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.868302   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.868483   62670 provision.go:143] copyHostCerts
	I0704 00:10:36.868547   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:10:36.868560   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:10:36.868613   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:10:36.868747   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:10:36.868756   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:10:36.868783   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:10:36.868840   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:10:36.868846   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:10:36.868863   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:10:36.868913   62670 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-979033 san=[127.0.0.1 192.168.72.59 localhost minikube old-k8s-version-979033]
	I0704 00:10:37.072741   62670 provision.go:177] copyRemoteCerts
	I0704 00:10:37.072795   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:10:37.072821   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.075592   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.075937   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.075968   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.076159   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.076362   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.076541   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.076671   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.162730   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:10:37.194232   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0704 00:10:37.220644   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0704 00:10:37.246298   62670 provision.go:87] duration metric: took 384.653259ms to configureAuth
	I0704 00:10:37.246327   62670 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:10:37.246529   62670 config.go:182] Loaded profile config "old-k8s-version-979033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0704 00:10:37.246594   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.249101   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.249491   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.249523   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.249774   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.249960   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.250140   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.250350   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.250591   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:37.250831   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:37.250856   62670 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:10:37.522551   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:10:37.522602   62670 machine.go:97] duration metric: took 1.013718943s to provisionDockerMachine
	I0704 00:10:37.522616   62670 start.go:293] postStartSetup for "old-k8s-version-979033" (driver="kvm2")
	I0704 00:10:37.522626   62670 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:10:37.522642   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.522965   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:10:37.522992   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.525421   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.525718   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.525745   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.525988   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.526250   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.526428   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.526668   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.607305   62670 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:10:37.612104   62670 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:10:37.612128   62670 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:10:37.612222   62670 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:10:37.612326   62670 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:10:37.612436   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:10:37.623597   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:37.650275   62670 start.go:296] duration metric: took 127.644599ms for postStartSetup
	I0704 00:10:37.650314   62670 fix.go:56] duration metric: took 18.50923577s for fixHost
	I0704 00:10:37.650333   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.652926   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.653270   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.653298   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.653433   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.653650   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.653836   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.653975   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.654124   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:37.654344   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:37.654356   62670 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:10:37.761309   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051837.729680185
	
	I0704 00:10:37.761333   62670 fix.go:216] guest clock: 1720051837.729680185
	I0704 00:10:37.761342   62670 fix.go:229] Guest: 2024-07-04 00:10:37.729680185 +0000 UTC Remote: 2024-07-04 00:10:37.650317632 +0000 UTC m=+244.428517044 (delta=79.362553ms)
	I0704 00:10:37.761363   62670 fix.go:200] guest clock delta is within tolerance: 79.362553ms
	I0704 00:10:37.761369   62670 start.go:83] releasing machines lock for "old-k8s-version-979033", held for 18.620323739s
	I0704 00:10:37.761421   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.761677   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:37.764522   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.764994   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.765019   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.765178   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.765760   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.765951   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.766036   62670 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:10:37.766085   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.766218   62670 ssh_runner.go:195] Run: cat /version.json
	I0704 00:10:37.766244   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.769092   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769468   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769854   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.769900   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769927   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.769944   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.770066   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.770286   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.770329   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.770443   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.770531   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.770587   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.770720   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.770832   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.873138   62670 ssh_runner.go:195] Run: systemctl --version
	I0704 00:10:37.879804   62670 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:10:38.028009   62670 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:10:38.034962   62670 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:10:38.035030   62670 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:10:38.057475   62670 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:10:38.057511   62670 start.go:494] detecting cgroup driver to use...
	I0704 00:10:38.057579   62670 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:10:38.074199   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:10:38.092880   62670 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:10:38.092932   62670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:10:38.106896   62670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:10:38.120887   62670 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:10:38.250139   62670 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:10:36.428467   62327 addons.go:510] duration metric: took 1.372366453s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0704 00:10:37.270816   62327 node_ready.go:53] node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:38.405228   62670 docker.go:233] disabling docker service ...
	I0704 00:10:38.405288   62670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:10:38.421706   62670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:10:38.438033   62670 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:10:38.586777   62670 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:10:38.721090   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:10:38.736951   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:10:38.757708   62670 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0704 00:10:38.757782   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.769723   62670 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:10:38.769796   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.783408   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.796103   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.809130   62670 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:10:38.822325   62670 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:10:38.837968   62670 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:10:38.838038   62670 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:10:38.854343   62670 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:10:38.866475   62670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:39.012506   62670 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:10:39.177203   62670 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:10:39.177289   62670 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:10:39.182557   62670 start.go:562] Will wait 60s for crictl version
	I0704 00:10:39.182643   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:39.187153   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:10:39.228774   62670 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:10:39.228851   62670 ssh_runner.go:195] Run: crio --version
	I0704 00:10:39.261929   62670 ssh_runner.go:195] Run: crio --version
	I0704 00:10:39.295133   62670 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0704 00:10:37.787100   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Start
	I0704 00:10:37.787281   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Ensuring networks are active...
	I0704 00:10:37.788053   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Ensuring network default is active
	I0704 00:10:37.788456   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Ensuring network mk-default-k8s-diff-port-995404 is active
	I0704 00:10:37.788965   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Getting domain xml...
	I0704 00:10:37.789842   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Creating domain...
	I0704 00:10:39.119468   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting to get IP...
	I0704 00:10:39.120490   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.121038   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.121123   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:39.121028   63853 retry.go:31] will retry after 205.838778ms: waiting for machine to come up
	I0704 00:10:39.328771   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.329372   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.329402   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:39.329310   63853 retry.go:31] will retry after 383.540497ms: waiting for machine to come up
	I0704 00:10:39.714729   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.715299   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.715333   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:39.715239   63853 retry.go:31] will retry after 349.888862ms: waiting for machine to come up
	I0704 00:10:40.067018   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.067629   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.067658   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:40.067518   63853 retry.go:31] will retry after 560.174181ms: waiting for machine to come up
	I0704 00:10:40.629108   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.629664   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.629700   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:40.629568   63853 retry.go:31] will retry after 655.876993ms: waiting for machine to come up
	I0704 00:10:41.287664   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:41.288213   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:41.288241   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:41.288163   63853 retry.go:31] will retry after 935.211949ms: waiting for machine to come up
	I0704 00:10:42.225062   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:42.225501   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:42.225530   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:42.225448   63853 retry.go:31] will retry after 1.176205334s: waiting for machine to come up
	I0704 00:10:39.296618   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:39.299265   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:39.299620   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:39.299648   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:39.299857   62670 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0704 00:10:39.304490   62670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:39.318619   62670 kubeadm.go:877] updating cluster {Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:10:39.318749   62670 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0704 00:10:39.318796   62670 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:39.372343   62670 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0704 00:10:39.372406   62670 ssh_runner.go:195] Run: which lz4
	I0704 00:10:39.376979   62670 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0704 00:10:39.382096   62670 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:10:39.382153   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0704 00:10:41.321459   62670 crio.go:462] duration metric: took 1.944522271s to copy over tarball
	I0704 00:10:41.321541   62670 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:10:39.272051   62327 node_ready.go:53] node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:41.776436   62327 node_ready.go:53] node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:42.272096   62327 node_ready.go:49] node "embed-certs-687975" has status "Ready":"True"
	I0704 00:10:42.272126   62327 node_ready.go:38] duration metric: took 7.004853642s for node "embed-certs-687975" to be "Ready" ...
	I0704 00:10:42.272139   62327 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:10:42.278133   62327 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.284704   62327 pod_ready.go:92] pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:42.284730   62327 pod_ready.go:81] duration metric: took 6.568077ms for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.284740   62327 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.292234   62327 pod_ready.go:92] pod "etcd-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:42.292263   62327 pod_ready.go:81] duration metric: took 7.515519ms for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.292276   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:43.403633   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:43.404251   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:43.404302   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:43.404180   63853 retry.go:31] will retry after 1.24046978s: waiting for machine to come up
	I0704 00:10:44.646709   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:44.647208   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:44.647234   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:44.647165   63853 retry.go:31] will retry after 1.631352494s: waiting for machine to come up
	I0704 00:10:46.280048   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:46.280543   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:46.280574   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:46.280492   63853 retry.go:31] will retry after 1.855805317s: waiting for machine to come up
	I0704 00:10:44.545333   62670 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.223758075s)
	I0704 00:10:44.545366   62670 crio.go:469] duration metric: took 3.223876515s to extract the tarball
	I0704 00:10:44.545404   62670 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:10:44.589369   62670 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:44.625017   62670 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0704 00:10:44.625055   62670 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0704 00:10:44.625143   62670 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:44.625161   62670 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.625191   62670 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:44.625372   62670 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.625393   62670 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.625146   62670 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.625223   62670 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.625700   62670 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0704 00:10:44.627479   62670 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.627544   62670 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.627580   62670 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:44.627586   62670 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0704 00:10:44.627589   62670 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.627580   62670 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.627641   62670 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:44.627665   62670 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.773014   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.821672   62670 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0704 00:10:44.821726   62670 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.821788   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.826460   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.841857   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.870213   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0704 00:10:44.895356   62670 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0704 00:10:44.895414   62670 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.895466   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.897160   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0704 00:10:44.901356   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.964305   62670 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0704 00:10:44.964356   62670 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0704 00:10:44.964404   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.964395   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0704 00:10:44.969048   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0704 00:10:44.982913   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.985558   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.990064   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.993167   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.015558   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0704 00:10:45.092189   62670 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0704 00:10:45.092237   62670 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:45.092309   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.104690   62670 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0704 00:10:45.104733   62670 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:45.104795   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.130208   62670 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0704 00:10:45.130254   62670 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.130271   62670 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0704 00:10:45.130295   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.130337   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:45.130297   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:45.130298   62670 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:45.130442   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.181491   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0704 00:10:45.181583   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.181598   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0704 00:10:45.181666   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:45.234459   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0704 00:10:45.234563   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0704 00:10:45.533133   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:45.680954   62670 cache_images.go:92] duration metric: took 1.055880702s to LoadCachedImages
	W0704 00:10:45.681039   62670 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0704 00:10:45.681053   62670 kubeadm.go:928] updating node { 192.168.72.59 8443 v1.20.0 crio true true} ...
	I0704 00:10:45.681176   62670 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-979033 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:10:45.681268   62670 ssh_runner.go:195] Run: crio config
	I0704 00:10:45.734964   62670 cni.go:84] Creating CNI manager for ""
	I0704 00:10:45.734992   62670 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:10:45.735009   62670 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:10:45.735034   62670 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.59 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-979033 NodeName:old-k8s-version-979033 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0704 00:10:45.735206   62670 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-979033"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.59"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:10:45.735287   62670 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0704 00:10:45.747614   62670 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:10:45.747700   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:10:45.759063   62670 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0704 00:10:45.778439   62670 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:10:45.798877   62670 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0704 00:10:45.820513   62670 ssh_runner.go:195] Run: grep 192.168.72.59	control-plane.minikube.internal$ /etc/hosts
	I0704 00:10:45.825346   62670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.59	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:45.839720   62670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:45.957373   62670 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:10:45.975621   62670 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033 for IP: 192.168.72.59
	I0704 00:10:45.975645   62670 certs.go:194] generating shared ca certs ...
	I0704 00:10:45.975671   62670 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:45.975845   62670 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:10:45.975940   62670 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:10:45.975956   62670 certs.go:256] generating profile certs ...
	I0704 00:10:45.976086   62670 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.key
	I0704 00:10:45.976184   62670 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key.03500654
	I0704 00:10:45.976236   62670 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key
	I0704 00:10:45.976376   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:10:45.976416   62670 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:10:45.976430   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:10:45.976468   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:10:45.976506   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:10:45.976541   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:10:45.976601   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:45.977480   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:10:46.016391   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:10:46.062987   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:10:46.103769   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:10:46.143109   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0704 00:10:46.193832   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:10:46.223781   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:10:46.263822   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0704 00:10:46.298657   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:10:46.325454   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:10:46.351804   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:10:46.379279   62670 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:10:46.397706   62670 ssh_runner.go:195] Run: openssl version
	I0704 00:10:46.404638   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:10:46.416778   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.422402   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.422475   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.428803   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:10:46.441082   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:10:46.453211   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.458313   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.458383   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.464706   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:10:46.476888   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:10:46.489083   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.494780   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.494856   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.501321   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:10:46.513595   62670 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:10:46.518722   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:10:46.525758   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:10:46.532590   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:10:46.540129   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:10:46.547113   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:10:46.553840   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0704 00:10:46.560502   62670 kubeadm.go:391] StartCluster: {Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:10:46.560590   62670 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:10:46.560656   62670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:46.605334   62670 cri.go:89] found id: ""
	I0704 00:10:46.605411   62670 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:10:46.619333   62670 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:10:46.619356   62670 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:10:46.619362   62670 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:10:46.619407   62670 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:10:46.631203   62670 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:10:46.632519   62670 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-979033" does not appear in /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:10:46.633417   62670 kubeconfig.go:62] /home/jenkins/minikube-integration/18998-9396/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-979033" cluster setting kubeconfig missing "old-k8s-version-979033" context setting]
	I0704 00:10:46.634783   62670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:46.637143   62670 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:10:46.649250   62670 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.59
	I0704 00:10:46.649285   62670 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:10:46.649297   62670 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:10:46.649351   62670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:46.691240   62670 cri.go:89] found id: ""
	I0704 00:10:46.691317   62670 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:10:46.710687   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:10:46.721650   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:10:46.721675   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:10:46.721728   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:10:46.731444   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:10:46.731517   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:10:46.741556   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:10:46.751544   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:10:46.751600   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:10:46.764187   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:10:46.775160   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:10:46.775224   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:10:46.785686   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:10:46.795475   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:10:46.795545   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:10:46.806960   62670 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:10:46.818355   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:46.984379   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:47.639953   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:47.883263   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:48.001200   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:48.116034   62670 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:10:48.116121   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:45.284973   62327 pod_ready.go:102] pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:46.800145   62327 pod_ready.go:92] pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.800170   62327 pod_ready.go:81] duration metric: took 4.507886037s for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.800179   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.805577   62327 pod_ready.go:92] pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.805599   62327 pod_ready.go:81] duration metric: took 5.413826ms for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.805611   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.811066   62327 pod_ready.go:92] pod "kube-proxy-9phtm" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.811085   62327 pod_ready.go:81] duration metric: took 5.469666ms for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.811094   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.815670   62327 pod_ready.go:92] pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.815690   62327 pod_ready.go:81] duration metric: took 4.589606ms for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.815700   62327 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:48.822325   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:48.137949   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:48.138359   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:48.138387   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:48.138307   63853 retry.go:31] will retry after 2.765241886s: waiting for machine to come up
	I0704 00:10:50.905039   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:50.905688   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:50.905724   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:50.905624   63853 retry.go:31] will retry after 3.145956682s: waiting for machine to come up
	I0704 00:10:48.617078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:49.116898   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:49.617127   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.116442   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.617078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:51.117096   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:51.617176   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:52.116333   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:52.616675   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:53.116408   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.822990   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:52.823438   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:54.053147   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:54.053593   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:54.053630   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:54.053544   63853 retry.go:31] will retry after 4.352124904s: waiting for machine to come up
	I0704 00:10:53.616873   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:54.116661   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:54.616248   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:55.116316   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:55.616460   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:56.116311   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:56.616502   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:57.116856   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:57.616948   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:58.117055   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:54.829173   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:57.322196   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:59.628966   62043 start.go:364] duration metric: took 56.236390336s to acquireMachinesLock for "no-preload-317739"
	I0704 00:10:59.629020   62043 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:10:59.629029   62043 fix.go:54] fixHost starting: 
	I0704 00:10:59.629441   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:59.629483   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:59.649272   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0704 00:10:59.649745   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:59.650216   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:10:59.650245   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:59.650615   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:59.650807   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:10:59.650944   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:10:59.652724   62043 fix.go:112] recreateIfNeeded on no-preload-317739: state=Stopped err=<nil>
	I0704 00:10:59.652750   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	W0704 00:10:59.652901   62043 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:10:59.655010   62043 out.go:177] * Restarting existing kvm2 VM for "no-preload-317739" ...
	I0704 00:10:59.656335   62043 main.go:141] libmachine: (no-preload-317739) Calling .Start
	I0704 00:10:59.656519   62043 main.go:141] libmachine: (no-preload-317739) Ensuring networks are active...
	I0704 00:10:59.657343   62043 main.go:141] libmachine: (no-preload-317739) Ensuring network default is active
	I0704 00:10:59.657714   62043 main.go:141] libmachine: (no-preload-317739) Ensuring network mk-no-preload-317739 is active
	I0704 00:10:59.658209   62043 main.go:141] libmachine: (no-preload-317739) Getting domain xml...
	I0704 00:10:59.658812   62043 main.go:141] libmachine: (no-preload-317739) Creating domain...
	I0704 00:10:58.407312   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.407865   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Found IP for machine: 192.168.50.164
	I0704 00:10:58.407924   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has current primary IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.407935   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Reserving static IP address...
	I0704 00:10:58.408356   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Reserved static IP address: 192.168.50.164
	I0704 00:10:58.408378   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for SSH to be available...
	I0704 00:10:58.408396   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-995404", mac: "52:54:00:ea:f6:7c", ip: "192.168.50.164"} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.408414   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | skip adding static IP to network mk-default-k8s-diff-port-995404 - found existing host DHCP lease matching {name: "default-k8s-diff-port-995404", mac: "52:54:00:ea:f6:7c", ip: "192.168.50.164"}
	I0704 00:10:58.408423   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Getting to WaitForSSH function...
	I0704 00:10:58.410737   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.411074   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.411103   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.411308   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Using SSH client type: external
	I0704 00:10:58.411344   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa (-rw-------)
	I0704 00:10:58.411384   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:10:58.411425   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | About to run SSH command:
	I0704 00:10:58.411445   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | exit 0
	I0704 00:10:58.532351   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | SSH cmd err, output: <nil>: 
	I0704 00:10:58.532719   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetConfigRaw
	I0704 00:10:58.533366   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:10:58.536176   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.536613   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.536640   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.536886   62905 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/config.json ...
	I0704 00:10:58.537129   62905 machine.go:94] provisionDockerMachine start ...
	I0704 00:10:58.537149   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:58.537389   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.539581   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.539946   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.539976   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.540099   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.540327   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.540549   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.540785   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.540976   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:58.541155   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:58.541166   62905 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:10:58.644667   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:10:58.644716   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetMachineName
	I0704 00:10:58.644986   62905 buildroot.go:166] provisioning hostname "default-k8s-diff-port-995404"
	I0704 00:10:58.645012   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetMachineName
	I0704 00:10:58.645256   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.648091   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.648519   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.648549   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.648691   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.648975   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.649174   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.649393   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.649608   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:58.649831   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:58.649857   62905 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-995404 && echo "default-k8s-diff-port-995404" | sudo tee /etc/hostname
	I0704 00:10:58.765130   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-995404
	
	I0704 00:10:58.765164   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.768571   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.768933   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.768961   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.769127   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.769343   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.769516   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.769675   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.769843   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:58.770014   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:58.770030   62905 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-995404' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-995404/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-995404' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:10:58.877852   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
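
The hostname and /etc/hosts commands above are the whole of host provisioning at this stage: a handful of shell one-liners pushed over SSH with the machine's private key. A minimal sketch of that pattern with golang.org/x/crypto/ssh follows; the address, user, key path and hostname are placeholders, not minikube's own helper.

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // runSSH runs one shell command on the guest and returns its combined output.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        // Placeholder values; the log above uses the profile's id_rsa and the VM's DHCP address.
        out, err := runSSH("192.168.50.164:22", "docker",
            "/path/to/machines/<profile>/id_rsa",
            `sudo hostname demo-node && echo "demo-node" | sudo tee /etc/hostname`)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(out)
    }
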
	I0704 00:10:58.877885   62905 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:10:58.877942   62905 buildroot.go:174] setting up certificates
	I0704 00:10:58.877955   62905 provision.go:84] configureAuth start
	I0704 00:10:58.877968   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetMachineName
	I0704 00:10:58.878318   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:10:58.880988   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.881321   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.881349   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.881516   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.883893   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.884213   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.884237   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.884398   62905 provision.go:143] copyHostCerts
	I0704 00:10:58.884459   62905 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:10:58.884468   62905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:10:58.884523   62905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:10:58.884628   62905 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:10:58.884639   62905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:10:58.884672   62905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:10:58.884747   62905 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:10:58.884757   62905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:10:58.884782   62905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:10:58.884838   62905 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-995404 san=[127.0.0.1 192.168.50.164 default-k8s-diff-port-995404 localhost minikube]
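
The server certificate generated above carries SANs for 127.0.0.1, 192.168.50.164 and the default-k8s-diff-port-995404/localhost/minikube names, signed by the profile CA. A minimal sketch of issuing such a certificate with Go's crypto/x509 (a throwaway CA stands in for the profile CA; this is not minikube's actual code):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA so the sketch runs end to end; the real flow reuses the
        // profile CA under .minikube/certs (ca.pem / ca-key.pem).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "demoCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server key and certificate whose SANs cover the IPs and names from the log above.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-995404"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"default-k8s-diff-port-995404", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.164")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        out, _ := os.Create("server.pem")
        defer out.Close()
        pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
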
	I0704 00:10:58.960337   62905 provision.go:177] copyRemoteCerts
	I0704 00:10:58.960408   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:10:58.960442   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.962980   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.963389   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.963416   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.963585   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.963754   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.963905   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.964040   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.042670   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0704 00:10:59.073047   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:10:59.100579   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0704 00:10:59.127978   62905 provision.go:87] duration metric: took 250.007645ms to configureAuth
	I0704 00:10:59.128006   62905 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:10:59.128261   62905 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:10:59.128363   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.131470   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.131852   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.131906   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.132130   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.132405   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.132598   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.132771   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.132969   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:59.133176   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:59.133197   62905 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:10:59.393756   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:10:59.393791   62905 machine.go:97] duration metric: took 856.647704ms to provisionDockerMachine
	I0704 00:10:59.393808   62905 start.go:293] postStartSetup for "default-k8s-diff-port-995404" (driver="kvm2")
	I0704 00:10:59.393822   62905 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:10:59.393845   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.394143   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:10:59.394170   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.396996   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.397335   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.397366   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.397556   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.397768   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.397950   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.398094   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.479476   62905 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:10:59.484191   62905 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:10:59.484220   62905 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:10:59.484291   62905 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:10:59.484395   62905 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:10:59.484540   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:10:59.495504   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:59.520952   62905 start.go:296] duration metric: took 127.128284ms for postStartSetup
	I0704 00:10:59.521006   62905 fix.go:56] duration metric: took 21.75944045s for fixHost
	I0704 00:10:59.521029   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.523896   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.524210   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.524243   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.524360   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.524586   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.524777   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.524975   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.525166   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:59.525322   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:59.525339   62905 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:10:59.628816   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051859.612598562
	
	I0704 00:10:59.628848   62905 fix.go:216] guest clock: 1720051859.612598562
	I0704 00:10:59.628857   62905 fix.go:229] Guest: 2024-07-04 00:10:59.612598562 +0000 UTC Remote: 2024-07-04 00:10:59.52101038 +0000 UTC m=+237.085876440 (delta=91.588182ms)
	I0704 00:10:59.628881   62905 fix.go:200] guest clock delta is within tolerance: 91.588182ms
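
The clock check above boils down to an absolute guest-vs-host delta compared against a tolerance window; a tiny sketch of that comparison (the tolerance value here is illustrative, not minikube's):

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the guest clock is within tol of the host clock.
    func withinTolerance(guest, host time.Time, tol time.Duration) bool {
        d := guest.Sub(host)
        if d < 0 {
            d = -d
        }
        return d <= tol
    }

    func main() {
        host := time.Now()
        guest := host.Add(91 * time.Millisecond) // roughly the delta observed in the log above
        fmt.Println(withinTolerance(guest, host, 2*time.Second))
    }
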
	I0704 00:10:59.628887   62905 start.go:83] releasing machines lock for "default-k8s-diff-port-995404", held for 21.867375782s
	I0704 00:10:59.628917   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.629243   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:10:59.632256   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.632628   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.632656   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.632816   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.633343   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.633561   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.633655   62905 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:10:59.633693   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.633774   62905 ssh_runner.go:195] Run: cat /version.json
	I0704 00:10:59.633792   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.636540   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.636660   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.636943   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.636972   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.637079   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.637097   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.637107   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.637292   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.637295   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.637491   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.637498   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.637650   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.637654   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.637779   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.713988   62905 ssh_runner.go:195] Run: systemctl --version
	I0704 00:10:59.743264   62905 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:10:59.895553   62905 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:10:59.902538   62905 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:10:59.902604   62905 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:10:59.919858   62905 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:10:59.919899   62905 start.go:494] detecting cgroup driver to use...
	I0704 00:10:59.919964   62905 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:10:59.940739   62905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:10:59.961053   62905 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:10:59.961114   62905 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:10:59.980549   62905 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:11:00.002843   62905 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:11:00.133319   62905 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:11:00.307416   62905 docker.go:233] disabling docker service ...
	I0704 00:11:00.307484   62905 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:11:00.325714   62905 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:11:00.342008   62905 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:11:00.469418   62905 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:11:00.594775   62905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:11:00.612900   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:11:00.636854   62905 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:11:00.636912   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.650940   62905 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:11:00.651007   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.664849   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.678200   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.691929   62905 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:11:00.708729   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.721874   62905 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.747189   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
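
Taken together, the edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys. This is a hedged reconstruction of where those keys live in CRI-O's configuration schema, not the file verbatim:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
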
	I0704 00:11:00.766255   62905 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:11:00.778139   62905 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:11:00.778208   62905 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:11:00.794170   62905 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
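
The sysctl probe is allowed to fail because /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded; the fallback loads the module and then forces IPv4 forwarding on. A standalone sketch of that check (assumes root on a Linux guest):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // bridge-nf-call-iptables appears only after br_netfilter is loaded,
        // which is why the sysctl probe above may return status 255.
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s\n", err, out)
                os.Exit(1)
            }
        }
        // Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
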
	I0704 00:11:00.805772   62905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:00.945526   62905 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:11:01.095767   62905 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:11:01.095849   62905 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:11:01.101337   62905 start.go:562] Will wait 60s for crictl version
	I0704 00:11:01.101410   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:11:01.105792   62905 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:11:01.149911   62905 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:11:01.149983   62905 ssh_runner.go:195] Run: crio --version
	I0704 00:11:01.183494   62905 ssh_runner.go:195] Run: crio --version
	I0704 00:11:01.221773   62905 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:11:01.223142   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:11:01.226142   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:01.226595   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:01.226626   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:01.227009   62905 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0704 00:11:01.231704   62905 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:01.246258   62905 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:11:01.246373   62905 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:11:01.246414   62905 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:11:01.288814   62905 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:11:01.288885   62905 ssh_runner.go:195] Run: which lz4
	I0704 00:11:01.293591   62905 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0704 00:11:01.298567   62905 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:11:01.298606   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0704 00:10:58.617090   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.116577   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.617087   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:00.117110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:00.617014   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:01.117093   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:01.616271   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:02.116809   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:02.617098   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:03.117166   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.323461   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:01.324078   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:03.824174   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:00.942384   62043 main.go:141] libmachine: (no-preload-317739) Waiting to get IP...
	I0704 00:11:00.943186   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:00.943675   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:00.943756   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:00.943653   64017 retry.go:31] will retry after 249.292607ms: waiting for machine to come up
	I0704 00:11:01.194377   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:01.194895   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:01.194954   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:01.194870   64017 retry.go:31] will retry after 262.613081ms: waiting for machine to come up
	I0704 00:11:01.459428   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:01.460003   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:01.460038   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:01.459944   64017 retry.go:31] will retry after 478.141622ms: waiting for machine to come up
	I0704 00:11:01.939357   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:01.939939   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:01.939974   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:01.939898   64017 retry.go:31] will retry after 536.153389ms: waiting for machine to come up
	I0704 00:11:02.477947   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:02.478481   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:02.478506   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:02.478420   64017 retry.go:31] will retry after 673.23866ms: waiting for machine to come up
	I0704 00:11:03.153142   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:03.153668   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:03.153700   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:03.153615   64017 retry.go:31] will retry after 826.785177ms: waiting for machine to come up
	I0704 00:11:03.981781   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:03.982279   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:03.982313   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:03.982215   64017 retry.go:31] will retry after 834.05017ms: waiting for machine to come up
	I0704 00:11:04.817689   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:04.818294   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:04.818323   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:04.818249   64017 retry.go:31] will retry after 1.153846982s: waiting for machine to come up
	I0704 00:11:02.979209   62905 crio.go:462] duration metric: took 1.685660087s to copy over tarball
	I0704 00:11:02.979307   62905 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:11:05.406788   62905 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.427439702s)
	I0704 00:11:05.406816   62905 crio.go:469] duration metric: took 2.427578287s to extract the tarball
	I0704 00:11:05.406823   62905 ssh_runner.go:146] rm: /preloaded.tar.lz4
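
The preload path above avoids pulling images one by one: crictl reported the expected images missing, so the prebuilt tarball was copied over and unpacked directly into /var, populating CRI-O's image store before kubeadm runs. A sketch of the extraction step driven from Go (requires root and the lz4 binary on the guest):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Unpack the preloaded image tarball into /var, preserving the
        // security.capability xattrs the binaries were shipped with.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract preload: %v: %s", err, out)
        }
    }
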
	I0704 00:11:05.448710   62905 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:11:05.498336   62905 crio.go:514] all images are preloaded for cri-o runtime.
	I0704 00:11:05.498367   62905 cache_images.go:84] Images are preloaded, skipping loading
	I0704 00:11:05.498375   62905 kubeadm.go:928] updating node { 192.168.50.164 8444 v1.30.2 crio true true} ...
	I0704 00:11:05.498487   62905 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-995404 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:11:05.498549   62905 ssh_runner.go:195] Run: crio config
	I0704 00:11:05.552676   62905 cni.go:84] Creating CNI manager for ""
	I0704 00:11:05.552706   62905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:05.552717   62905 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:11:05.552738   62905 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.164 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-995404 NodeName:default-k8s-diff-port-995404 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:11:05.552895   62905 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.164
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-995404"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
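
The kubeadm/kubelet/kube-proxy config above is rendered from the options struct logged a few lines earlier. A minimal sketch of that kind of rendering with text/template; the struct fields and the (truncated) template body are placeholders, not minikube's own template:

    package main

    import (
        "os"
        "text/template"
    )

    type nodeCfg struct {
        Name    string
        IP      string
        APIPort int
    }

    // Truncated template body covering only the InitConfiguration stanza above.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.IP}}
      bindPort: {{.APIPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.Name}}"
      kubeletExtraArgs:
        node-ip: {{.IP}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(initCfg))
        cfg := nodeCfg{Name: "default-k8s-diff-port-995404", IP: "192.168.50.164", APIPort: 8444}
        if err := t.Execute(os.Stdout, cfg); err != nil {
            panic(err)
        }
    }
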
	I0704 00:11:05.552966   62905 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:11:05.564067   62905 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:11:05.564149   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:11:05.574991   62905 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0704 00:11:05.597644   62905 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:11:05.619456   62905 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0704 00:11:05.640655   62905 ssh_runner.go:195] Run: grep 192.168.50.164	control-plane.minikube.internal$ /etc/hosts
	I0704 00:11:05.644975   62905 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:05.659570   62905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:05.800862   62905 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:11:05.821044   62905 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404 for IP: 192.168.50.164
	I0704 00:11:05.821068   62905 certs.go:194] generating shared ca certs ...
	I0704 00:11:05.821087   62905 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:05.821258   62905 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:11:05.821312   62905 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:11:05.821324   62905 certs.go:256] generating profile certs ...
	I0704 00:11:05.821424   62905 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/client.key
	I0704 00:11:05.821496   62905 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/apiserver.key.4c35c707
	I0704 00:11:05.821547   62905 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/proxy-client.key
	I0704 00:11:05.821689   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:11:05.821729   62905 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:11:05.821741   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:11:05.821773   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:11:05.821800   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:11:05.821831   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:11:05.821893   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:11:05.822753   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:11:05.867477   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:11:05.914405   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:11:05.952321   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:11:05.989578   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0704 00:11:06.031270   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:11:06.067171   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:11:06.096850   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0704 00:11:06.127959   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:11:06.156780   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:11:06.187472   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:11:06.216078   62905 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:11:06.239490   62905 ssh_runner.go:195] Run: openssl version
	I0704 00:11:06.246358   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:11:06.259420   62905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:11:06.266320   62905 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:11:06.266394   62905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:11:06.273098   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:11:06.285864   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:11:06.298505   62905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:06.303642   62905 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:06.303734   62905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:06.310459   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:11:06.325238   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:11:06.342534   62905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:11:06.349585   62905 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:11:06.349659   62905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:11:06.358043   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
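
The repeated pattern above (openssl x509 -hash -noout, then a symlink named <hash>.0 in /etc/ssl/certs) is how extra CAs are exposed to OpenSSL's hashed-directory lookup. A sketch of the same step driven from Go, with placeholder paths:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCert creates the <subject-hash>.0 symlink that OpenSSL's
    // c_rehash-style directory lookup expects.
    func linkCert(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
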
	I0704 00:11:06.374741   62905 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:11:06.380246   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:11:06.387593   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:11:06.394954   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:11:06.402600   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:11:06.409731   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:11:06.416688   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
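Each `-checkend 86400` call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is what triggers certificate regeneration. A pure-Go sketch of the same check using crypto/x509 (the path is just the first one from the log):

    // checkend.go: the equivalent of `openssl x509 -checkend 86400` for one certificate.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM block found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Valid only if NotAfter is still in the future 24h from now.
    	deadline := time.Now().Add(86400 * time.Second)
    	if cert.NotAfter.Before(deadline) {
    		fmt.Println("certificate will expire within 24h; regeneration would be needed")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another 24h")
    }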
	I0704 00:11:06.423435   62905 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:11:06.423559   62905 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:11:06.423620   62905 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:06.470763   62905 cri.go:89] found id: ""
	I0704 00:11:06.470846   62905 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:11:06.482587   62905 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:11:06.482611   62905 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:11:06.482617   62905 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:11:06.482667   62905 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:11:06.497553   62905 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:11:06.498625   62905 kubeconfig.go:125] found "default-k8s-diff-port-995404" server: "https://192.168.50.164:8444"
	I0704 00:11:06.500884   62905 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:11:06.514955   62905 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.164
	I0704 00:11:06.514990   62905 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:11:06.515004   62905 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:11:06.515063   62905 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:06.560079   62905 cri.go:89] found id: ""
	I0704 00:11:06.560153   62905 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:11:06.579839   62905 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:11:06.591817   62905 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:11:06.591845   62905 kubeadm.go:156] found existing configuration files:
	
	I0704 00:11:06.591939   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0704 00:11:06.602820   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:11:06.602891   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:11:06.615114   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0704 00:11:06.626812   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:11:06.626906   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:11:06.638990   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0704 00:11:06.650344   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:11:06.650412   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:11:06.662736   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0704 00:11:06.673392   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:11:06.673468   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:11:06.684908   62905 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:11:06.696008   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:06.827071   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:03.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:04.117137   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:04.616945   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:05.117085   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:05.616894   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.116767   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.616746   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:07.116615   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:07.616302   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:08.116699   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.324083   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:08.832523   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:05.974211   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:05.974953   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:05.974981   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:05.974853   64017 retry.go:31] will retry after 1.513213206s: waiting for machine to come up
	I0704 00:11:07.489878   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:07.490415   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:07.490447   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:07.490366   64017 retry.go:31] will retry after 1.861027199s: waiting for machine to come up
	I0704 00:11:09.353265   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:09.353877   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:09.353909   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:09.353788   64017 retry.go:31] will retry after 2.788986438s: waiting for machine to come up
	I0704 00:11:07.860520   62905 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.033413742s)
	I0704 00:11:07.860555   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:08.112931   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:08.199561   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
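Rather than a full `kubeadm init`, the restart path above runs the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml, which lets the existing etcd data and cluster identity be reused. A minimal sketch of that sequence, assuming the bundled kubeadm binary path shown in the log; an illustration, not minikube's implementation:

    // phases.go: run the kubeadm init phases used by the control-plane restart path.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.30.2/kubeadm"
    	cfg := "/var/tmp/minikube/kubeadm.yaml"

    	// Phase order mirrors the log above.
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", cfg)
    		cmd := exec.Command(kubeadm, args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", p, err)
    			os.Exit(1)
    		}
    	}
    }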
	I0704 00:11:08.297827   62905 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:11:08.297919   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:08.798666   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.299001   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.326939   62905 api_server.go:72] duration metric: took 1.029121669s to wait for apiserver process to appear ...
	I0704 00:11:09.326980   62905 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:11:09.327006   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:09.327687   62905 api_server.go:269] stopped: https://192.168.50.164:8444/healthz: Get "https://192.168.50.164:8444/healthz": dial tcp 192.168.50.164:8444: connect: connection refused
	I0704 00:11:09.827140   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:12.356043   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:11:12.356074   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:11:12.356090   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:12.431816   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:12.431868   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:08.617011   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.116544   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.617105   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:10.117154   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:10.616678   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:11.117137   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:11.617077   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.116897   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.617090   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:13.116877   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.827129   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:12.833217   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:12.833244   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:13.327458   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:13.335182   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:13.335216   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:13.827833   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:13.833899   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 200:
	ok
	I0704 00:11:13.845708   62905 api_server.go:141] control plane version: v1.30.2
	I0704 00:11:13.845742   62905 api_server.go:131] duration metric: took 4.518754781s to wait for apiserver health ...
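The healthz wait above tolerates an initial connection refusal, a 403 from the anonymous probe, and 500s while post-start hooks finish, and only stops once /healthz returns 200. A minimal sketch of such a poll loop; TLS verification is skipped here only because this illustration does not load the cluster CA:

    // healthz.go: poll the apiserver /healthz endpoint until it reports healthy.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.50.164:8444/healthz" // endpoint taken from the log above

    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			code := resp.StatusCode
    			resp.Body.Close()
    			if code == http.StatusOK {
    				fmt.Println("apiserver is healthy")
    				return
    			}
    			fmt.Println("healthz returned", code, "- retrying")
    		} else {
    			fmt.Println("healthz not reachable yet:", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for a healthy apiserver")
    }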
	I0704 00:11:13.845754   62905 cni.go:84] Creating CNI manager for ""
	I0704 00:11:13.845763   62905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:13.847527   62905 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:11:11.322070   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:13.325898   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:13.848990   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:11:13.866061   62905 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
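The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is a bridge CNI chain. The sketch below writes a representative bridge + portmap conflist; it is an assumption for illustration, not a byte-for-byte copy of the file minikube generates, and the pod subnet in particular may differ:

    // cniconf.go: write a representative bridge CNI configuration (illustrative only).
    package main

    import (
    	"fmt"
    	"os"
    )

    // Representative conflist; the 10.244.0.0/16 subnet is an assumption.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("wrote /etc/cni/net.d/1-k8s.conflist")
    }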
	I0704 00:11:13.895651   62905 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:11:13.907155   62905 system_pods.go:59] 8 kube-system pods found
	I0704 00:11:13.907202   62905 system_pods.go:61] "coredns-7db6d8ff4d-jmq4s" [f9725f92-7635-4111-bf63-66dbef0155b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0704 00:11:13.907214   62905 system_pods.go:61] "etcd-default-k8s-diff-port-995404" [d5065c53-cda8-4c79-9d88-de10341356a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0704 00:11:13.907225   62905 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-995404" [81395c0d-d19c-4d3d-8935-c35a9507abdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0704 00:11:13.907236   62905 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-995404" [36828d21-843b-409c-a0b3-60293cb50c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0704 00:11:13.907245   62905 system_pods.go:61] "kube-proxy-pplqq" [3b74a8c2-1e91-449d-9be9-8891459dccbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0704 00:11:13.907255   62905 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-995404" [e87cc576-7a8d-43dd-9778-b13d751976be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0704 00:11:13.907267   62905 system_pods.go:61] "metrics-server-569cc877fc-v8qw2" [d6a67fb7-5004-4c93-9023-fc470f786ae9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:11:13.907278   62905 system_pods.go:61] "storage-provisioner" [3adc3ff6-282f-4f53-879f-c73d71c76b74] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0704 00:11:13.907290   62905 system_pods.go:74] duration metric: took 11.616438ms to wait for pod list to return data ...
	I0704 00:11:13.907304   62905 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:11:13.911071   62905 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:11:13.911108   62905 node_conditions.go:123] node cpu capacity is 2
	I0704 00:11:13.911121   62905 node_conditions.go:105] duration metric: took 3.808665ms to run NodePressure ...
	I0704 00:11:13.911142   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:14.227778   62905 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0704 00:11:14.232972   62905 kubeadm.go:733] kubelet initialised
	I0704 00:11:14.232999   62905 kubeadm.go:734] duration metric: took 5.196343ms waiting for restarted kubelet to initialise ...
	I0704 00:11:14.233008   62905 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:11:14.239587   62905 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.248503   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.248527   62905 pod_ready.go:81] duration metric: took 8.915991ms for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.248536   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.248546   62905 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.252808   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.252833   62905 pod_ready.go:81] duration metric: took 4.278735ms for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.252844   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.252850   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.257839   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.257865   62905 pod_ready.go:81] duration metric: took 5.008527ms for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.257874   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.257881   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.300453   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.300496   62905 pod_ready.go:81] duration metric: took 42.606835ms for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.300514   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.300532   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.699049   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-proxy-pplqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.699081   62905 pod_ready.go:81] duration metric: took 398.532074ms for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.699091   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-proxy-pplqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.699098   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:15.099751   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.099781   62905 pod_ready.go:81] duration metric: took 400.673785ms for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:15.099794   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.099802   62905 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:15.499381   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.499415   62905 pod_ready.go:81] duration metric: took 399.604282ms for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:15.499430   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.499440   62905 pod_ready.go:38] duration metric: took 1.266419771s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
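The pod_ready checks above inspect each system-critical pod's Ready condition and skip pods whose node is not yet Ready. A minimal client-go sketch that lists the kube-system pods and reports their Ready condition; the kubeconfig path is the host-side one from this run, so substitute your own:

    // podready.go: report the Ready condition of every kube-system pod via client-go.
    package main

    import (
    	"context"
    	"fmt"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18998-9396/kubeconfig")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	for _, pod := range pods.Items {
    		ready := false
    		for _, cond := range pod.Status.Conditions {
    			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		fmt.Printf("%-55s ready=%v\n", pod.Name, ready)
    	}
    }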
	I0704 00:11:15.499472   62905 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:11:15.512486   62905 ops.go:34] apiserver oom_adj: -16
	I0704 00:11:15.512519   62905 kubeadm.go:591] duration metric: took 9.029896614s to restartPrimaryControlPlane
	I0704 00:11:15.512530   62905 kubeadm.go:393] duration metric: took 9.089103352s to StartCluster
	I0704 00:11:15.512545   62905 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:15.512620   62905 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:11:15.514491   62905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:15.514770   62905 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:11:15.514886   62905 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:11:15.514995   62905 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-995404"
	I0704 00:11:15.515051   62905 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-995404"
	I0704 00:11:15.515054   62905 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	W0704 00:11:15.515058   62905 addons.go:243] addon storage-provisioner should already be in state true
	I0704 00:11:15.515045   62905 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-995404"
	I0704 00:11:15.515098   62905 host.go:66] Checking if "default-k8s-diff-port-995404" exists ...
	I0704 00:11:15.515108   62905 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-995404"
	I0704 00:11:15.515100   62905 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-995404"
	I0704 00:11:15.515176   62905 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-995404"
	W0704 00:11:15.515196   62905 addons.go:243] addon metrics-server should already be in state true
	I0704 00:11:15.515258   62905 host.go:66] Checking if "default-k8s-diff-port-995404" exists ...
	I0704 00:11:15.515473   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.515517   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.515554   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.515521   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.515731   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.515773   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.517021   62905 out.go:177] * Verifying Kubernetes components...
	I0704 00:11:15.518682   62905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:15.532184   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0704 00:11:15.532716   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.533287   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.533318   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.533688   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.533710   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36907
	I0704 00:11:15.533894   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.534143   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.534747   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.534774   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.535162   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.535835   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.535895   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.536774   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37887
	I0704 00:11:15.537162   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.537690   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.537702   62905 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-995404"
	W0704 00:11:15.537715   62905 addons.go:243] addon default-storageclass should already be in state true
	I0704 00:11:15.537719   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.537743   62905 host.go:66] Checking if "default-k8s-diff-port-995404" exists ...
	I0704 00:11:15.538134   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.538147   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.538211   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.538756   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.538789   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.554800   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44851
	I0704 00:11:15.554820   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33429
	I0704 00:11:15.555279   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.555417   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.555988   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.556006   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.556255   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.556276   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.556445   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.556628   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.556637   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.556819   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.558057   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38911
	I0704 00:11:15.558381   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.558768   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:11:15.558842   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:11:15.558932   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.558950   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.559179   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.559587   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.559610   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.561573   62905 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:15.561578   62905 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0704 00:11:12.146246   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:12.146817   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:12.146844   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:12.146774   64017 retry.go:31] will retry after 2.705005802s: waiting for machine to come up
	I0704 00:11:14.853545   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:14.854045   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:14.854070   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:14.854001   64017 retry.go:31] will retry after 3.923203683s: waiting for machine to come up
	I0704 00:11:15.563208   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0704 00:11:15.563233   62905 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0704 00:11:15.563259   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:11:15.563282   62905 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:11:15.563297   62905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:11:15.563312   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:11:15.567358   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.567365   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.567758   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:15.567789   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.567823   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:15.567841   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.568374   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:11:15.568472   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:11:15.568596   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:11:15.568652   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:11:15.568744   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:11:15.568833   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:11:15.568853   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:11:15.568955   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:11:15.578317   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40115
	I0704 00:11:15.578737   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.579322   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.579343   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.579673   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.579864   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.582114   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:11:15.582330   62905 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:11:15.582346   62905 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:11:15.582363   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:11:15.585542   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.585917   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:15.585964   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.586130   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:11:15.586317   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:11:15.586503   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:11:15.586677   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:11:15.713704   62905 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:11:15.734147   62905 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-995404" to be "Ready" ...
	I0704 00:11:15.837690   62905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:11:15.858615   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0704 00:11:15.858645   62905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0704 00:11:15.883792   62905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:11:15.904371   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0704 00:11:15.904394   62905 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0704 00:11:15.947164   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:11:15.947205   62905 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0704 00:11:15.976721   62905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:11:16.926851   62905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.089126041s)
	I0704 00:11:16.926885   62905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.043064078s)
	I0704 00:11:16.926909   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.926920   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.926909   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.926989   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.927261   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.927280   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.927290   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.927299   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.927338   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.927382   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.927406   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.927415   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.927423   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.927989   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.928013   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.928022   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.928040   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.928118   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.928187   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.935023   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.935043   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.935367   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.935387   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.963483   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.963508   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.963834   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.963857   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.963866   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.963898   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.964130   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.964181   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.964198   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.964220   62905 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-995404"
	I0704 00:11:16.966338   62905 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0704 00:11:16.967695   62905 addons.go:510] duration metric: took 1.45282727s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0704 00:11:13.616762   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:14.116987   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:14.616559   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.117027   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.617171   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:16.117120   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:16.616978   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:17.116571   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:17.616537   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:18.117113   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.822595   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:18.323016   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:18.782030   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.782543   62043 main.go:141] libmachine: (no-preload-317739) Found IP for machine: 192.168.61.109
	I0704 00:11:18.782568   62043 main.go:141] libmachine: (no-preload-317739) Reserving static IP address...
	I0704 00:11:18.782585   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has current primary IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.782953   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "no-preload-317739", mac: "52:54:00:2a:87:12", ip: "192.168.61.109"} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.782982   62043 main.go:141] libmachine: (no-preload-317739) DBG | skip adding static IP to network mk-no-preload-317739 - found existing host DHCP lease matching {name: "no-preload-317739", mac: "52:54:00:2a:87:12", ip: "192.168.61.109"}
	I0704 00:11:18.782996   62043 main.go:141] libmachine: (no-preload-317739) Reserved static IP address: 192.168.61.109
	I0704 00:11:18.783014   62043 main.go:141] libmachine: (no-preload-317739) Waiting for SSH to be available...
	I0704 00:11:18.783031   62043 main.go:141] libmachine: (no-preload-317739) DBG | Getting to WaitForSSH function...
	I0704 00:11:18.785230   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.785559   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.785593   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.785687   62043 main.go:141] libmachine: (no-preload-317739) DBG | Using SSH client type: external
	I0704 00:11:18.785742   62043 main.go:141] libmachine: (no-preload-317739) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa (-rw-------)
	I0704 00:11:18.785770   62043 main.go:141] libmachine: (no-preload-317739) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:11:18.785801   62043 main.go:141] libmachine: (no-preload-317739) DBG | About to run SSH command:
	I0704 00:11:18.785811   62043 main.go:141] libmachine: (no-preload-317739) DBG | exit 0
	I0704 00:11:18.908065   62043 main.go:141] libmachine: (no-preload-317739) DBG | SSH cmd err, output: <nil>: 
	I0704 00:11:18.908449   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetConfigRaw
	I0704 00:11:18.909142   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:18.911622   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.912075   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.912125   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.912371   62043 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/config.json ...
	I0704 00:11:18.912581   62043 machine.go:94] provisionDockerMachine start ...
	I0704 00:11:18.912599   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:18.912796   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:18.915233   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.915675   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.915709   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.915971   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:18.916175   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:18.916345   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:18.916488   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:18.916689   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:18.916853   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:18.916864   62043 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:11:19.024629   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:11:19.024661   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:11:19.024913   62043 buildroot.go:166] provisioning hostname "no-preload-317739"
	I0704 00:11:19.024929   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:11:19.025143   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.028262   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.028629   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.028653   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.028838   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.029042   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.029233   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.029381   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.029528   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:19.029696   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:19.029708   62043 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-317739 && echo "no-preload-317739" | sudo tee /etc/hostname
	I0704 00:11:19.148642   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-317739
	
	I0704 00:11:19.148679   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.151295   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.151766   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.151788   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.152030   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.152247   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.152438   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.152556   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.152733   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:19.152937   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:19.152953   62043 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-317739' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-317739/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-317739' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:11:19.267475   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:11:19.267510   62043 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:11:19.267541   62043 buildroot.go:174] setting up certificates
	I0704 00:11:19.267553   62043 provision.go:84] configureAuth start
	I0704 00:11:19.267566   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:11:19.267936   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:19.270884   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.271381   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.271409   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.271619   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.274267   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.274641   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.274665   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.274887   62043 provision.go:143] copyHostCerts
	I0704 00:11:19.274950   62043 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:11:19.274962   62043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:11:19.275030   62043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:11:19.275236   62043 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:11:19.275250   62043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:11:19.275284   62043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:11:19.275360   62043 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:11:19.275367   62043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:11:19.275387   62043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:11:19.275440   62043 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.no-preload-317739 san=[127.0.0.1 192.168.61.109 localhost minikube no-preload-317739]
	I0704 00:11:19.642077   62043 provision.go:177] copyRemoteCerts
	I0704 00:11:19.642133   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:11:19.642154   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.645168   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.645553   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.645582   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.645803   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.646005   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.646189   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.646338   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:19.731637   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:11:19.758538   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0704 00:11:19.783554   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0704 00:11:19.809538   62043 provision.go:87] duration metric: took 541.971127ms to configureAuth
	I0704 00:11:19.809571   62043 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:11:19.809800   62043 config.go:182] Loaded profile config "no-preload-317739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:11:19.809877   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.813528   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.814000   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.814042   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.814213   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.814451   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.814641   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.814831   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.815078   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:19.815287   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:19.815328   62043 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:11:20.098956   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:11:20.098984   62043 machine.go:97] duration metric: took 1.186389847s to provisionDockerMachine
	I0704 00:11:20.098999   62043 start.go:293] postStartSetup for "no-preload-317739" (driver="kvm2")
	I0704 00:11:20.099011   62043 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:11:20.099037   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.099367   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:11:20.099397   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.102274   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.102624   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.102650   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.102870   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.103084   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.103254   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.103394   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:20.187063   62043 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:11:20.192127   62043 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:11:20.192159   62043 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:11:20.192253   62043 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:11:20.192344   62043 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:11:20.192451   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:11:20.202990   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:11:20.231649   62043 start.go:296] duration metric: took 132.636585ms for postStartSetup
	I0704 00:11:20.231689   62043 fix.go:56] duration metric: took 20.60266165s for fixHost
	I0704 00:11:20.231708   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.234708   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.235099   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.235129   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.235376   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.235606   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.235813   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.236042   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.236254   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:20.236447   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:20.236460   62043 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:11:20.340846   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051880.311820466
	
	I0704 00:11:20.340874   62043 fix.go:216] guest clock: 1720051880.311820466
	I0704 00:11:20.340883   62043 fix.go:229] Guest: 2024-07-04 00:11:20.311820466 +0000 UTC Remote: 2024-07-04 00:11:20.23169294 +0000 UTC m=+359.429189168 (delta=80.127526ms)
	I0704 00:11:20.340914   62043 fix.go:200] guest clock delta is within tolerance: 80.127526ms
	I0704 00:11:20.340938   62043 start.go:83] releasing machines lock for "no-preload-317739", held for 20.711925187s
	I0704 00:11:20.340963   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.341225   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:20.343787   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.344146   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.344188   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.344360   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.344810   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.344988   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.345061   62043 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:11:20.345094   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.345221   62043 ssh_runner.go:195] Run: cat /version.json
	I0704 00:11:20.345247   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.347703   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.347924   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.348121   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.348150   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.348307   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.348396   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.348423   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.348487   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.348562   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.348645   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.348706   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.348764   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:20.348864   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.348994   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:20.425023   62043 ssh_runner.go:195] Run: systemctl --version
	I0704 00:11:20.456031   62043 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:11:20.601693   62043 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:11:20.609524   62043 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:11:20.609617   62043 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:11:20.628076   62043 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:11:20.628105   62043 start.go:494] detecting cgroup driver to use...
	I0704 00:11:20.628180   62043 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:11:20.646749   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:11:20.663882   62043 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:11:20.663954   62043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:11:20.679371   62043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:11:20.697131   62043 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:11:20.820892   62043 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:11:20.978815   62043 docker.go:233] disabling docker service ...
	I0704 00:11:20.978893   62043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:11:21.003649   62043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:11:21.018708   62043 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:11:21.183699   62043 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:11:21.356015   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:11:21.371775   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:11:21.397901   62043 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:11:21.397977   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.410088   62043 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:11:21.410175   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.422267   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.433879   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.446464   62043 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:11:21.459090   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.474867   62043 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.497013   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.508678   62043 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:11:21.520003   62043 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:11:21.520074   62043 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:11:21.535778   62043 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:11:21.546698   62043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:21.707980   62043 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:11:21.855519   62043 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:11:21.855578   62043 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:11:21.861422   62043 start.go:562] Will wait 60s for crictl version
	I0704 00:11:21.861487   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:21.865898   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:11:21.909151   62043 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:11:21.909231   62043 ssh_runner.go:195] Run: crio --version
	I0704 00:11:21.940532   62043 ssh_runner.go:195] Run: crio --version
	I0704 00:11:21.971921   62043 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:11:17.738168   62905 node_ready.go:53] node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:19.738513   62905 node_ready.go:53] node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:22.238523   62905 node_ready.go:53] node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:18.617104   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:19.116325   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:19.616537   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.116518   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.616709   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:21.117177   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:21.617150   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:22.116980   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:22.616530   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:23.116838   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.824014   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:23.322845   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:21.973345   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:21.976425   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:21.976913   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:21.976941   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:21.977325   62043 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0704 00:11:21.982313   62043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:21.996098   62043 kubeadm.go:877] updating cluster {Name:no-preload-317739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.109 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:11:21.996252   62043 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:11:21.996296   62043 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:11:22.032178   62043 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:11:22.032210   62043 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.2 registry.k8s.io/kube-controller-manager:v1.30.2 registry.k8s.io/kube-scheduler:v1.30.2 registry.k8s.io/kube-proxy:v1.30.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0704 00:11:22.032271   62043 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:22.032305   62043 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.032319   62043 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.032373   62043 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0704 00:11:22.032399   62043 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.032400   62043 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.032375   62043 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.032429   62043 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.033814   62043 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0704 00:11:22.033826   62043 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.033847   62043 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.033812   62043 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.033815   62043 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.033912   62043 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:22.034052   62043 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.034138   62043 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.199984   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.209671   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.236796   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.240953   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.244893   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.260957   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.277666   62043 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.2" does not exist at hash "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe" in container runtime
	I0704 00:11:22.277712   62043 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.277764   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.311908   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0704 00:11:22.314095   62043 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0704 00:11:22.314137   62043 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.314190   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.400926   62043 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0704 00:11:22.400964   62043 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.401011   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.401043   62043 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.2" does not exist at hash "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772" in container runtime
	I0704 00:11:22.401080   62043 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.401121   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.401193   62043 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.2" does not exist at hash "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974" in container runtime
	I0704 00:11:22.401219   62043 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.401255   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.423931   62043 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.2" does not exist at hash "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940" in container runtime
	I0704 00:11:22.423977   62043 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.424024   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.424028   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.525952   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.525991   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.525961   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.526054   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.526136   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.526195   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2
	I0704 00:11:22.526285   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0704 00:11:22.649104   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0704 00:11:22.649109   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2
	I0704 00:11:22.649215   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2
	I0704 00:11:22.649248   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.2
	I0704 00:11:22.649268   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0704 00:11:22.649283   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0704 00:11:22.649217   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0704 00:11:22.649319   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0704 00:11:22.649349   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.2 (exists)
	I0704 00:11:22.649362   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0704 00:11:22.649386   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0704 00:11:22.649414   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2
	I0704 00:11:22.649486   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0704 00:11:22.654629   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.2 (exists)
	I0704 00:11:22.661840   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.2 (exists)
	I0704 00:11:22.919526   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:25.779714   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2: (3.130310457s)
	I0704 00:11:25.779744   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2 from cache
	I0704 00:11:25.779765   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.2
	I0704 00:11:25.779776   62043 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (3.130431638s)
	I0704 00:11:25.779796   62043 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.2: (3.13049417s)
	I0704 00:11:25.779816   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0704 00:11:25.779817   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2
	I0704 00:11:25.779827   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.2 (exists)
	I0704 00:11:25.779856   62043 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (3.130541061s)
	I0704 00:11:25.779869   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0704 00:11:25.779908   62043 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.860354689s)
	I0704 00:11:25.779936   62043 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0704 00:11:25.779958   62043 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:25.779991   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:23.248630   62905 node_ready.go:49] node "default-k8s-diff-port-995404" has status "Ready":"True"
	I0704 00:11:23.248671   62905 node_ready.go:38] duration metric: took 7.514485634s for node "default-k8s-diff-port-995404" to be "Ready" ...
	I0704 00:11:23.248683   62905 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:11:23.257650   62905 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.272673   62905 pod_ready.go:92] pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:23.272706   62905 pod_ready.go:81] duration metric: took 15.025018ms for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.272730   62905 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.277707   62905 pod_ready.go:92] pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:23.277738   62905 pod_ready.go:81] duration metric: took 4.999575ms for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.277758   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.282447   62905 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:23.282471   62905 pod_ready.go:81] duration metric: took 4.705643ms for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.282481   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.790312   62905 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:24.790337   62905 pod_ready.go:81] duration metric: took 1.507850095s for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.790346   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.837961   62905 pod_ready.go:92] pod "kube-proxy-pplqq" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:24.837985   62905 pod_ready.go:81] duration metric: took 47.632749ms for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.837994   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:25.238771   62905 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:25.238800   62905 pod_ready.go:81] duration metric: took 400.798382ms for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:25.238814   62905 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:27.246820   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:23.616811   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:24.117212   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:24.616915   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.117183   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.616495   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:26.117078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:26.617000   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:27.117057   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:27.616823   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:28.116508   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.326734   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:27.823765   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:27.940196   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2: (2.160353743s)
	I0704 00:11:27.940226   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2 from cache
	I0704 00:11:27.940234   62043 ssh_runner.go:235] Completed: which crictl: (2.160222414s)
	I0704 00:11:27.940320   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:27.940253   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0704 00:11:27.940393   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0704 00:11:27.979809   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0704 00:11:27.979954   62043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0704 00:11:29.403572   62043 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.423593257s)
	I0704 00:11:29.403607   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0704 00:11:29.403699   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2: (1.46328757s)
	I0704 00:11:29.403725   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2 from cache
	I0704 00:11:29.403761   62043 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0704 00:11:29.403822   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0704 00:11:29.247499   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:31.750339   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:28.616737   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:29.117100   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:29.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.117145   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:31.116945   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:31.616330   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:32.117101   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:32.616616   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:33.116964   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.322707   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:32.323955   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:33.202513   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.798664869s)
	I0704 00:11:33.202547   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0704 00:11:33.202573   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0704 00:11:33.202627   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0704 00:11:35.468074   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2: (2.26542461s)
	I0704 00:11:35.468099   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2 from cache
	I0704 00:11:35.468118   62043 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0704 00:11:35.468165   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0704 00:11:34.246217   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:36.246836   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:33.617132   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.117094   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.616914   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:35.117109   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:35.617095   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:36.117232   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:36.617221   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:37.117109   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:37.617116   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:38.116462   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.324255   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:36.823008   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:38.823183   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:37.443636   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.975448204s)
	I0704 00:11:37.443672   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0704 00:11:37.443706   62043 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0704 00:11:37.443759   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0704 00:11:38.405813   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0704 00:11:38.405859   62043 cache_images.go:123] Successfully loaded all cached images
	I0704 00:11:38.405868   62043 cache_images.go:92] duration metric: took 16.373643393s to LoadCachedImages
	I0704 00:11:38.405886   62043 kubeadm.go:928] updating node { 192.168.61.109 8443 v1.30.2 crio true true} ...
	I0704 00:11:38.406011   62043 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-317739 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:11:38.406077   62043 ssh_runner.go:195] Run: crio config
	I0704 00:11:38.452523   62043 cni.go:84] Creating CNI manager for ""
	I0704 00:11:38.452552   62043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:38.452564   62043 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:11:38.452585   62043 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.109 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-317739 NodeName:no-preload-317739 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:11:38.452729   62043 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-317739"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:11:38.452788   62043 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:11:38.463737   62043 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:11:38.463815   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:11:38.473969   62043 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0704 00:11:38.492719   62043 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:11:38.510951   62043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0704 00:11:38.530396   62043 ssh_runner.go:195] Run: grep 192.168.61.109	control-plane.minikube.internal$ /etc/hosts
	I0704 00:11:38.534736   62043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:38.548662   62043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:38.668693   62043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:11:38.686552   62043 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739 for IP: 192.168.61.109
	I0704 00:11:38.686580   62043 certs.go:194] generating shared ca certs ...
	I0704 00:11:38.686601   62043 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:38.686762   62043 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:11:38.686815   62043 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:11:38.686830   62043 certs.go:256] generating profile certs ...
	I0704 00:11:38.686955   62043 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/client.key
	I0704 00:11:38.687015   62043 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/apiserver.key.fbaaa8e5
	I0704 00:11:38.687048   62043 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/proxy-client.key
	I0704 00:11:38.687185   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:11:38.687241   62043 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:11:38.687253   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:11:38.687283   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:11:38.687310   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:11:38.687336   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:11:38.687384   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:11:38.688258   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:11:38.731211   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:11:38.769339   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:11:38.803861   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:11:38.856375   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0704 00:11:38.903970   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0704 00:11:38.933988   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:11:38.962742   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0704 00:11:38.990067   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:11:39.017654   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:11:39.044418   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:11:39.073061   62043 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:11:39.091979   62043 ssh_runner.go:195] Run: openssl version
	I0704 00:11:39.098299   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:11:39.110043   62043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:39.115156   62043 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:39.115229   62043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:39.122107   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:11:39.134113   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:11:39.145947   62043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:11:39.151296   62043 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:11:39.151367   62043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:11:39.158116   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:11:39.170555   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:11:39.182771   62043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:11:39.187922   62043 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:11:39.187980   62043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:11:39.194397   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:11:39.206665   62043 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:11:39.212352   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:11:39.219422   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:11:39.226488   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:11:39.233503   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:11:39.241906   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:11:39.249915   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0704 00:11:39.256813   62043 kubeadm.go:391] StartCluster: {Name:no-preload-317739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.109 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:11:39.256922   62043 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:11:39.256977   62043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:39.303203   62043 cri.go:89] found id: ""
	I0704 00:11:39.303281   62043 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:11:39.315407   62043 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:11:39.315446   62043 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:11:39.315454   62043 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:11:39.315508   62043 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:11:39.327630   62043 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:11:39.328741   62043 kubeconfig.go:125] found "no-preload-317739" server: "https://192.168.61.109:8443"
	I0704 00:11:39.330937   62043 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:11:39.341998   62043 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.109
	I0704 00:11:39.342043   62043 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:11:39.342054   62043 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:11:39.342111   62043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:39.388325   62043 cri.go:89] found id: ""
	I0704 00:11:39.388388   62043 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:11:39.408800   62043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:11:39.419600   62043 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:11:39.419627   62043 kubeadm.go:156] found existing configuration files:
	
	I0704 00:11:39.419679   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:11:39.429630   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:11:39.429685   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:11:39.440630   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:11:39.451260   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:11:39.451331   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:11:39.462847   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:11:39.473571   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:11:39.473636   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:11:39.484558   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:11:39.494914   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:11:39.494983   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:11:39.505423   62043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:11:39.517115   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:39.634364   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:40.407653   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:40.607831   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:40.692358   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:38.746247   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:41.244978   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:38.616739   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:39.117077   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:39.616185   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:40.117134   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:40.616879   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.116543   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.616267   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:42.117061   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:42.617080   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:43.117099   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.323333   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:43.823117   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:40.848560   62043 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:11:40.848652   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.349180   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.849767   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.870137   62043 api_server.go:72] duration metric: took 1.021586191s to wait for apiserver process to appear ...
	I0704 00:11:41.870167   62043 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:11:41.870195   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:41.870657   62043 api_server.go:269] stopped: https://192.168.61.109:8443/healthz: Get "https://192.168.61.109:8443/healthz": dial tcp 192.168.61.109:8443: connect: connection refused
	I0704 00:11:42.371347   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:44.502396   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:11:44.502439   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:11:44.502477   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:44.536593   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:11:44.536636   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:11:44.870429   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:44.877522   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:44.877559   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:45.371097   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:45.375932   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:45.375970   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:45.870776   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:45.880030   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 200:
	ok
	I0704 00:11:45.895702   62043 api_server.go:141] control plane version: v1.30.2
	I0704 00:11:45.895729   62043 api_server.go:131] duration metric: took 4.025556366s to wait for apiserver health ...
	I0704 00:11:45.895737   62043 cni.go:84] Creating CNI manager for ""
	I0704 00:11:45.895743   62043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:45.897406   62043 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:11:43.245949   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:45.747887   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:43.616868   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:44.117083   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:44.617057   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:45.116941   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:45.617066   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:46.117210   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:46.617116   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:47.116404   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:47.616609   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:48.116518   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:48.116611   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:48.159432   62670 cri.go:89] found id: ""
	I0704 00:11:48.159464   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.159477   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:48.159486   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:48.159553   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:48.199101   62670 cri.go:89] found id: ""
	I0704 00:11:48.199136   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.199144   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:48.199152   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:48.199208   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:48.238058   62670 cri.go:89] found id: ""
	I0704 00:11:48.238079   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.238087   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:48.238092   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:48.238145   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:46.322861   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:48.824946   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:45.898725   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:11:45.923585   62043 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0704 00:11:45.943430   62043 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:11:45.958774   62043 system_pods.go:59] 8 kube-system pods found
	I0704 00:11:45.958804   62043 system_pods.go:61] "coredns-7db6d8ff4d-pvtv9" [f03f871e-3b09-4fbb-96e5-3e71712dd2fb] Running
	I0704 00:11:45.958811   62043 system_pods.go:61] "etcd-no-preload-317739" [ad364ac9-924e-4e56-90c4-12cbf42c3e83] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0704 00:11:45.958824   62043 system_pods.go:61] "kube-apiserver-no-preload-317739" [2d503950-29dc-47b3-905a-afa85655ca7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0704 00:11:45.958832   62043 system_pods.go:61] "kube-controller-manager-no-preload-317739" [a9cbe158-bf00-478c-8d70-7347e37d68a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0704 00:11:45.958837   62043 system_pods.go:61] "kube-proxy-ffmrg" [c710ce9d-c513-46b1-bcf8-1582d1974861] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0704 00:11:45.958841   62043 system_pods.go:61] "kube-scheduler-no-preload-317739" [07a488b3-7beb-4919-ad57-3f0b55a73bb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0704 00:11:45.958846   62043 system_pods.go:61] "metrics-server-569cc877fc-qn22n" [378b139e-97d6-4dfa-9b56-99dda111ab31] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:11:45.958857   62043 system_pods.go:61] "storage-provisioner" [66ecf6fc-5070-4374-a733-479b9b3cdc0d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0704 00:11:45.958866   62043 system_pods.go:74] duration metric: took 15.413948ms to wait for pod list to return data ...
	I0704 00:11:45.958881   62043 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:11:45.965318   62043 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:11:45.965346   62043 node_conditions.go:123] node cpu capacity is 2
	I0704 00:11:45.965355   62043 node_conditions.go:105] duration metric: took 6.466225ms to run NodePressure ...
	I0704 00:11:45.965371   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:46.324716   62043 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0704 00:11:46.329924   62043 kubeadm.go:733] kubelet initialised
	I0704 00:11:46.329951   62043 kubeadm.go:734] duration metric: took 5.207276ms waiting for restarted kubelet to initialise ...
	I0704 00:11:46.329963   62043 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:11:46.336531   62043 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.341733   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.341758   62043 pod_ready.go:81] duration metric: took 5.197122ms for pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.341769   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.341778   62043 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.348317   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "etcd-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.348341   62043 pod_ready.go:81] duration metric: took 6.552656ms for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.348349   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "etcd-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.348355   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.353840   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "kube-apiserver-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.353864   62043 pod_ready.go:81] duration metric: took 5.503642ms for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.353873   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "kube-apiserver-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.353878   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.362159   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.362205   62043 pod_ready.go:81] duration metric: took 8.315884ms for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.362218   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.362226   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ffmrg" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:47.148496   62043 pod_ready.go:92] pod "kube-proxy-ffmrg" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:47.148533   62043 pod_ready.go:81] duration metric: took 786.291174ms for pod "kube-proxy-ffmrg" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:47.148544   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:49.154946   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:48.246804   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:50.249339   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:48.279472   62670 cri.go:89] found id: ""
	I0704 00:11:48.279510   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.279521   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:48.279529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:48.279598   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:48.316814   62670 cri.go:89] found id: ""
	I0704 00:11:48.316833   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.316843   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:48.316851   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:48.316907   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:48.358196   62670 cri.go:89] found id: ""
	I0704 00:11:48.358230   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.358247   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:48.358252   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:48.358310   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:48.404992   62670 cri.go:89] found id: ""
	I0704 00:11:48.405012   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.405019   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:48.405024   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:48.405092   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:48.444358   62670 cri.go:89] found id: ""
	I0704 00:11:48.444385   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.444393   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:48.444401   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:48.444414   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:48.502426   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:48.502462   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:48.517885   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:48.517915   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:48.654987   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:48.655007   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:48.655022   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:48.719857   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:48.719908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
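For context, each diagnostic pass logged by process 62670 above reduces to the same handful of host commands. A minimal shell sketch of one pass, using only commands and paths that appear verbatim in the log (the binary and kubeconfig paths are specific to this v1.20.0 run), would be:

	# list any control-plane containers CRI-O knows about (every such query returned no IDs in this run)
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=etcd
	# collect the same logs minikube gathers when the containers are missing
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# this is the step that keeps failing with "connection to the server localhost:8443 was refused"
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig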
	I0704 00:11:51.265451   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:51.279847   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:51.279951   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:51.317907   62670 cri.go:89] found id: ""
	I0704 00:11:51.317942   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.317954   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:51.317963   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:51.318036   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:51.358329   62670 cri.go:89] found id: ""
	I0704 00:11:51.358361   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.358370   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:51.358375   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:51.358440   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:51.396389   62670 cri.go:89] found id: ""
	I0704 00:11:51.396418   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.396426   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:51.396433   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:51.396479   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:51.433921   62670 cri.go:89] found id: ""
	I0704 00:11:51.433954   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.433964   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:51.433972   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:51.434030   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:51.472956   62670 cri.go:89] found id: ""
	I0704 00:11:51.472986   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.472997   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:51.473003   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:51.473064   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:51.511241   62670 cri.go:89] found id: ""
	I0704 00:11:51.511269   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.511277   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:51.511283   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:51.511330   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:51.550622   62670 cri.go:89] found id: ""
	I0704 00:11:51.550647   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.550658   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:51.550665   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:51.550717   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:51.595101   62670 cri.go:89] found id: ""
	I0704 00:11:51.595129   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.595141   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:51.595152   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:51.595167   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:51.662852   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:51.662893   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:51.712755   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:51.712800   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:51.774138   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:51.774181   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:51.789895   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:51.789925   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:51.866376   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:51.325312   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:53.821791   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:51.156502   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:53.158089   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:55.656131   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:52.747469   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:55.248313   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:54.367005   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:54.382875   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:54.382938   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:54.419672   62670 cri.go:89] found id: ""
	I0704 00:11:54.419702   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.419713   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:54.419720   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:54.419790   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:54.464134   62670 cri.go:89] found id: ""
	I0704 00:11:54.464161   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.464170   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:54.464175   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:54.464233   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:54.502825   62670 cri.go:89] found id: ""
	I0704 00:11:54.502848   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.502855   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:54.502861   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:54.502913   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:54.542172   62670 cri.go:89] found id: ""
	I0704 00:11:54.542199   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.542207   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:54.542212   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:54.542275   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:54.580488   62670 cri.go:89] found id: ""
	I0704 00:11:54.580517   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.580527   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:54.580534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:54.580600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:54.616925   62670 cri.go:89] found id: ""
	I0704 00:11:54.616950   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.616959   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:54.616965   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:54.617011   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:54.654388   62670 cri.go:89] found id: ""
	I0704 00:11:54.654416   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.654426   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:54.654434   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:54.654492   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:54.697867   62670 cri.go:89] found id: ""
	I0704 00:11:54.697895   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.697905   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:54.697916   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:54.697948   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:54.753899   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:54.753933   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:54.768684   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:54.768708   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:54.843026   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:54.843052   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:54.843069   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:54.920335   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:54.920388   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:57.463384   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:57.479721   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:57.479809   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:57.521845   62670 cri.go:89] found id: ""
	I0704 00:11:57.521931   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.521944   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:57.521952   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:57.522017   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:57.559595   62670 cri.go:89] found id: ""
	I0704 00:11:57.559626   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.559635   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:57.559642   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:57.559704   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:57.600881   62670 cri.go:89] found id: ""
	I0704 00:11:57.600906   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.600917   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:57.600923   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:57.600984   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:57.646031   62670 cri.go:89] found id: ""
	I0704 00:11:57.646059   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.646068   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:57.646073   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:57.646141   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:57.692031   62670 cri.go:89] found id: ""
	I0704 00:11:57.692057   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.692065   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:57.692071   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:57.692118   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:57.730220   62670 cri.go:89] found id: ""
	I0704 00:11:57.730252   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.730263   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:57.730271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:57.730335   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:57.771323   62670 cri.go:89] found id: ""
	I0704 00:11:57.771350   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.771361   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:57.771369   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:57.771441   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:57.808590   62670 cri.go:89] found id: ""
	I0704 00:11:57.808617   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.808625   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:57.808633   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:57.808644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:57.825034   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:57.825063   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:57.906713   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:57.906734   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:57.906746   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:57.988497   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:57.988533   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:58.056774   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:58.056805   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:55.825329   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:58.322936   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:57.657693   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:58.655007   62043 pod_ready.go:92] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:58.655031   62043 pod_ready.go:81] duration metric: took 11.506481518s for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:58.655040   62043 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace to be "Ready" ...
	I0704 00:12:00.662830   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
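The pod_ready.go lines above come from minikube polling each kube-system pod for a Ready condition with a 4m0s budget per pod. A rough manual equivalent from the shell, assuming the kubectl context carries the profile name seen in the log, would be:

	kubectl --context no-preload-317739 -n kube-system wait \
	  --for=condition=Ready pod/metrics-server-569cc877fc-qn22n --timeout=4m0s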
	I0704 00:11:57.749330   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:00.244482   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:02.245230   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:00.609663   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:00.623785   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:00.623851   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:00.669164   62670 cri.go:89] found id: ""
	I0704 00:12:00.669187   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.669194   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:00.669200   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:00.669253   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:00.710018   62670 cri.go:89] found id: ""
	I0704 00:12:00.710044   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.710052   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:00.710057   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:00.710107   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:00.747778   62670 cri.go:89] found id: ""
	I0704 00:12:00.747803   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.747810   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:00.747815   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:00.747900   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:00.787312   62670 cri.go:89] found id: ""
	I0704 00:12:00.787339   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.787347   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:00.787352   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:00.787399   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:00.828018   62670 cri.go:89] found id: ""
	I0704 00:12:00.828049   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.828061   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:00.828070   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:00.828135   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:00.864695   62670 cri.go:89] found id: ""
	I0704 00:12:00.864723   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.864734   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:00.864742   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:00.864800   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:00.907804   62670 cri.go:89] found id: ""
	I0704 00:12:00.907833   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.907843   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:00.907850   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:00.907928   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:00.951505   62670 cri.go:89] found id: ""
	I0704 00:12:00.951536   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.951547   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:00.951557   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:00.951573   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:00.997067   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:00.997115   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:01.049321   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:01.049356   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:01.066878   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:01.066908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:01.152888   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:01.152919   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:01.152935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:00.823441   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:03.322789   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:03.161704   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:05.662715   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:04.247328   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:06.746227   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:03.737731   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:03.753151   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:03.753244   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:03.816045   62670 cri.go:89] found id: ""
	I0704 00:12:03.816076   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.816087   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:03.816095   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:03.816154   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:03.857041   62670 cri.go:89] found id: ""
	I0704 00:12:03.857070   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.857081   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:03.857088   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:03.857152   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:03.896734   62670 cri.go:89] found id: ""
	I0704 00:12:03.896763   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.896774   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:03.896781   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:03.896836   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:03.936142   62670 cri.go:89] found id: ""
	I0704 00:12:03.936168   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.936178   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:03.936183   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:03.936258   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:03.974599   62670 cri.go:89] found id: ""
	I0704 00:12:03.974623   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.974631   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:03.974636   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:03.974686   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:04.012822   62670 cri.go:89] found id: ""
	I0704 00:12:04.012851   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.012859   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:04.012865   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:04.012999   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:04.051360   62670 cri.go:89] found id: ""
	I0704 00:12:04.051393   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.051411   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:04.051420   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:04.051485   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:04.090587   62670 cri.go:89] found id: ""
	I0704 00:12:04.090616   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.090627   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:04.090638   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:04.090654   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:04.167427   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:04.167450   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:04.167465   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:04.250550   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:04.250594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:04.299970   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:04.300003   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:04.352960   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:04.352994   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:06.871729   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:06.884948   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:06.885027   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:06.920910   62670 cri.go:89] found id: ""
	I0704 00:12:06.920939   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.920950   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:06.920957   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:06.921024   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:06.958701   62670 cri.go:89] found id: ""
	I0704 00:12:06.958731   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.958742   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:06.958750   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:06.958808   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:06.997468   62670 cri.go:89] found id: ""
	I0704 00:12:06.997499   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.997509   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:06.997515   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:06.997564   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:07.033767   62670 cri.go:89] found id: ""
	I0704 00:12:07.033795   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.033806   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:07.033814   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:07.033896   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:07.074189   62670 cri.go:89] found id: ""
	I0704 00:12:07.074218   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.074229   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:07.074241   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:07.074307   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:07.110517   62670 cri.go:89] found id: ""
	I0704 00:12:07.110544   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.110554   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:07.110562   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:07.110615   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:07.146600   62670 cri.go:89] found id: ""
	I0704 00:12:07.146627   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.146635   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:07.146641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:07.146690   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:07.180799   62670 cri.go:89] found id: ""
	I0704 00:12:07.180826   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.180834   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:07.180843   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:07.180859   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:07.222473   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:07.222503   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:07.281453   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:07.281498   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:07.296335   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:07.296364   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:07.375751   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:07.375782   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:07.375805   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:05.323723   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:07.822320   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:07.663501   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:10.163774   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:09.247753   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:11.746082   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:09.954585   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:09.970379   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:09.970470   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:10.011987   62670 cri.go:89] found id: ""
	I0704 00:12:10.012017   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.012028   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:10.012035   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:10.012102   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:10.054940   62670 cri.go:89] found id: ""
	I0704 00:12:10.054971   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.054982   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:10.054989   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:10.055051   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:10.096048   62670 cri.go:89] found id: ""
	I0704 00:12:10.096079   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.096087   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:10.096093   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:10.096143   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:10.141795   62670 cri.go:89] found id: ""
	I0704 00:12:10.141818   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.141826   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:10.141831   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:10.141892   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:10.188257   62670 cri.go:89] found id: ""
	I0704 00:12:10.188283   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.188295   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:10.188302   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:10.188369   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:10.249134   62670 cri.go:89] found id: ""
	I0704 00:12:10.249157   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.249167   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:10.249174   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:10.249233   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:10.309586   62670 cri.go:89] found id: ""
	I0704 00:12:10.309611   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.309622   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:10.309632   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:10.309689   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:10.351027   62670 cri.go:89] found id: ""
	I0704 00:12:10.351054   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.351065   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:10.351074   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:10.351086   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:10.404371   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:10.404411   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:10.419379   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:10.419410   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:10.502977   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:10.503001   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:10.503017   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:10.582149   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:10.582185   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:13.122828   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:13.138522   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:13.138591   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:13.181603   62670 cri.go:89] found id: ""
	I0704 00:12:13.181634   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.181645   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:13.181653   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:13.181711   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:13.219066   62670 cri.go:89] found id: ""
	I0704 00:12:13.219090   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.219098   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:13.219103   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:13.219159   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:09.822778   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:12.322555   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:12.165249   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:14.663051   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:14.248889   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:16.746104   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:13.259570   62670 cri.go:89] found id: ""
	I0704 00:12:13.259591   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.259599   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:13.259604   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:13.259658   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:13.301577   62670 cri.go:89] found id: ""
	I0704 00:12:13.301605   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.301617   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:13.301625   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:13.301689   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:13.339546   62670 cri.go:89] found id: ""
	I0704 00:12:13.339570   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.339584   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:13.339592   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:13.339649   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:13.378631   62670 cri.go:89] found id: ""
	I0704 00:12:13.378654   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.378665   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:13.378672   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:13.378733   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:13.416818   62670 cri.go:89] found id: ""
	I0704 00:12:13.416843   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.416851   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:13.416856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:13.416908   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:13.452538   62670 cri.go:89] found id: ""
	I0704 00:12:13.452562   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.452570   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:13.452579   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:13.452590   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:13.505556   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:13.505594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:13.522506   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:13.522542   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:13.604513   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:13.604536   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:13.604553   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:13.681501   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:13.681536   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:16.222955   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:16.241979   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:16.242086   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:16.299662   62670 cri.go:89] found id: ""
	I0704 00:12:16.299690   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.299702   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:16.299710   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:16.299772   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:16.342898   62670 cri.go:89] found id: ""
	I0704 00:12:16.342934   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.342944   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:16.342952   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:16.343014   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:16.382387   62670 cri.go:89] found id: ""
	I0704 00:12:16.382408   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.382416   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:16.382422   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:16.382482   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:16.421830   62670 cri.go:89] found id: ""
	I0704 00:12:16.421852   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.421861   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:16.421874   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:16.421934   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:16.459248   62670 cri.go:89] found id: ""
	I0704 00:12:16.459272   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.459282   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:16.459289   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:16.459347   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:16.494675   62670 cri.go:89] found id: ""
	I0704 00:12:16.494704   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.494714   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:16.494725   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:16.494789   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:16.534319   62670 cri.go:89] found id: ""
	I0704 00:12:16.534344   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.534352   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:16.534358   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:16.534407   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:16.571422   62670 cri.go:89] found id: ""
	I0704 00:12:16.571455   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.571467   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:16.571478   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:16.571493   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:16.651019   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:16.651040   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:16.651058   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:16.726538   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:16.726574   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:16.771114   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:16.771145   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:16.824495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:16.824532   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:14.323436   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:16.822647   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:18.823509   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:16.666213   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:19.162586   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:18.746307   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:20.747743   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:19.340941   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:19.355501   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:19.355580   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:19.396845   62670 cri.go:89] found id: ""
	I0704 00:12:19.396872   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.396882   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:19.396902   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:19.396962   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:19.440805   62670 cri.go:89] found id: ""
	I0704 00:12:19.440835   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.440845   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:19.440852   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:19.440913   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:19.477781   62670 cri.go:89] found id: ""
	I0704 00:12:19.477809   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.477820   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:19.477827   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:19.477890   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:19.513042   62670 cri.go:89] found id: ""
	I0704 00:12:19.513067   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.513077   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:19.513084   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:19.513142   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:19.547775   62670 cri.go:89] found id: ""
	I0704 00:12:19.547804   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.547812   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:19.547818   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:19.547867   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:19.586103   62670 cri.go:89] found id: ""
	I0704 00:12:19.586131   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.586142   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:19.586149   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:19.586219   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:19.625529   62670 cri.go:89] found id: ""
	I0704 00:12:19.625556   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.625567   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:19.625574   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:19.625644   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:19.663835   62670 cri.go:89] found id: ""
	I0704 00:12:19.663860   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.663870   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:19.663903   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:19.663919   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:19.719204   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:19.719245   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:19.733871   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:19.733909   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:19.817212   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:19.817240   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:19.817260   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:19.894555   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:19.894595   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:22.438204   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:22.451438   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:22.451507   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:22.489196   62670 cri.go:89] found id: ""
	I0704 00:12:22.489219   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.489226   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:22.489232   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:22.489278   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:22.523870   62670 cri.go:89] found id: ""
	I0704 00:12:22.523917   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.523929   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:22.523936   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:22.523992   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:22.564799   62670 cri.go:89] found id: ""
	I0704 00:12:22.564827   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.564839   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:22.564846   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:22.564905   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:22.603993   62670 cri.go:89] found id: ""
	I0704 00:12:22.604019   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.604027   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:22.604033   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:22.604086   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:22.639749   62670 cri.go:89] found id: ""
	I0704 00:12:22.639780   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.639791   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:22.639799   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:22.639855   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:22.678173   62670 cri.go:89] found id: ""
	I0704 00:12:22.678206   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.678214   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:22.678227   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:22.678279   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:22.718934   62670 cri.go:89] found id: ""
	I0704 00:12:22.718962   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.718971   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:22.718977   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:22.719029   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:22.756334   62670 cri.go:89] found id: ""
	I0704 00:12:22.756362   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.756373   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:22.756383   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:22.756397   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:22.835079   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:22.835113   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:22.877138   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:22.877170   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:22.930427   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:22.930466   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:22.945810   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:22.945838   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:23.021251   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:21.323951   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:23.822002   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:21.165297   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:23.661688   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:23.245394   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:25.748364   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:25.522380   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:25.536705   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:25.536776   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:25.575126   62670 cri.go:89] found id: ""
	I0704 00:12:25.575154   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.575162   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:25.575168   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:25.575223   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:25.612447   62670 cri.go:89] found id: ""
	I0704 00:12:25.612480   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.612488   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:25.612494   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:25.612542   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:25.651652   62670 cri.go:89] found id: ""
	I0704 00:12:25.651677   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.651688   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:25.651696   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:25.651751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:25.690007   62670 cri.go:89] found id: ""
	I0704 00:12:25.690034   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.690042   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:25.690049   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:25.690105   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:25.725041   62670 cri.go:89] found id: ""
	I0704 00:12:25.725093   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.725106   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:25.725114   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:25.725196   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:25.766324   62670 cri.go:89] found id: ""
	I0704 00:12:25.766350   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.766361   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:25.766369   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:25.766430   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:25.803515   62670 cri.go:89] found id: ""
	I0704 00:12:25.803540   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.803548   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:25.803553   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:25.803613   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:25.845016   62670 cri.go:89] found id: ""
	I0704 00:12:25.845046   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.845057   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:25.845067   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:25.845089   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:25.898536   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:25.898570   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:25.913300   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:25.913330   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:25.987372   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:25.987390   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:25.987402   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:26.073931   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:26.073982   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:25.824395   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.324952   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:26.162199   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.162671   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:30.662302   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.246148   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:30.247149   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.621179   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:28.634247   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:28.634321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:28.672433   62670 cri.go:89] found id: ""
	I0704 00:12:28.672458   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.672467   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:28.672473   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:28.672522   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:28.712000   62670 cri.go:89] found id: ""
	I0704 00:12:28.712036   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.712049   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:28.712059   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:28.712126   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:28.751170   62670 cri.go:89] found id: ""
	I0704 00:12:28.751202   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.751213   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:28.751222   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:28.751283   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:28.788015   62670 cri.go:89] found id: ""
	I0704 00:12:28.788050   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.788062   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:28.788071   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:28.788141   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:28.826467   62670 cri.go:89] found id: ""
	I0704 00:12:28.826501   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.826511   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:28.826518   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:28.826578   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:28.864375   62670 cri.go:89] found id: ""
	I0704 00:12:28.864397   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.864403   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:28.864408   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:28.864461   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:28.900137   62670 cri.go:89] found id: ""
	I0704 00:12:28.900160   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.900167   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:28.900173   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:28.900220   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:28.934865   62670 cri.go:89] found id: ""
	I0704 00:12:28.934886   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.934894   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:28.934902   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:28.934914   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:28.984100   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:28.984136   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:29.000311   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:29.000340   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:29.083272   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:29.083304   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:29.083318   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:29.164613   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:29.164644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:31.711402   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:31.725076   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:31.725134   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:31.763088   62670 cri.go:89] found id: ""
	I0704 00:12:31.763111   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.763120   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:31.763127   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:31.763197   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:31.800920   62670 cri.go:89] found id: ""
	I0704 00:12:31.800942   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.800952   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:31.800958   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:31.801001   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:31.840841   62670 cri.go:89] found id: ""
	I0704 00:12:31.840872   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.840889   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:31.840897   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:31.840956   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:31.883757   62670 cri.go:89] found id: ""
	I0704 00:12:31.883784   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.883792   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:31.883797   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:31.883855   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:31.922234   62670 cri.go:89] found id: ""
	I0704 00:12:31.922261   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.922270   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:31.922275   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:31.922323   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:31.959691   62670 cri.go:89] found id: ""
	I0704 00:12:31.959717   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.959725   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:31.959731   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:31.959789   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:31.997069   62670 cri.go:89] found id: ""
	I0704 00:12:31.997098   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.997106   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:31.997112   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:31.997182   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:32.032437   62670 cri.go:89] found id: ""
	I0704 00:12:32.032475   62670 logs.go:276] 0 containers: []
	W0704 00:12:32.032484   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:32.032495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:32.032510   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:32.046791   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:32.046823   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:32.118482   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:32.118506   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:32.118519   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:32.206600   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:32.206638   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:32.249940   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:32.249967   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:30.823529   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:33.322802   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:33.161603   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:35.162213   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:32.746670   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:34.746760   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:37.245283   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:34.808364   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:34.822973   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:34.823039   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:34.859617   62670 cri.go:89] found id: ""
	I0704 00:12:34.859640   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.859649   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:34.859654   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:34.859703   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:34.899724   62670 cri.go:89] found id: ""
	I0704 00:12:34.899752   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.899762   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:34.899768   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:34.899830   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:34.939063   62670 cri.go:89] found id: ""
	I0704 00:12:34.939090   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.939098   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:34.939104   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:34.939185   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:34.979062   62670 cri.go:89] found id: ""
	I0704 00:12:34.979090   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.979101   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:34.979108   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:34.979168   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:35.019580   62670 cri.go:89] found id: ""
	I0704 00:12:35.019613   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.019621   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:35.019626   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:35.019674   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:35.064364   62670 cri.go:89] found id: ""
	I0704 00:12:35.064391   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.064399   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:35.064404   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:35.064463   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:35.105004   62670 cri.go:89] found id: ""
	I0704 00:12:35.105032   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.105040   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:35.105046   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:35.105101   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:35.143656   62670 cri.go:89] found id: ""
	I0704 00:12:35.143681   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.143689   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:35.143698   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:35.143709   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:35.203016   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:35.203050   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:35.218808   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:35.218840   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:35.298247   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:35.298269   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:35.298284   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:35.376425   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:35.376463   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:37.918592   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:37.932291   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:37.932370   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:37.967657   62670 cri.go:89] found id: ""
	I0704 00:12:37.967680   62670 logs.go:276] 0 containers: []
	W0704 00:12:37.967688   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:37.967694   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:37.967740   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:38.005522   62670 cri.go:89] found id: ""
	I0704 00:12:38.005557   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.005569   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:38.005576   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:38.005634   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:38.043475   62670 cri.go:89] found id: ""
	I0704 00:12:38.043505   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.043516   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:38.043524   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:38.043589   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:38.080520   62670 cri.go:89] found id: ""
	I0704 00:12:38.080548   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.080557   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:38.080563   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:38.080612   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:38.116292   62670 cri.go:89] found id: ""
	I0704 00:12:38.116322   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.116332   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:38.116338   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:38.116404   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:38.158430   62670 cri.go:89] found id: ""
	I0704 00:12:38.158468   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.158480   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:38.158489   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:38.158567   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:38.198119   62670 cri.go:89] found id: ""
	I0704 00:12:38.198150   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.198162   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:38.198172   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:38.198253   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:38.235757   62670 cri.go:89] found id: ""
	I0704 00:12:38.235784   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.235792   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:38.235800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:38.235811   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:12:35.324339   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:37.325301   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:37.162347   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:39.162620   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:39.246064   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:41.745179   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	W0704 00:12:38.329002   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:38.329026   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:38.329041   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:38.414451   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:38.414492   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:38.461058   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:38.461089   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:38.518574   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:38.518609   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:41.051653   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:41.066287   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:41.066364   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:41.106709   62670 cri.go:89] found id: ""
	I0704 00:12:41.106733   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.106747   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:41.106753   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:41.106815   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:41.144371   62670 cri.go:89] found id: ""
	I0704 00:12:41.144399   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.144410   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:41.144417   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:41.144491   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:41.183690   62670 cri.go:89] found id: ""
	I0704 00:12:41.183717   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.183727   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:41.183734   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:41.183818   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:41.219744   62670 cri.go:89] found id: ""
	I0704 00:12:41.219767   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.219777   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:41.219790   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:41.219850   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:41.259070   62670 cri.go:89] found id: ""
	I0704 00:12:41.259091   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.259098   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:41.259103   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:41.259162   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:41.297956   62670 cri.go:89] found id: ""
	I0704 00:12:41.297987   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.297995   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:41.298001   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:41.298061   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:41.335521   62670 cri.go:89] found id: ""
	I0704 00:12:41.335599   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.335616   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:41.335624   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:41.335688   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:41.374777   62670 cri.go:89] found id: ""
	I0704 00:12:41.374817   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.374838   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:41.374848   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:41.374868   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:41.426282   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:41.426324   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:41.441309   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:41.441342   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:41.518350   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:41.518373   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:41.518395   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:41.596426   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:41.596467   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:39.824742   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:42.323920   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:41.162829   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:43.662181   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:45.662641   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:43.745586   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:45.747024   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:44.139291   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:44.152300   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:44.152370   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:44.194350   62670 cri.go:89] found id: ""
	I0704 00:12:44.194380   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.194394   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:44.194401   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:44.194470   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:44.229630   62670 cri.go:89] found id: ""
	I0704 00:12:44.229657   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.229666   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:44.229671   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:44.229724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:44.271235   62670 cri.go:89] found id: ""
	I0704 00:12:44.271260   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.271269   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:44.271276   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:44.271342   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:44.336464   62670 cri.go:89] found id: ""
	I0704 00:12:44.336499   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.336509   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:44.336523   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:44.336579   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:44.379482   62670 cri.go:89] found id: ""
	I0704 00:12:44.379513   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.379524   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:44.379530   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:44.379594   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:44.417234   62670 cri.go:89] found id: ""
	I0704 00:12:44.417267   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.417278   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:44.417285   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:44.417345   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:44.454222   62670 cri.go:89] found id: ""
	I0704 00:12:44.454249   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.454259   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:44.454266   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:44.454328   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:44.491999   62670 cri.go:89] found id: ""
	I0704 00:12:44.492028   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.492039   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:44.492050   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:44.492065   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:44.543261   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:44.543298   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:44.558348   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:44.558378   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:44.640786   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:44.640805   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:44.640820   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:44.727870   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:44.727945   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:47.274461   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:47.288930   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:47.288995   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:47.329153   62670 cri.go:89] found id: ""
	I0704 00:12:47.329178   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.329189   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:47.329195   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:47.329262   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:47.366786   62670 cri.go:89] found id: ""
	I0704 00:12:47.366814   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.366825   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:47.366832   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:47.366900   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:47.404048   62670 cri.go:89] found id: ""
	I0704 00:12:47.404089   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.404098   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:47.404106   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:47.404170   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:47.440298   62670 cri.go:89] found id: ""
	I0704 00:12:47.440329   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.440341   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:47.440348   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:47.440408   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:47.478297   62670 cri.go:89] found id: ""
	I0704 00:12:47.478329   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.478340   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:47.478347   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:47.478406   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:47.514114   62670 cri.go:89] found id: ""
	I0704 00:12:47.514143   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.514152   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:47.514158   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:47.514221   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:47.558404   62670 cri.go:89] found id: ""
	I0704 00:12:47.558437   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.558449   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:47.558456   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:47.558519   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:47.602782   62670 cri.go:89] found id: ""
	I0704 00:12:47.602824   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.602834   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:47.602845   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:47.602860   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:47.655514   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:47.655556   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:47.672807   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:47.672844   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:47.763562   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:47.763583   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:47.763596   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:47.852498   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:47.852542   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:44.822923   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:46.824707   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:48.162671   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:50.664606   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:48.247464   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:50.747846   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:50.400046   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:50.413559   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:50.413621   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:50.450898   62670 cri.go:89] found id: ""
	I0704 00:12:50.450927   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.450938   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:50.450948   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:50.451002   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:50.487786   62670 cri.go:89] found id: ""
	I0704 00:12:50.487822   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.487832   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:50.487838   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:50.487923   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:50.525298   62670 cri.go:89] found id: ""
	I0704 00:12:50.525324   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.525334   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:50.525343   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:50.525409   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:50.563742   62670 cri.go:89] found id: ""
	I0704 00:12:50.563767   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.563775   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:50.563782   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:50.563839   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:50.600977   62670 cri.go:89] found id: ""
	I0704 00:12:50.601011   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.601023   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:50.601031   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:50.601105   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:50.637489   62670 cri.go:89] found id: ""
	I0704 00:12:50.637517   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.637527   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:50.637534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:50.637594   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:50.684342   62670 cri.go:89] found id: ""
	I0704 00:12:50.684371   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.684381   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:50.684389   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:50.684572   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:50.743111   62670 cri.go:89] found id: ""
	I0704 00:12:50.743143   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.743153   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:50.743163   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:50.743177   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:50.806436   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:50.806482   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:50.823559   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:50.823594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:50.892600   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:50.892629   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:50.892642   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:50.969817   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:50.969851   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:49.323144   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:51.822264   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.824409   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.161649   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:55.163049   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.245597   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:55.746766   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.512548   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:53.525835   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:53.525903   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:53.563303   62670 cri.go:89] found id: ""
	I0704 00:12:53.563335   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.563349   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:53.563356   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:53.563410   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:53.602687   62670 cri.go:89] found id: ""
	I0704 00:12:53.602720   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.602731   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:53.602739   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:53.602797   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:53.638109   62670 cri.go:89] found id: ""
	I0704 00:12:53.638141   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.638150   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:53.638158   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:53.638220   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:53.678073   62670 cri.go:89] found id: ""
	I0704 00:12:53.678096   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.678106   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:53.678114   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:53.678172   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:53.713995   62670 cri.go:89] found id: ""
	I0704 00:12:53.714028   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.714041   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:53.714049   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:53.714108   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:53.751761   62670 cri.go:89] found id: ""
	I0704 00:12:53.751783   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.751790   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:53.751796   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:53.751856   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:53.792662   62670 cri.go:89] found id: ""
	I0704 00:12:53.792692   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.792703   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:53.792710   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:53.792769   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:53.833970   62670 cri.go:89] found id: ""
	I0704 00:12:53.833999   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.834010   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:53.834021   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:53.834040   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:53.918330   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:53.918363   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:53.918380   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:53.999491   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:53.999524   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:54.042415   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:54.042451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:54.096427   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:54.096466   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:56.611252   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:56.624364   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:56.624427   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:56.662953   62670 cri.go:89] found id: ""
	I0704 00:12:56.662971   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.662978   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:56.662983   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:56.663035   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:56.700093   62670 cri.go:89] found id: ""
	I0704 00:12:56.700125   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.700136   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:56.700144   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:56.700209   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:56.737358   62670 cri.go:89] found id: ""
	I0704 00:12:56.737395   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.737405   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:56.737412   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:56.737479   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:56.772625   62670 cri.go:89] found id: ""
	I0704 00:12:56.772652   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.772663   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:56.772671   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:56.772731   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:56.810693   62670 cri.go:89] found id: ""
	I0704 00:12:56.810722   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.810731   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:56.810736   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:56.810787   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:56.851646   62670 cri.go:89] found id: ""
	I0704 00:12:56.851671   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.851678   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:56.851684   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:56.851733   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:56.894196   62670 cri.go:89] found id: ""
	I0704 00:12:56.894230   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.894240   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:56.894246   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:56.894302   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:56.935029   62670 cri.go:89] found id: ""
	I0704 00:12:56.935054   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.935062   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:56.935072   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:56.935088   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:57.017630   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:57.017658   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:57.017675   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:57.103861   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:57.103916   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:57.147466   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:57.147497   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:57.199798   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:57.199836   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:56.325738   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:58.822885   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:57.166306   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:59.663207   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:58.245373   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:00.246495   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:59.716709   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:59.731778   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:59.731849   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:59.770210   62670 cri.go:89] found id: ""
	I0704 00:12:59.770241   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.770249   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:59.770259   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:59.770319   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:59.816446   62670 cri.go:89] found id: ""
	I0704 00:12:59.816473   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.816483   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:59.816490   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:59.816570   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:59.854879   62670 cri.go:89] found id: ""
	I0704 00:12:59.854910   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.854921   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:59.854928   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:59.854978   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:59.891370   62670 cri.go:89] found id: ""
	I0704 00:12:59.891394   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.891401   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:59.891407   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:59.891467   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:59.926067   62670 cri.go:89] found id: ""
	I0704 00:12:59.926089   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.926096   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:59.926102   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:59.926158   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:59.961646   62670 cri.go:89] found id: ""
	I0704 00:12:59.961674   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.961685   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:59.961692   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:59.961770   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:59.998290   62670 cri.go:89] found id: ""
	I0704 00:12:59.998322   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.998333   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:59.998342   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:59.998408   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:00.035410   62670 cri.go:89] found id: ""
	I0704 00:13:00.035438   62670 logs.go:276] 0 containers: []
	W0704 00:13:00.035446   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:00.035455   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:00.035471   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:00.090614   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:00.090655   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:00.105228   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:00.105265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:00.188082   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:00.188121   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:00.188139   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:00.275656   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:00.275702   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:02.823447   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:02.837684   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:02.837745   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:02.875275   62670 cri.go:89] found id: ""
	I0704 00:13:02.875314   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.875324   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:02.875339   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:02.875399   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:02.910681   62670 cri.go:89] found id: ""
	I0704 00:13:02.910715   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.910727   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:02.910735   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:02.910797   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:02.948937   62670 cri.go:89] found id: ""
	I0704 00:13:02.948963   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.948972   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:02.948979   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:02.949039   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:02.984232   62670 cri.go:89] found id: ""
	I0704 00:13:02.984259   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.984267   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:02.984271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:02.984321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:03.021493   62670 cri.go:89] found id: ""
	I0704 00:13:03.021517   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.021525   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:03.021534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:03.021583   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:03.058829   62670 cri.go:89] found id: ""
	I0704 00:13:03.058860   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.058870   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:03.058877   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:03.058944   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:03.104195   62670 cri.go:89] found id: ""
	I0704 00:13:03.104225   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.104234   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:03.104242   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:03.104303   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:03.140913   62670 cri.go:89] found id: ""
	I0704 00:13:03.140941   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.140951   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:03.140961   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:03.140976   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:03.194901   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:03.194945   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:03.209366   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:03.209395   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:13:01.322711   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:03.323610   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:02.161800   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:04.162195   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:02.746479   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:05.245132   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:07.245877   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	W0704 00:13:03.292892   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:03.292916   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:03.292934   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:03.369764   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:03.369800   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:05.917514   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:05.931529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:05.931592   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:05.976164   62670 cri.go:89] found id: ""
	I0704 00:13:05.976186   62670 logs.go:276] 0 containers: []
	W0704 00:13:05.976193   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:05.976199   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:05.976258   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:06.013568   62670 cri.go:89] found id: ""
	I0704 00:13:06.013593   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.013602   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:06.013609   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:06.013678   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:06.050848   62670 cri.go:89] found id: ""
	I0704 00:13:06.050886   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.050894   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:06.050900   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:06.050958   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:06.090919   62670 cri.go:89] found id: ""
	I0704 00:13:06.090945   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.090956   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:06.090967   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:06.091016   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:06.129210   62670 cri.go:89] found id: ""
	I0704 00:13:06.129237   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.129246   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:06.129252   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:06.129304   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:06.166777   62670 cri.go:89] found id: ""
	I0704 00:13:06.166801   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.166809   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:06.166817   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:06.166878   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:06.204900   62670 cri.go:89] found id: ""
	I0704 00:13:06.204929   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.204940   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:06.204947   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:06.205008   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:06.244196   62670 cri.go:89] found id: ""
	I0704 00:13:06.244274   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.244291   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:06.244301   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:06.244317   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:06.258834   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:06.258873   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:06.339126   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:06.339151   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:06.339165   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:06.416220   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:06.416265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:06.458188   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:06.458221   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:05.824313   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:08.323361   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:06.162328   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:08.666333   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:09.248287   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:11.746215   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:09.014816   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:09.028957   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:09.029021   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:09.072427   62670 cri.go:89] found id: ""
	I0704 00:13:09.072455   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.072465   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:09.072472   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:09.072529   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:09.109630   62670 cri.go:89] found id: ""
	I0704 00:13:09.109660   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.109669   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:09.109675   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:09.109724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:09.152873   62670 cri.go:89] found id: ""
	I0704 00:13:09.152901   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.152911   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:09.152918   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:09.152976   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:09.189390   62670 cri.go:89] found id: ""
	I0704 00:13:09.189421   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.189431   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:09.189446   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:09.189515   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:09.227335   62670 cri.go:89] found id: ""
	I0704 00:13:09.227364   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.227375   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:09.227382   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:09.227444   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:09.269157   62670 cri.go:89] found id: ""
	I0704 00:13:09.269189   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.269201   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:09.269208   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:09.269259   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:09.317222   62670 cri.go:89] found id: ""
	I0704 00:13:09.317249   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.317257   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:09.317263   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:09.317324   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:09.355578   62670 cri.go:89] found id: ""
	I0704 00:13:09.355610   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.355618   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:09.355626   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:09.355637   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:09.396279   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:09.396316   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:09.451358   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:09.451398   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:09.466565   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:09.466599   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:09.545001   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:09.545043   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:09.545066   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:12.124211   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:12.139131   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:12.139229   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:12.178690   62670 cri.go:89] found id: ""
	I0704 00:13:12.178719   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.178726   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:12.178732   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:12.178783   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:12.215470   62670 cri.go:89] found id: ""
	I0704 00:13:12.215511   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.215524   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:12.215533   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:12.215620   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:12.256615   62670 cri.go:89] found id: ""
	I0704 00:13:12.256667   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.256682   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:12.256688   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:12.256740   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:12.298606   62670 cri.go:89] found id: ""
	I0704 00:13:12.298631   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.298643   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:12.298650   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:12.298730   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:12.338152   62670 cri.go:89] found id: ""
	I0704 00:13:12.338180   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.338192   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:12.338199   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:12.338260   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:12.377003   62670 cri.go:89] found id: ""
	I0704 00:13:12.377029   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.377040   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:12.377046   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:12.377095   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:12.412239   62670 cri.go:89] found id: ""
	I0704 00:13:12.412268   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.412278   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:12.412285   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:12.412361   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:12.451054   62670 cri.go:89] found id: ""
	I0704 00:13:12.451079   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.451086   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:12.451094   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:12.451111   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:12.506178   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:12.506216   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:12.520563   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:12.520594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:12.594417   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:12.594439   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:12.594455   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:12.671131   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:12.671179   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:10.323629   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:12.823056   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:11.161399   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:13.162943   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:15.661962   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:13.749962   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:16.247931   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:15.225840   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:15.239346   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:15.239420   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:15.276618   62670 cri.go:89] found id: ""
	I0704 00:13:15.276649   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.276661   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:15.276668   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:15.276751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:15.312585   62670 cri.go:89] found id: ""
	I0704 00:13:15.312615   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.312625   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:15.312632   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:15.312693   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:15.351354   62670 cri.go:89] found id: ""
	I0704 00:13:15.351382   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.351392   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:15.351399   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:15.351457   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:15.388660   62670 cri.go:89] found id: ""
	I0704 00:13:15.388690   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.388701   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:15.388708   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:15.388769   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:15.427524   62670 cri.go:89] found id: ""
	I0704 00:13:15.427553   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.427564   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:15.427572   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:15.427636   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:15.463703   62670 cri.go:89] found id: ""
	I0704 00:13:15.463737   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.463752   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:15.463761   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:15.463825   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:15.498640   62670 cri.go:89] found id: ""
	I0704 00:13:15.498664   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.498672   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:15.498676   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:15.498727   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:15.534655   62670 cri.go:89] found id: ""
	I0704 00:13:15.534679   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.534690   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:15.534700   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:15.534715   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:15.586051   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:15.586083   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:15.600930   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:15.600958   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:15.670393   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:15.670420   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:15.670435   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:15.749644   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:15.749678   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:15.324591   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:17.822616   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:17.662630   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:20.162230   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:18.746045   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:20.746946   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:18.298689   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:18.312408   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:18.312475   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:18.353509   62670 cri.go:89] found id: ""
	I0704 00:13:18.353538   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.353549   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:18.353557   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:18.353642   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:18.394463   62670 cri.go:89] found id: ""
	I0704 00:13:18.394486   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.394493   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:18.394498   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:18.394550   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:18.433254   62670 cri.go:89] found id: ""
	I0704 00:13:18.433288   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.433297   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:18.433303   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:18.433350   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:18.473369   62670 cri.go:89] found id: ""
	I0704 00:13:18.473395   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.473404   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:18.473414   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:18.473464   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:18.513401   62670 cri.go:89] found id: ""
	I0704 00:13:18.513436   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.513444   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:18.513450   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:18.513499   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:18.552462   62670 cri.go:89] found id: ""
	I0704 00:13:18.552493   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.552502   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:18.552511   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:18.552569   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:18.591368   62670 cri.go:89] found id: ""
	I0704 00:13:18.591389   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.591398   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:18.591406   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:18.591471   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:18.630381   62670 cri.go:89] found id: ""
	I0704 00:13:18.630413   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.630424   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:18.630435   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:18.630451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:18.684868   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:18.684902   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:18.700897   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:18.700921   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:18.794507   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:18.794524   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:18.794535   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:18.879415   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:18.879457   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:21.429432   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:21.443906   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:21.443978   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:21.482487   62670 cri.go:89] found id: ""
	I0704 00:13:21.482516   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.482528   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:21.482535   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:21.482583   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:21.519170   62670 cri.go:89] found id: ""
	I0704 00:13:21.519206   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.519214   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:21.519219   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:21.519265   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:21.558340   62670 cri.go:89] found id: ""
	I0704 00:13:21.558367   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.558390   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:21.558397   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:21.558465   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:21.595347   62670 cri.go:89] found id: ""
	I0704 00:13:21.595372   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.595382   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:21.595390   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:21.595464   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:21.634524   62670 cri.go:89] found id: ""
	I0704 00:13:21.634547   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.634555   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:21.634560   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:21.634622   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:21.672529   62670 cri.go:89] found id: ""
	I0704 00:13:21.672558   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.672566   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:21.672571   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:21.672617   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:21.711114   62670 cri.go:89] found id: ""
	I0704 00:13:21.711145   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.711156   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:21.711163   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:21.711248   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:21.747087   62670 cri.go:89] found id: ""
	I0704 00:13:21.747126   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.747135   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:21.747145   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:21.747162   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:21.832897   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:21.832919   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:21.832935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:21.915969   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:21.916008   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:21.957922   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:21.957950   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:22.009881   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:22.009925   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:19.823109   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:22.322313   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:22.163190   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:24.664612   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:22.747918   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:25.245707   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:24.526106   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:24.548431   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:24.548493   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:24.582887   62670 cri.go:89] found id: ""
	I0704 00:13:24.582925   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.582935   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:24.582940   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:24.582992   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:24.621339   62670 cri.go:89] found id: ""
	I0704 00:13:24.621365   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.621375   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:24.621380   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:24.621433   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:24.658124   62670 cri.go:89] found id: ""
	I0704 00:13:24.658152   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.658163   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:24.658170   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:24.658239   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:24.697509   62670 cri.go:89] found id: ""
	I0704 00:13:24.697539   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.697546   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:24.697552   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:24.697599   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:24.734523   62670 cri.go:89] found id: ""
	I0704 00:13:24.734547   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.734554   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:24.734560   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:24.734608   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:24.773351   62670 cri.go:89] found id: ""
	I0704 00:13:24.773375   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.773383   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:24.773389   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:24.773439   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:24.810855   62670 cri.go:89] found id: ""
	I0704 00:13:24.810888   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.810898   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:24.810905   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:24.810962   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:24.849989   62670 cri.go:89] found id: ""
	I0704 00:13:24.850017   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.850027   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:24.850039   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:24.850053   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:24.904308   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:24.904344   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:24.920143   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:24.920234   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:24.995138   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:24.995163   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:24.995177   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:25.070407   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:25.070449   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:27.611749   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:27.625292   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:27.625349   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:27.663239   62670 cri.go:89] found id: ""
	I0704 00:13:27.663263   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.663274   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:27.663281   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:27.663337   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:27.704354   62670 cri.go:89] found id: ""
	I0704 00:13:27.704378   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.704392   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:27.704399   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:27.704473   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:27.742585   62670 cri.go:89] found id: ""
	I0704 00:13:27.742619   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.742630   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:27.742637   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:27.742695   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:27.791650   62670 cri.go:89] found id: ""
	I0704 00:13:27.791678   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.791686   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:27.791691   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:27.791751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:27.832724   62670 cri.go:89] found id: ""
	I0704 00:13:27.832757   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.832770   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:27.832778   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:27.832865   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:27.875054   62670 cri.go:89] found id: ""
	I0704 00:13:27.875081   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.875089   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:27.875095   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:27.875142   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:27.909819   62670 cri.go:89] found id: ""
	I0704 00:13:27.909844   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.909851   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:27.909856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:27.909903   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:27.944882   62670 cri.go:89] found id: ""
	I0704 00:13:27.944907   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.944916   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:27.944923   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:27.944936   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:28.004233   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:28.004271   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:28.020800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:28.020834   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:28.096186   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:28.096213   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:28.096231   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:28.178611   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:28.178648   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:24.322656   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:26.323972   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:28.821944   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:27.161806   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:29.661580   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:27.748284   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:30.246840   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:30.729354   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:30.744298   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:30.744361   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:30.783053   62670 cri.go:89] found id: ""
	I0704 00:13:30.783081   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.783089   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:30.783095   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:30.783151   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:30.820728   62670 cri.go:89] found id: ""
	I0704 00:13:30.820756   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.820765   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:30.820770   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:30.820834   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:30.858188   62670 cri.go:89] found id: ""
	I0704 00:13:30.858221   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.858234   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:30.858242   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:30.858307   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:30.899024   62670 cri.go:89] found id: ""
	I0704 00:13:30.899049   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.899056   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:30.899062   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:30.899109   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:30.942431   62670 cri.go:89] found id: ""
	I0704 00:13:30.942461   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.942471   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:30.942479   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:30.942534   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:30.995371   62670 cri.go:89] found id: ""
	I0704 00:13:30.995402   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.995417   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:30.995425   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:30.995486   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:31.043485   62670 cri.go:89] found id: ""
	I0704 00:13:31.043516   62670 logs.go:276] 0 containers: []
	W0704 00:13:31.043524   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:31.043529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:31.043576   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:31.082408   62670 cri.go:89] found id: ""
	I0704 00:13:31.082440   62670 logs.go:276] 0 containers: []
	W0704 00:13:31.082451   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:31.082463   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:31.082477   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:31.096800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:31.096824   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:31.169116   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:31.169142   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:31.169168   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:31.250199   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:31.250230   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:31.293706   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:31.293737   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:30.822968   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:33.322607   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:31.661811   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:33.661872   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:35.662906   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:32.746786   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:35.246989   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:33.845361   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:33.859495   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:33.859586   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:33.900578   62670 cri.go:89] found id: ""
	I0704 00:13:33.900608   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.900616   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:33.900621   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:33.900668   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:33.934659   62670 cri.go:89] found id: ""
	I0704 00:13:33.934681   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.934688   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:33.934699   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:33.934745   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:33.977141   62670 cri.go:89] found id: ""
	I0704 00:13:33.977166   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.977174   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:33.977179   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:33.977230   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:34.013515   62670 cri.go:89] found id: ""
	I0704 00:13:34.013540   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.013548   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:34.013553   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:34.013600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:34.059663   62670 cri.go:89] found id: ""
	I0704 00:13:34.059690   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.059698   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:34.059703   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:34.059765   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:34.094002   62670 cri.go:89] found id: ""
	I0704 00:13:34.094030   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.094038   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:34.094044   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:34.094090   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:34.130278   62670 cri.go:89] found id: ""
	I0704 00:13:34.130310   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.130322   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:34.130330   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:34.130401   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:34.173531   62670 cri.go:89] found id: ""
	I0704 00:13:34.173557   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.173563   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:34.173570   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:34.173582   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:34.229273   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:34.229334   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:34.247043   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:34.247073   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:34.322892   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:34.322920   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:34.322935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:34.409230   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:34.409271   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:36.950627   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:36.969997   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:36.970063   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:37.027934   62670 cri.go:89] found id: ""
	I0704 00:13:37.027964   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.027975   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:37.027982   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:37.028069   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:37.067668   62670 cri.go:89] found id: ""
	I0704 00:13:37.067696   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.067706   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:37.067713   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:37.067774   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:37.104762   62670 cri.go:89] found id: ""
	I0704 00:13:37.104798   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.104806   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:37.104812   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:37.104882   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:37.143887   62670 cri.go:89] found id: ""
	I0704 00:13:37.143913   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.143921   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:37.143936   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:37.143999   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:37.182605   62670 cri.go:89] found id: ""
	I0704 00:13:37.182629   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.182636   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:37.182641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:37.182697   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:37.219884   62670 cri.go:89] found id: ""
	I0704 00:13:37.219914   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.219924   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:37.219931   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:37.219996   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:37.259122   62670 cri.go:89] found id: ""
	I0704 00:13:37.259146   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.259154   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:37.259159   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:37.259205   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:37.296218   62670 cri.go:89] found id: ""
	I0704 00:13:37.296255   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.296262   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:37.296270   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:37.296282   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:37.349495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:37.349540   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:37.364224   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:37.364255   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:37.437604   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:37.437627   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:37.437644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:37.524096   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:37.524150   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:35.823323   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:38.323653   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:38.164076   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:40.662318   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:37.745470   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:39.746119   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:41.747887   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:40.067394   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:40.081728   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:40.081787   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:40.119102   62670 cri.go:89] found id: ""
	I0704 00:13:40.119129   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.119137   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:40.119142   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:40.119195   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:40.161432   62670 cri.go:89] found id: ""
	I0704 00:13:40.161468   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.161477   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:40.161483   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:40.161542   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:40.196487   62670 cri.go:89] found id: ""
	I0704 00:13:40.196526   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.196534   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:40.196540   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:40.196591   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:40.232218   62670 cri.go:89] found id: ""
	I0704 00:13:40.232245   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.232253   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:40.232259   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:40.232306   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:40.272962   62670 cri.go:89] found id: ""
	I0704 00:13:40.272995   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.273007   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:40.273016   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:40.273079   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:40.311622   62670 cri.go:89] found id: ""
	I0704 00:13:40.311651   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.311662   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:40.311671   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:40.311737   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:40.353486   62670 cri.go:89] found id: ""
	I0704 00:13:40.353516   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.353524   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:40.353529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:40.353576   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:40.391269   62670 cri.go:89] found id: ""
	I0704 00:13:40.391299   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.391308   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:40.391318   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:40.391330   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:40.445011   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:40.445048   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:40.458982   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:40.459010   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:40.533102   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:40.533127   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:40.533140   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:40.618189   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:40.618228   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:43.162352   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:43.177336   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:43.177419   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:43.221099   62670 cri.go:89] found id: ""
	I0704 00:13:43.221127   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.221137   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:43.221144   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:43.221211   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:40.324554   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:42.822608   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:42.662723   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:45.162037   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:44.245991   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:46.746635   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:43.268528   62670 cri.go:89] found id: ""
	I0704 00:13:43.268557   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.268568   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:43.268575   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:43.268638   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:43.304343   62670 cri.go:89] found id: ""
	I0704 00:13:43.304373   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.304384   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:43.304391   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:43.304459   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:43.346128   62670 cri.go:89] found id: ""
	I0704 00:13:43.346163   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.346179   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:43.346187   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:43.346251   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:43.392622   62670 cri.go:89] found id: ""
	I0704 00:13:43.392652   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.392662   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:43.392673   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:43.392764   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:43.438725   62670 cri.go:89] found id: ""
	I0704 00:13:43.438751   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.438760   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:43.438766   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:43.438812   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:43.480356   62670 cri.go:89] found id: ""
	I0704 00:13:43.480378   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.480386   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:43.480391   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:43.480441   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:43.516551   62670 cri.go:89] found id: ""
	I0704 00:13:43.516576   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.516583   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:43.516591   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:43.516606   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:43.567568   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:43.567604   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:43.583140   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:43.583173   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:43.658841   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:43.658870   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:43.658885   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:43.737379   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:43.737419   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:46.281048   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:46.295088   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:46.295158   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:46.333107   62670 cri.go:89] found id: ""
	I0704 00:13:46.333135   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.333168   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:46.333177   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:46.333263   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:46.376375   62670 cri.go:89] found id: ""
	I0704 00:13:46.376405   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.376415   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:46.376423   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:46.376486   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:46.410809   62670 cri.go:89] found id: ""
	I0704 00:13:46.410838   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.410848   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:46.410855   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:46.410911   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:46.453114   62670 cri.go:89] found id: ""
	I0704 00:13:46.453143   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.453156   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:46.453164   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:46.453229   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:46.491218   62670 cri.go:89] found id: ""
	I0704 00:13:46.491246   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.491255   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:46.491261   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:46.491320   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:46.528669   62670 cri.go:89] found id: ""
	I0704 00:13:46.528695   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.528706   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:46.528713   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:46.528777   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:46.564289   62670 cri.go:89] found id: ""
	I0704 00:13:46.564317   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.564327   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:46.564333   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:46.564384   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:46.600821   62670 cri.go:89] found id: ""
	I0704 00:13:46.600854   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.600864   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:46.600875   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:46.600888   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:46.653816   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:46.653850   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:46.668899   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:46.668927   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:46.751414   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:46.751434   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:46.751455   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:46.831455   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:46.831489   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:44.823478   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:47.323726   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:47.663375   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:50.162358   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:49.245272   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:51.745945   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:49.378856   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:49.393930   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:49.393988   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:49.435332   62670 cri.go:89] found id: ""
	I0704 00:13:49.435355   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.435362   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:49.435368   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:49.435415   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:49.476780   62670 cri.go:89] found id: ""
	I0704 00:13:49.476807   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.476815   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:49.476820   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:49.476868   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:49.519347   62670 cri.go:89] found id: ""
	I0704 00:13:49.519379   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.519389   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:49.519396   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:49.519522   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:49.557125   62670 cri.go:89] found id: ""
	I0704 00:13:49.557150   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.557159   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:49.557166   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:49.557225   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:49.592843   62670 cri.go:89] found id: ""
	I0704 00:13:49.592883   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.592894   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:49.592901   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:49.592966   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:49.629542   62670 cri.go:89] found id: ""
	I0704 00:13:49.629565   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.629572   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:49.629578   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:49.629630   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:49.667805   62670 cri.go:89] found id: ""
	I0704 00:13:49.667833   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.667844   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:49.667851   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:49.667928   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:49.704446   62670 cri.go:89] found id: ""
	I0704 00:13:49.704472   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.704480   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:49.704494   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:49.704506   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:49.718379   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:49.718403   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:49.791293   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:49.791314   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:49.791329   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:49.870370   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:49.870408   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:49.910508   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:49.910545   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:52.463614   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:52.478642   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:52.478714   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:52.519490   62670 cri.go:89] found id: ""
	I0704 00:13:52.519519   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.519529   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:52.519535   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:52.519686   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:52.561591   62670 cri.go:89] found id: ""
	I0704 00:13:52.561622   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.561632   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:52.561639   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:52.561713   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:52.599169   62670 cri.go:89] found id: ""
	I0704 00:13:52.599196   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.599206   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:52.599212   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:52.599270   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:52.636778   62670 cri.go:89] found id: ""
	I0704 00:13:52.636811   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.636821   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:52.636828   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:52.636893   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:52.675929   62670 cri.go:89] found id: ""
	I0704 00:13:52.675965   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.675977   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:52.675985   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:52.676081   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:52.713425   62670 cri.go:89] found id: ""
	I0704 00:13:52.713455   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.713466   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:52.713474   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:52.713541   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:52.750242   62670 cri.go:89] found id: ""
	I0704 00:13:52.750267   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.750278   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:52.750286   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:52.750342   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:52.793247   62670 cri.go:89] found id: ""
	I0704 00:13:52.793277   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.793288   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:52.793298   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:52.793315   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:52.807818   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:52.807970   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:52.886856   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:52.886883   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:52.886903   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:52.973510   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:52.973551   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:53.021185   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:53.021213   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
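	The block above is one iteration of minikube's API-server wait loop: with no kube-apiserver container running, every crictl query for a control-plane component returns nothing, so the tooling falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output before retrying a few seconds later. The commands below are lifted from the log and can be re-run by hand on the node (e.g. after "minikube ssh" into the affected profile) to reproduce the same checks; the loop over component names is only an illustrative convenience, not minikube's own code.

	# check each control-plane component the same way the wait loop does
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"    # empty output matches the found id: "" lines above
	done
	# gather the same logs the loop collects
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig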
	I0704 00:13:49.825304   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:52.322850   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:52.662484   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:54.662645   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:54.246942   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:56.745800   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:55.576364   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:55.590796   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:55.590858   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:55.628753   62670 cri.go:89] found id: ""
	I0704 00:13:55.628783   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.628793   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:55.628809   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:55.628870   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:55.667344   62670 cri.go:89] found id: ""
	I0704 00:13:55.667398   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.667411   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:55.667426   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:55.667496   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:55.705826   62670 cri.go:89] found id: ""
	I0704 00:13:55.705859   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.705870   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:55.705878   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:55.705942   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:55.743204   62670 cri.go:89] found id: ""
	I0704 00:13:55.743231   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.743238   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:55.743244   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:55.743304   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:55.784945   62670 cri.go:89] found id: ""
	I0704 00:13:55.784978   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.784987   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:55.784993   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:55.785044   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:55.825266   62670 cri.go:89] found id: ""
	I0704 00:13:55.825293   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.825304   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:55.825322   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:55.825385   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:55.862235   62670 cri.go:89] found id: ""
	I0704 00:13:55.862269   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.862276   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:55.862282   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:55.862337   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:55.901698   62670 cri.go:89] found id: ""
	I0704 00:13:55.901726   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.901736   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:55.901747   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:55.901762   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:55.955322   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:55.955361   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:55.973650   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:55.973689   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:56.049600   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:56.049624   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:56.049640   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:56.133690   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:56.133731   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:54.323716   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:56.324427   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:58.823837   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:56.663246   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:59.161652   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:59.249339   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:01.747759   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:58.678014   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:58.692780   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:58.692846   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:58.730628   62670 cri.go:89] found id: ""
	I0704 00:13:58.730654   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.730664   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:58.730671   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:58.730732   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:58.772761   62670 cri.go:89] found id: ""
	I0704 00:13:58.772789   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.772800   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:58.772806   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:58.772871   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:58.809591   62670 cri.go:89] found id: ""
	I0704 00:13:58.809623   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.809637   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:58.809644   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:58.809708   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:58.848596   62670 cri.go:89] found id: ""
	I0704 00:13:58.848627   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.848638   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:58.848646   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:58.848705   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:58.888285   62670 cri.go:89] found id: ""
	I0704 00:13:58.888311   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.888318   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:58.888323   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:58.888373   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:58.924042   62670 cri.go:89] found id: ""
	I0704 00:13:58.924065   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.924073   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:58.924079   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:58.924132   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:58.963473   62670 cri.go:89] found id: ""
	I0704 00:13:58.963500   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.963510   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:58.963516   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:58.963581   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:58.998757   62670 cri.go:89] found id: ""
	I0704 00:13:58.998788   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.998798   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:58.998808   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:58.998822   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:59.013844   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:59.013871   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:59.085847   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:59.085869   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:59.085882   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:59.174056   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:59.174087   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:59.219984   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:59.220011   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:01.774436   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:01.790044   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:01.790103   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:01.830337   62670 cri.go:89] found id: ""
	I0704 00:14:01.830366   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.830376   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:01.830383   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:01.830452   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:01.866704   62670 cri.go:89] found id: ""
	I0704 00:14:01.866731   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.866740   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:01.866746   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:01.866796   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:01.906702   62670 cri.go:89] found id: ""
	I0704 00:14:01.906737   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.906748   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:01.906756   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:01.906812   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:01.943348   62670 cri.go:89] found id: ""
	I0704 00:14:01.943381   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.943392   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:01.943400   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:01.943461   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:01.984096   62670 cri.go:89] found id: ""
	I0704 00:14:01.984123   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.984131   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:01.984136   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:01.984182   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:02.021618   62670 cri.go:89] found id: ""
	I0704 00:14:02.021649   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.021659   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:02.021666   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:02.021726   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:02.058976   62670 cri.go:89] found id: ""
	I0704 00:14:02.059000   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.059008   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:02.059013   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:02.059064   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:02.097222   62670 cri.go:89] found id: ""
	I0704 00:14:02.097251   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.097258   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:02.097278   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:02.097302   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:02.183349   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:02.183391   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:02.226898   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:02.226928   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:02.286978   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:02.287016   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:02.301361   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:02.301393   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:02.375663   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:01.322516   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:03.822514   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:01.662003   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:03.665021   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:04.245713   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:06.246308   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:04.876515   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:04.891254   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:04.891324   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:04.931465   62670 cri.go:89] found id: ""
	I0704 00:14:04.931488   62670 logs.go:276] 0 containers: []
	W0704 00:14:04.931496   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:04.931501   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:04.931549   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:04.969027   62670 cri.go:89] found id: ""
	I0704 00:14:04.969055   62670 logs.go:276] 0 containers: []
	W0704 00:14:04.969063   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:04.969068   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:04.969122   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:05.006380   62670 cri.go:89] found id: ""
	I0704 00:14:05.006407   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.006423   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:05.006430   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:05.006494   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:05.043050   62670 cri.go:89] found id: ""
	I0704 00:14:05.043090   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.043105   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:05.043113   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:05.043195   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:05.081549   62670 cri.go:89] found id: ""
	I0704 00:14:05.081575   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.081583   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:05.081588   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:05.081664   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:05.126673   62670 cri.go:89] found id: ""
	I0704 00:14:05.126693   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.126700   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:05.126706   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:05.126751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:05.166832   62670 cri.go:89] found id: ""
	I0704 00:14:05.166856   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.166864   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:05.166872   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:05.166920   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:05.205906   62670 cri.go:89] found id: ""
	I0704 00:14:05.205934   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.205946   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:05.205957   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:05.205973   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:05.260955   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:05.260998   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:05.295937   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:05.295965   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:05.383161   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:05.383188   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:05.383202   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:05.465055   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:05.465100   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
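	The recurring "connection to the server localhost:8443 was refused" stderr above reflects the same condition: kubectl inside the guest points at the local kube-apiserver on port 8443, and nothing is listening because the apiserver container never came up. Two generic checks, shown only as a hedged manual sketch and not part of the test suite, make this visible directly on the node:

	# is anything listening on the apiserver port?
	sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
	# if the apiserver were up, its health endpoint would answer here
	curl -k https://localhost:8443/healthz || echo "apiserver not reachable"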
	I0704 00:14:08.007745   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:08.021065   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:08.021134   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:08.061808   62670 cri.go:89] found id: ""
	I0704 00:14:08.061838   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.061848   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:08.061854   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:08.061914   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:08.100542   62670 cri.go:89] found id: ""
	I0704 00:14:08.100573   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.100584   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:08.100592   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:08.100657   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:08.137335   62670 cri.go:89] found id: ""
	I0704 00:14:08.137369   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.137379   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:08.137385   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:08.137455   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:08.177087   62670 cri.go:89] found id: ""
	I0704 00:14:08.177116   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.177124   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:08.177129   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:08.177191   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:08.212652   62670 cri.go:89] found id: ""
	I0704 00:14:08.212686   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.212695   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:08.212701   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:08.212751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:08.247717   62670 cri.go:89] found id: ""
	I0704 00:14:08.247737   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.247745   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:08.247750   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:08.247805   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:05.824730   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.323006   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:06.160967   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.162407   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:10.163649   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.247565   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:10.745585   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.285525   62670 cri.go:89] found id: ""
	I0704 00:14:08.285556   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.285568   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:08.285576   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:08.285637   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:08.325978   62670 cri.go:89] found id: ""
	I0704 00:14:08.326007   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.326017   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:08.326027   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:08.326042   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:08.382407   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:08.382440   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:08.397945   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:08.397979   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:08.468650   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:08.468676   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:08.468691   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:08.543581   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:08.543615   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:11.085683   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:11.102003   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:11.102093   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:11.142561   62670 cri.go:89] found id: ""
	I0704 00:14:11.142589   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.142597   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:11.142602   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:11.142671   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:11.180087   62670 cri.go:89] found id: ""
	I0704 00:14:11.180110   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.180118   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:11.180123   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:11.180202   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:11.220123   62670 cri.go:89] found id: ""
	I0704 00:14:11.220147   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.220173   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:11.220182   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:11.220239   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:11.260418   62670 cri.go:89] found id: ""
	I0704 00:14:11.260445   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.260455   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:11.260462   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:11.260521   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:11.297923   62670 cri.go:89] found id: ""
	I0704 00:14:11.297976   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.297989   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:11.297999   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:11.298083   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:11.335903   62670 cri.go:89] found id: ""
	I0704 00:14:11.335934   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.335945   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:11.335954   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:11.336020   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:11.371965   62670 cri.go:89] found id: ""
	I0704 00:14:11.371997   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.372007   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:11.372013   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:11.372075   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:11.409129   62670 cri.go:89] found id: ""
	I0704 00:14:11.409159   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.409170   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:11.409181   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:11.409194   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:11.464994   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:11.465032   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:11.480084   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:11.480112   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:11.564533   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:11.564560   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:11.564574   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:11.645033   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:11.645068   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:10.323124   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:12.323251   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:12.663774   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:15.161542   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:12.746307   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:15.246158   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:14.195211   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:14.209606   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:14.209660   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:14.252041   62670 cri.go:89] found id: ""
	I0704 00:14:14.252066   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.252081   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:14.252089   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:14.252149   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:14.290619   62670 cri.go:89] found id: ""
	I0704 00:14:14.290647   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.290655   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:14.290660   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:14.290717   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:14.328731   62670 cri.go:89] found id: ""
	I0704 00:14:14.328762   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.328773   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:14.328780   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:14.328842   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:14.370794   62670 cri.go:89] found id: ""
	I0704 00:14:14.370825   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.370835   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:14.370842   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:14.370904   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:14.406474   62670 cri.go:89] found id: ""
	I0704 00:14:14.406505   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.406516   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:14.406523   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:14.406582   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:14.442515   62670 cri.go:89] found id: ""
	I0704 00:14:14.442547   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.442558   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:14.442566   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:14.442624   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:14.480798   62670 cri.go:89] found id: ""
	I0704 00:14:14.480827   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.480838   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:14.480844   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:14.480904   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:14.518187   62670 cri.go:89] found id: ""
	I0704 00:14:14.518210   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.518217   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:14.518225   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:14.518236   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:14.572028   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:14.572060   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:14.586614   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:14.586641   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:14.659339   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:14.659362   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:14.659375   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:14.743802   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:14.743839   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:17.288666   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:17.304531   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:17.304600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:17.348705   62670 cri.go:89] found id: ""
	I0704 00:14:17.348730   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.348738   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:17.348749   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:17.348798   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:17.387821   62670 cri.go:89] found id: ""
	I0704 00:14:17.387844   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.387852   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:17.387858   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:17.387934   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:17.425442   62670 cri.go:89] found id: ""
	I0704 00:14:17.425470   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.425480   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:17.425487   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:17.425545   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:17.471216   62670 cri.go:89] found id: ""
	I0704 00:14:17.471243   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.471255   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:17.471262   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:17.471321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:17.520905   62670 cri.go:89] found id: ""
	I0704 00:14:17.520935   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.520942   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:17.520947   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:17.520997   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:17.577627   62670 cri.go:89] found id: ""
	I0704 00:14:17.577648   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.577655   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:17.577661   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:17.577715   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:17.619018   62670 cri.go:89] found id: ""
	I0704 00:14:17.619046   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.619054   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:17.619061   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:17.619124   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:17.664993   62670 cri.go:89] found id: ""
	I0704 00:14:17.665020   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.665029   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:17.665037   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:17.665049   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:17.743823   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:17.743845   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:17.743857   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:17.821339   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:17.821371   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:17.866189   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:17.866226   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:17.919854   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:17.919903   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:14.823677   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:16.825187   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:17.662772   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:20.161988   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:17.748067   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:20.245022   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:22.246620   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:20.435448   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:20.450556   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:20.450617   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:20.491980   62670 cri.go:89] found id: ""
	I0704 00:14:20.492010   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.492018   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:20.492023   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:20.492071   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:20.532791   62670 cri.go:89] found id: ""
	I0704 00:14:20.532820   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.532829   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:20.532836   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:20.532892   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:20.569604   62670 cri.go:89] found id: ""
	I0704 00:14:20.569628   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.569635   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:20.569641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:20.569688   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:20.610852   62670 cri.go:89] found id: ""
	I0704 00:14:20.610879   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.610887   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:20.610893   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:20.610950   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:20.648891   62670 cri.go:89] found id: ""
	I0704 00:14:20.648912   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.648920   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:20.648925   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:20.648984   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:20.690273   62670 cri.go:89] found id: ""
	I0704 00:14:20.690304   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.690315   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:20.690323   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:20.690381   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:20.725365   62670 cri.go:89] found id: ""
	I0704 00:14:20.725390   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.725398   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:20.725403   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:20.725478   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:20.768530   62670 cri.go:89] found id: ""
	I0704 00:14:20.768559   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.768569   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:20.768579   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:20.768593   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:20.822896   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:20.822932   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:20.838881   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:20.838912   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:20.921516   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:20.921546   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:20.921560   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:20.999517   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:20.999553   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
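	Interleaved with the wait loop, three other test processes (62327, 62043, 62905) keep polling their metrics-server pods, and each poll reports the Ready condition still False. A rough manual equivalent of that check, run against the matching cluster/context, is sketched below; the pod name is taken from the log, while the k8s-app=metrics-server label selector is an assumption about the usual metrics-server labels and is not shown anywhere in this report.

	kubectl -n kube-system get pod metrics-server-569cc877fc-jpmsg \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# or, without hard-coding the pod name (assumed label):
	kubectl -n kube-system get pods -l k8s-app=metrics-server \
	  -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'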
	I0704 00:14:19.324790   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:21.822737   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:23.823039   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:22.162348   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:24.162631   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:24.745842   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:27.245280   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:23.545947   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:23.560315   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:23.560397   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:23.602540   62670 cri.go:89] found id: ""
	I0704 00:14:23.602583   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.602596   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:23.602604   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:23.602664   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:23.639529   62670 cri.go:89] found id: ""
	I0704 00:14:23.639560   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.639571   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:23.639579   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:23.639644   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:23.687334   62670 cri.go:89] found id: ""
	I0704 00:14:23.687363   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.687374   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:23.687381   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:23.687450   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:23.728388   62670 cri.go:89] found id: ""
	I0704 00:14:23.728419   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.728427   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:23.728434   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:23.728484   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:23.769903   62670 cri.go:89] found id: ""
	I0704 00:14:23.769933   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.769944   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:23.769956   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:23.770016   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:23.810485   62670 cri.go:89] found id: ""
	I0704 00:14:23.810518   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.810529   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:23.810536   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:23.810621   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:23.854534   62670 cri.go:89] found id: ""
	I0704 00:14:23.854571   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.854582   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:23.854589   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:23.854647   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:23.892229   62670 cri.go:89] found id: ""
	I0704 00:14:23.892257   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.892266   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:23.892278   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:23.892291   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:23.944758   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:23.944793   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:23.959115   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:23.959152   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:24.035480   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:24.035501   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:24.035513   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:24.113401   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:24.113447   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:26.655506   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:26.669883   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:26.669964   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:26.705899   62670 cri.go:89] found id: ""
	I0704 00:14:26.705926   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.705934   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:26.705940   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:26.705997   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:26.742991   62670 cri.go:89] found id: ""
	I0704 00:14:26.743016   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.743025   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:26.743031   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:26.743090   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:26.781650   62670 cri.go:89] found id: ""
	I0704 00:14:26.781678   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.781693   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:26.781700   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:26.781760   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:26.816879   62670 cri.go:89] found id: ""
	I0704 00:14:26.816902   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.816909   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:26.816914   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:26.816957   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:26.854271   62670 cri.go:89] found id: ""
	I0704 00:14:26.854301   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.854316   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:26.854324   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:26.854384   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:26.892771   62670 cri.go:89] found id: ""
	I0704 00:14:26.892802   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.892813   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:26.892821   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:26.892880   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:26.931820   62670 cri.go:89] found id: ""
	I0704 00:14:26.931849   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.931859   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:26.931865   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:26.931947   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:26.967633   62670 cri.go:89] found id: ""
	I0704 00:14:26.967659   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.967669   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:26.967679   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:26.967700   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:26.983916   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:26.983951   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:27.063412   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:27.063436   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:27.063451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:27.147005   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:27.147044   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:27.189732   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:27.189759   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:25.824267   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:27.826810   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:26.662688   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:28.663384   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:29.248447   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:31.745919   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:29.747294   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:29.762194   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:29.762272   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:29.799103   62670 cri.go:89] found id: ""
	I0704 00:14:29.799132   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.799142   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:29.799149   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:29.799215   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:29.843373   62670 cri.go:89] found id: ""
	I0704 00:14:29.843399   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.843407   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:29.843412   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:29.843474   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:29.880622   62670 cri.go:89] found id: ""
	I0704 00:14:29.880650   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.880660   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:29.880667   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:29.880724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:29.917560   62670 cri.go:89] found id: ""
	I0704 00:14:29.917590   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.917599   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:29.917605   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:29.917656   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:29.954983   62670 cri.go:89] found id: ""
	I0704 00:14:29.955006   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.955013   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:29.955018   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:29.955068   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:29.991784   62670 cri.go:89] found id: ""
	I0704 00:14:29.991811   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.991819   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:29.991824   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:29.991870   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:30.031174   62670 cri.go:89] found id: ""
	I0704 00:14:30.031203   62670 logs.go:276] 0 containers: []
	W0704 00:14:30.031210   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:30.031218   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:30.031268   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:30.069502   62670 cri.go:89] found id: ""
	I0704 00:14:30.069533   62670 logs.go:276] 0 containers: []
	W0704 00:14:30.069542   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:30.069552   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:30.069567   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:30.111185   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:30.111213   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:30.167419   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:30.167456   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:30.181876   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:30.181908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:30.255378   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:30.255407   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:30.255426   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:32.837655   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:32.853085   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:32.853150   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:32.898490   62670 cri.go:89] found id: ""
	I0704 00:14:32.898520   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.898531   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:32.898540   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:32.898626   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:32.946293   62670 cri.go:89] found id: ""
	I0704 00:14:32.946326   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.946336   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:32.946343   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:32.946402   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:32.983499   62670 cri.go:89] found id: ""
	I0704 00:14:32.983529   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.983540   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:32.983548   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:32.983610   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:33.022340   62670 cri.go:89] found id: ""
	I0704 00:14:33.022362   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.022370   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:33.022375   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:33.022420   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:33.066921   62670 cri.go:89] found id: ""
	I0704 00:14:33.066946   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.066956   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:33.066963   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:33.067024   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:33.116317   62670 cri.go:89] found id: ""
	I0704 00:14:33.116340   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.116348   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:33.116354   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:33.116416   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:33.153301   62670 cri.go:89] found id: ""
	I0704 00:14:33.153332   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.153343   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:33.153350   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:33.153411   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:33.190851   62670 cri.go:89] found id: ""
	I0704 00:14:33.190884   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.190896   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:33.190905   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:33.190917   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:33.248253   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:33.248288   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:30.323119   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:32.823348   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:31.161811   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:33.662270   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:34.246812   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:36.246992   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:33.263593   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:33.263620   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:33.339975   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:33.340000   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:33.340018   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:33.423768   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:33.423814   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:35.969547   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:35.984139   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:35.984219   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:36.028221   62670 cri.go:89] found id: ""
	I0704 00:14:36.028251   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.028263   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:36.028270   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:36.028330   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:36.067331   62670 cri.go:89] found id: ""
	I0704 00:14:36.067362   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.067370   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:36.067375   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:36.067437   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:36.105498   62670 cri.go:89] found id: ""
	I0704 00:14:36.105531   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.105543   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:36.105552   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:36.105618   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:36.144536   62670 cri.go:89] found id: ""
	I0704 00:14:36.144565   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.144576   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:36.144584   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:36.144652   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:36.184010   62670 cri.go:89] found id: ""
	I0704 00:14:36.184035   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.184048   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:36.184053   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:36.184099   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:36.221730   62670 cri.go:89] found id: ""
	I0704 00:14:36.221781   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.221790   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:36.221795   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:36.221843   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:36.261907   62670 cri.go:89] found id: ""
	I0704 00:14:36.261940   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.261952   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:36.261959   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:36.262009   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:36.296878   62670 cri.go:89] found id: ""
	I0704 00:14:36.296899   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.296906   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:36.296915   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:36.296926   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:36.350226   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:36.350265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:36.364632   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:36.364663   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:36.446351   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:36.446382   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:36.446400   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:36.535752   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:36.535802   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:35.322895   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:37.323357   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:36.166275   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:38.662345   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:38.745454   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:41.247351   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:39.079686   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:39.094225   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:39.094291   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:39.139521   62670 cri.go:89] found id: ""
	I0704 00:14:39.139551   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.139563   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:39.139572   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:39.139637   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:39.182411   62670 cri.go:89] found id: ""
	I0704 00:14:39.182439   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.182447   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:39.182453   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:39.182505   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:39.224135   62670 cri.go:89] found id: ""
	I0704 00:14:39.224158   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.224170   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:39.224175   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:39.224237   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:39.264800   62670 cri.go:89] found id: ""
	I0704 00:14:39.264829   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.264839   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:39.264847   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:39.264910   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:39.309072   62670 cri.go:89] found id: ""
	I0704 00:14:39.309102   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.309113   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:39.309121   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:39.309181   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:39.349790   62670 cri.go:89] found id: ""
	I0704 00:14:39.349818   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.349828   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:39.349835   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:39.349895   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:39.387062   62670 cri.go:89] found id: ""
	I0704 00:14:39.387093   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.387105   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:39.387112   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:39.387164   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:39.427503   62670 cri.go:89] found id: ""
	I0704 00:14:39.427530   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.427538   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:39.427546   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:39.427558   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:39.442049   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:39.442076   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:39.525799   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:39.525824   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:39.525840   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:39.602646   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:39.602679   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:39.645739   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:39.645772   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:42.201986   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:42.216166   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:42.216236   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:42.253124   62670 cri.go:89] found id: ""
	I0704 00:14:42.253152   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.253167   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:42.253174   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:42.253231   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:42.293398   62670 cri.go:89] found id: ""
	I0704 00:14:42.293422   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.293430   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:42.293436   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:42.293488   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:42.334382   62670 cri.go:89] found id: ""
	I0704 00:14:42.334412   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.334423   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:42.334430   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:42.334488   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:42.374792   62670 cri.go:89] found id: ""
	I0704 00:14:42.374820   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.374832   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:42.374838   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:42.374889   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:42.416220   62670 cri.go:89] found id: ""
	I0704 00:14:42.416251   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.416263   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:42.416271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:42.416331   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:42.462923   62670 cri.go:89] found id: ""
	I0704 00:14:42.462955   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.462966   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:42.462974   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:42.463043   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:42.503410   62670 cri.go:89] found id: ""
	I0704 00:14:42.503442   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.503452   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:42.503460   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:42.503528   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:42.542599   62670 cri.go:89] found id: ""
	I0704 00:14:42.542623   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.542632   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:42.542639   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:42.542652   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:42.622303   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:42.622328   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:42.622347   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:42.703629   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:42.703666   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:42.747762   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:42.747793   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:42.803506   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:42.803549   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:39.826275   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:42.323764   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:41.163336   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:43.662061   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:45.664452   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:43.745575   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:46.250310   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:45.320238   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:45.334630   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:45.334692   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:45.376760   62670 cri.go:89] found id: ""
	I0704 00:14:45.376785   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.376793   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:45.376797   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:45.376882   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:45.414165   62670 cri.go:89] found id: ""
	I0704 00:14:45.414197   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.414208   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:45.414216   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:45.414278   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:45.451469   62670 cri.go:89] found id: ""
	I0704 00:14:45.451496   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.451504   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:45.451509   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:45.451558   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:45.487994   62670 cri.go:89] found id: ""
	I0704 00:14:45.488025   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.488037   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:45.488051   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:45.488110   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:45.529430   62670 cri.go:89] found id: ""
	I0704 00:14:45.529455   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.529463   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:45.529469   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:45.529520   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:45.571848   62670 cri.go:89] found id: ""
	I0704 00:14:45.571897   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.571909   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:45.571921   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:45.571994   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:45.607804   62670 cri.go:89] found id: ""
	I0704 00:14:45.607828   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.607835   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:45.607840   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:45.607908   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:45.644183   62670 cri.go:89] found id: ""
	I0704 00:14:45.644211   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.644219   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:45.644227   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:45.644240   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:45.727677   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:45.727717   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:45.767528   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:45.767554   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:45.835243   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:45.835285   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:45.849921   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:45.849957   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:45.928404   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:44.823177   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:46.821947   62327 pod_ready.go:81] duration metric: took 4m0.006234793s for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	E0704 00:14:46.821973   62327 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0704 00:14:46.821981   62327 pod_ready.go:38] duration metric: took 4m4.549820824s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:14:46.821996   62327 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:14:46.822029   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:46.822072   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:46.884166   62327 cri.go:89] found id: "2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:46.884208   62327 cri.go:89] found id: ""
	I0704 00:14:46.884217   62327 logs.go:276] 1 containers: [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d]
	I0704 00:14:46.884293   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:46.889964   62327 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:46.890048   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:46.929569   62327 cri.go:89] found id: "e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:46.929601   62327 cri.go:89] found id: ""
	I0704 00:14:46.929609   62327 logs.go:276] 1 containers: [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864]
	I0704 00:14:46.929653   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:46.934896   62327 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:46.934969   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:46.975093   62327 cri.go:89] found id: "ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:46.975116   62327 cri.go:89] found id: ""
	I0704 00:14:46.975125   62327 logs.go:276] 1 containers: [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3]
	I0704 00:14:46.975180   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:46.979604   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:46.979663   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:47.018423   62327 cri.go:89] found id: "bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:47.018442   62327 cri.go:89] found id: ""
	I0704 00:14:47.018449   62327 logs.go:276] 1 containers: [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0]
	I0704 00:14:47.018514   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.022963   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:47.023028   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:47.067573   62327 cri.go:89] found id: "0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:47.067599   62327 cri.go:89] found id: ""
	I0704 00:14:47.067608   62327 logs.go:276] 1 containers: [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78]
	I0704 00:14:47.067657   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.072342   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:47.072426   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:47.111485   62327 cri.go:89] found id: "49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:47.111514   62327 cri.go:89] found id: ""
	I0704 00:14:47.111524   62327 logs.go:276] 1 containers: [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d]
	I0704 00:14:47.111581   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.116173   62327 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:47.116256   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:47.166673   62327 cri.go:89] found id: ""
	I0704 00:14:47.166703   62327 logs.go:276] 0 containers: []
	W0704 00:14:47.166711   62327 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:47.166717   62327 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:14:47.166771   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:14:47.209591   62327 cri.go:89] found id: "5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:47.209626   62327 cri.go:89] found id: "0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:47.209632   62327 cri.go:89] found id: ""
	I0704 00:14:47.209642   62327 logs.go:276] 2 containers: [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085]
	I0704 00:14:47.209699   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.214409   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.218745   62327 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:47.218768   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:47.762248   62327 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:47.762293   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:47.819035   62327 logs.go:123] Gathering logs for kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] ...
	I0704 00:14:47.819077   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:47.874456   62327 logs.go:123] Gathering logs for coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] ...
	I0704 00:14:47.874499   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:47.931685   62327 logs.go:123] Gathering logs for kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] ...
	I0704 00:14:47.931714   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:47.969812   62327 logs.go:123] Gathering logs for kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] ...
	I0704 00:14:47.969842   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:48.023510   62327 logs.go:123] Gathering logs for storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] ...
	I0704 00:14:48.023547   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:48.067970   62327 logs.go:123] Gathering logs for container status ...
	I0704 00:14:48.068001   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:48.121578   62327 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:48.121609   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:48.139510   62327 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:48.139535   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:14:48.264544   62327 logs.go:123] Gathering logs for etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] ...
	I0704 00:14:48.264570   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:48.329270   62327 logs.go:123] Gathering logs for kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] ...
	I0704 00:14:48.329311   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:48.371067   62327 logs.go:123] Gathering logs for storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] ...
	I0704 00:14:48.371097   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:48.162755   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:50.661630   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:48.428750   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:48.442617   62670 kubeadm.go:591] duration metric: took 4m1.823242959s to restartPrimaryControlPlane
	W0704 00:14:48.442701   62670 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0704 00:14:48.442735   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:14:51.574916   62670 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.132142314s)
	I0704 00:14:51.575001   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:14:51.593744   62670 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:14:51.607429   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:14:51.620071   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:14:51.620097   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:14:51.620151   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:14:51.633472   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:14:51.633547   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:14:51.647551   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:14:51.658795   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:14:51.658871   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:14:51.671580   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:14:51.682217   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:14:51.682291   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:14:51.693874   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:14:51.705614   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:14:51.705697   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:14:51.720386   62670 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:14:51.810530   62670 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0704 00:14:51.810597   62670 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:14:51.968629   62670 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:14:51.968735   62670 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:14:51.968851   62670 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:14:52.188159   62670 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:14:48.745609   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:50.746068   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:52.190231   62670 out.go:204]   - Generating certificates and keys ...
	I0704 00:14:52.192011   62670 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:14:52.192101   62670 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:14:52.192206   62670 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:14:52.192311   62670 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:14:52.192412   62670 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:14:52.192488   62670 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:14:52.192573   62670 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:14:52.192648   62670 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:14:52.192747   62670 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:14:52.193086   62670 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:14:52.193249   62670 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:14:52.193335   62670 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:14:52.325727   62670 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:14:52.485153   62670 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:14:52.676389   62670 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:14:52.990595   62670 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:14:53.007051   62670 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:14:53.008346   62670 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:14:53.008434   62670 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:14:53.160272   62670 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:14:53.162449   62670 out.go:204]   - Booting up control plane ...
	I0704 00:14:53.162586   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:14:53.177983   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:14:53.179996   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:14:53.180911   62670 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:14:53.183085   62670 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0704 00:14:50.909242   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:50.926516   62327 api_server.go:72] duration metric: took 4m15.870455521s to wait for apiserver process to appear ...
	I0704 00:14:50.926548   62327 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:14:50.926594   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:50.926650   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:50.969608   62327 cri.go:89] found id: "2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:50.969636   62327 cri.go:89] found id: ""
	I0704 00:14:50.969646   62327 logs.go:276] 1 containers: [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d]
	I0704 00:14:50.969711   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:50.974011   62327 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:50.974081   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:51.016808   62327 cri.go:89] found id: "e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:51.016842   62327 cri.go:89] found id: ""
	I0704 00:14:51.016858   62327 logs.go:276] 1 containers: [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864]
	I0704 00:14:51.016916   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.021297   62327 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:51.021371   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:51.061674   62327 cri.go:89] found id: "ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:51.061699   62327 cri.go:89] found id: ""
	I0704 00:14:51.061707   62327 logs.go:276] 1 containers: [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3]
	I0704 00:14:51.061761   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.066197   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:51.066256   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:51.108727   62327 cri.go:89] found id: "bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:51.108750   62327 cri.go:89] found id: ""
	I0704 00:14:51.108759   62327 logs.go:276] 1 containers: [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0]
	I0704 00:14:51.108805   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.113366   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:51.113425   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:51.156701   62327 cri.go:89] found id: "0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:51.156728   62327 cri.go:89] found id: ""
	I0704 00:14:51.156738   62327 logs.go:276] 1 containers: [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78]
	I0704 00:14:51.156803   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.162817   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:51.162891   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:51.208586   62327 cri.go:89] found id: "49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:51.208609   62327 cri.go:89] found id: ""
	I0704 00:14:51.208618   62327 logs.go:276] 1 containers: [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d]
	I0704 00:14:51.208678   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.213344   62327 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:51.213418   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:51.258697   62327 cri.go:89] found id: ""
	I0704 00:14:51.258721   62327 logs.go:276] 0 containers: []
	W0704 00:14:51.258728   62327 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:51.258733   62327 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:14:51.258783   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:14:51.301317   62327 cri.go:89] found id: "5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:51.301341   62327 cri.go:89] found id: "0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:51.301347   62327 cri.go:89] found id: ""
	I0704 00:14:51.301355   62327 logs.go:276] 2 containers: [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085]
	I0704 00:14:51.301460   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.306678   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.310993   62327 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:51.311014   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:14:51.433280   62327 logs.go:123] Gathering logs for kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] ...
	I0704 00:14:51.433313   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:51.498289   62327 logs.go:123] Gathering logs for coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] ...
	I0704 00:14:51.498325   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:51.538414   62327 logs.go:123] Gathering logs for kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] ...
	I0704 00:14:51.538449   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:51.580194   62327 logs.go:123] Gathering logs for kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] ...
	I0704 00:14:51.580232   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:51.650010   62327 logs.go:123] Gathering logs for container status ...
	I0704 00:14:51.650055   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:51.710727   62327 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:51.710768   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:51.785923   62327 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:51.785963   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:51.803951   62327 logs.go:123] Gathering logs for storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] ...
	I0704 00:14:51.803982   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:51.873020   62327 logs.go:123] Gathering logs for storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] ...
	I0704 00:14:51.873058   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:51.916694   62327 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:51.916725   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:52.378056   62327 logs.go:123] Gathering logs for etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] ...
	I0704 00:14:52.378103   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:52.436795   62327 logs.go:123] Gathering logs for kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] ...
	I0704 00:14:52.436835   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:52.662586   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:55.162992   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:52.746973   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:55.248126   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:54.977972   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:14:54.982697   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0704 00:14:54.983848   62327 api_server.go:141] control plane version: v1.30.2
	I0704 00:14:54.983868   62327 api_server.go:131] duration metric: took 4.057311938s to wait for apiserver health ...
	I0704 00:14:54.983887   62327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:14:54.983920   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:54.983972   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:55.022812   62327 cri.go:89] found id: "2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:55.022839   62327 cri.go:89] found id: ""
	I0704 00:14:55.022849   62327 logs.go:276] 1 containers: [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d]
	I0704 00:14:55.022906   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.027419   62327 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:55.027508   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:55.070889   62327 cri.go:89] found id: "e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:55.070914   62327 cri.go:89] found id: ""
	I0704 00:14:55.070924   62327 logs.go:276] 1 containers: [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864]
	I0704 00:14:55.070979   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.075970   62327 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:55.076036   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:55.121555   62327 cri.go:89] found id: "ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:55.121575   62327 cri.go:89] found id: ""
	I0704 00:14:55.121583   62327 logs.go:276] 1 containers: [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3]
	I0704 00:14:55.121627   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.126320   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:55.126378   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:55.168032   62327 cri.go:89] found id: "bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:55.168062   62327 cri.go:89] found id: ""
	I0704 00:14:55.168070   62327 logs.go:276] 1 containers: [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0]
	I0704 00:14:55.168134   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.172992   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:55.173069   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:55.215593   62327 cri.go:89] found id: "0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:55.215614   62327 cri.go:89] found id: ""
	I0704 00:14:55.215621   62327 logs.go:276] 1 containers: [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78]
	I0704 00:14:55.215668   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.220129   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:55.220203   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:55.266429   62327 cri.go:89] found id: "49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:55.266458   62327 cri.go:89] found id: ""
	I0704 00:14:55.266467   62327 logs.go:276] 1 containers: [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d]
	I0704 00:14:55.266525   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.275640   62327 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:55.275706   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:55.316569   62327 cri.go:89] found id: ""
	I0704 00:14:55.316603   62327 logs.go:276] 0 containers: []
	W0704 00:14:55.316615   62327 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:55.316622   62327 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:14:55.316682   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:14:55.354222   62327 cri.go:89] found id: "5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:55.354248   62327 cri.go:89] found id: "0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:55.354252   62327 cri.go:89] found id: ""
	I0704 00:14:55.354259   62327 logs.go:276] 2 containers: [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085]
	I0704 00:14:55.354305   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.359060   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.363522   62327 logs.go:123] Gathering logs for storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] ...
	I0704 00:14:55.363545   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:55.402950   62327 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:55.402975   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:55.826071   62327 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:55.826108   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:55.882804   62327 logs.go:123] Gathering logs for storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] ...
	I0704 00:14:55.882836   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:55.924690   62327 logs.go:123] Gathering logs for kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] ...
	I0704 00:14:55.924726   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:55.981466   62327 logs.go:123] Gathering logs for etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] ...
	I0704 00:14:55.981500   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:56.043846   62327 logs.go:123] Gathering logs for coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] ...
	I0704 00:14:56.043914   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:56.085096   62327 logs.go:123] Gathering logs for kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] ...
	I0704 00:14:56.085122   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:56.127568   62327 logs.go:123] Gathering logs for kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] ...
	I0704 00:14:56.127601   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:56.169457   62327 logs.go:123] Gathering logs for kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] ...
	I0704 00:14:56.169492   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:56.224005   62327 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:56.224039   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:56.240031   62327 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:56.240059   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:14:56.366718   62327 logs.go:123] Gathering logs for container status ...
	I0704 00:14:56.366759   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:58.924300   62327 system_pods.go:59] 8 kube-system pods found
	I0704 00:14:58.924332   62327 system_pods.go:61] "coredns-7db6d8ff4d-2bn7d" [e6d756a8-df4e-414b-b44c-32fb728c6feb] Running
	I0704 00:14:58.924339   62327 system_pods.go:61] "etcd-embed-certs-687975" [72f47af0-57ca-4529-b6e4-92f543f8ada9] Running
	I0704 00:14:58.924344   62327 system_pods.go:61] "kube-apiserver-embed-certs-687975" [e58beb1b-1984-4a9f-beb1-597cca55a0c0] Running
	I0704 00:14:58.924351   62327 system_pods.go:61] "kube-controller-manager-embed-certs-687975" [bdae6f32-b454-4a66-a3d1-a22c2a64073d] Running
	I0704 00:14:58.924355   62327 system_pods.go:61] "kube-proxy-9phtm" [6b5a4c0e-632d-4c1c-bfa7-f53448618efb] Running
	I0704 00:14:58.924360   62327 system_pods.go:61] "kube-scheduler-embed-certs-687975" [72640f73-25f5-47ec-8da0-396fc31fa653] Running
	I0704 00:14:58.924369   62327 system_pods.go:61] "metrics-server-569cc877fc-jpmsg" [e2561edc-d580-461c-acae-218e6b7a2f67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:14:58.924376   62327 system_pods.go:61] "storage-provisioner" [1ac6edec-3e4e-42bd-8848-1388594611e1] Running
	I0704 00:14:58.924384   62327 system_pods.go:74] duration metric: took 3.940490235s to wait for pod list to return data ...
	I0704 00:14:58.924392   62327 default_sa.go:34] waiting for default service account to be created ...
	I0704 00:14:58.926911   62327 default_sa.go:45] found service account: "default"
	I0704 00:14:58.926930   62327 default_sa.go:55] duration metric: took 2.52887ms for default service account to be created ...
	I0704 00:14:58.926938   62327 system_pods.go:116] waiting for k8s-apps to be running ...
	I0704 00:14:58.933142   62327 system_pods.go:86] 8 kube-system pods found
	I0704 00:14:58.933173   62327 system_pods.go:89] "coredns-7db6d8ff4d-2bn7d" [e6d756a8-df4e-414b-b44c-32fb728c6feb] Running
	I0704 00:14:58.933181   62327 system_pods.go:89] "etcd-embed-certs-687975" [72f47af0-57ca-4529-b6e4-92f543f8ada9] Running
	I0704 00:14:58.933188   62327 system_pods.go:89] "kube-apiserver-embed-certs-687975" [e58beb1b-1984-4a9f-beb1-597cca55a0c0] Running
	I0704 00:14:58.933200   62327 system_pods.go:89] "kube-controller-manager-embed-certs-687975" [bdae6f32-b454-4a66-a3d1-a22c2a64073d] Running
	I0704 00:14:58.933207   62327 system_pods.go:89] "kube-proxy-9phtm" [6b5a4c0e-632d-4c1c-bfa7-f53448618efb] Running
	I0704 00:14:58.933213   62327 system_pods.go:89] "kube-scheduler-embed-certs-687975" [72640f73-25f5-47ec-8da0-396fc31fa653] Running
	I0704 00:14:58.933225   62327 system_pods.go:89] "metrics-server-569cc877fc-jpmsg" [e2561edc-d580-461c-acae-218e6b7a2f67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:14:58.933234   62327 system_pods.go:89] "storage-provisioner" [1ac6edec-3e4e-42bd-8848-1388594611e1] Running
	I0704 00:14:58.933245   62327 system_pods.go:126] duration metric: took 6.300951ms to wait for k8s-apps to be running ...
	I0704 00:14:58.933257   62327 system_svc.go:44] waiting for kubelet service to be running ....
	I0704 00:14:58.933302   62327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:14:58.948861   62327 system_svc.go:56] duration metric: took 15.596446ms WaitForService to wait for kubelet
	I0704 00:14:58.948885   62327 kubeadm.go:576] duration metric: took 4m23.892830394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:14:58.948905   62327 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:14:58.951958   62327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:14:58.951981   62327 node_conditions.go:123] node cpu capacity is 2
	I0704 00:14:58.951991   62327 node_conditions.go:105] duration metric: took 3.081821ms to run NodePressure ...
	I0704 00:14:58.952003   62327 start.go:240] waiting for startup goroutines ...
	I0704 00:14:58.952012   62327 start.go:245] waiting for cluster config update ...
	I0704 00:14:58.952026   62327 start.go:254] writing updated cluster config ...
	I0704 00:14:58.952305   62327 ssh_runner.go:195] Run: rm -f paused
	I0704 00:14:59.001106   62327 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0704 00:14:59.003224   62327 out.go:177] * Done! kubectl is now configured to use "embed-certs-687975" cluster and "default" namespace by default
	I0704 00:14:57.163117   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:59.662680   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:57.746248   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:00.247122   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:02.161384   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:04.162095   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:02.745649   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:04.745980   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:07.245583   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:06.662618   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:08.665863   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:09.246591   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:11.745135   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:11.162596   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:13.163740   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:15.662576   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:13.745872   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:15.746141   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:18.161591   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:20.162965   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:18.245285   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:20.247546   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:22.662152   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:24.662781   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:22.745066   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:24.746068   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:25.247225   62905 pod_ready.go:81] duration metric: took 4m0.008398676s for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	E0704 00:15:25.247253   62905 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0704 00:15:25.247263   62905 pod_ready.go:38] duration metric: took 4m1.998567833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:15:25.247295   62905 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:15:25.247337   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:15:25.247393   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:15:25.305703   62905 cri.go:89] found id: "f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:25.305731   62905 cri.go:89] found id: ""
	I0704 00:15:25.305741   62905 logs.go:276] 1 containers: [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658]
	I0704 00:15:25.305811   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.311662   62905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:15:25.311740   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:15:25.359066   62905 cri.go:89] found id: "5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:25.359091   62905 cri.go:89] found id: ""
	I0704 00:15:25.359100   62905 logs.go:276] 1 containers: [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9]
	I0704 00:15:25.359157   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.364430   62905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:15:25.364512   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:15:25.411897   62905 cri.go:89] found id: "7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:25.411923   62905 cri.go:89] found id: ""
	I0704 00:15:25.411935   62905 logs.go:276] 1 containers: [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a]
	I0704 00:15:25.411991   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.416560   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:15:25.416629   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:15:25.457817   62905 cri.go:89] found id: "06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:25.457844   62905 cri.go:89] found id: ""
	I0704 00:15:25.457853   62905 logs.go:276] 1 containers: [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8]
	I0704 00:15:25.457904   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.462323   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:15:25.462392   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:15:25.502180   62905 cri.go:89] found id: "54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:25.502204   62905 cri.go:89] found id: ""
	I0704 00:15:25.502212   62905 logs.go:276] 1 containers: [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d]
	I0704 00:15:25.502256   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.506759   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:15:25.506817   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:15:25.546268   62905 cri.go:89] found id: "13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:25.546292   62905 cri.go:89] found id: ""
	I0704 00:15:25.546306   62905 logs.go:276] 1 containers: [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e]
	I0704 00:15:25.546365   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.550998   62905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:15:25.551076   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:15:25.588722   62905 cri.go:89] found id: ""
	I0704 00:15:25.588752   62905 logs.go:276] 0 containers: []
	W0704 00:15:25.588762   62905 logs.go:278] No container was found matching "kindnet"
	I0704 00:15:25.588771   62905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:15:25.588832   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:15:25.628294   62905 cri.go:89] found id: "916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:25.628328   62905 cri.go:89] found id: "ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:25.628333   62905 cri.go:89] found id: ""
	I0704 00:15:25.628339   62905 logs.go:276] 2 containers: [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2]
	I0704 00:15:25.628406   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.633517   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.639383   62905 logs.go:123] Gathering logs for kubelet ...
	I0704 00:15:25.639409   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:15:25.701468   62905 logs.go:123] Gathering logs for dmesg ...
	I0704 00:15:25.701507   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:15:25.717059   62905 logs.go:123] Gathering logs for coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] ...
	I0704 00:15:25.717089   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:25.757597   62905 logs.go:123] Gathering logs for kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] ...
	I0704 00:15:25.757624   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:25.798648   62905 logs.go:123] Gathering logs for storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] ...
	I0704 00:15:25.798679   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:25.843607   62905 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:15:25.843644   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:15:26.352356   62905 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:15:26.352403   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:15:26.510039   62905 logs.go:123] Gathering logs for kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] ...
	I0704 00:15:26.510073   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:26.563036   62905 logs.go:123] Gathering logs for etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] ...
	I0704 00:15:26.563102   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:26.606221   62905 logs.go:123] Gathering logs for kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] ...
	I0704 00:15:26.606251   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:26.650488   62905 logs.go:123] Gathering logs for kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] ...
	I0704 00:15:26.650531   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:26.704905   62905 logs.go:123] Gathering logs for storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] ...
	I0704 00:15:26.704937   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:26.743843   62905 logs.go:123] Gathering logs for container status ...
	I0704 00:15:26.743907   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:15:26.664421   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:29.160718   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:29.289651   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:15:29.313028   62905 api_server.go:72] duration metric: took 4m13.798223752s to wait for apiserver process to appear ...
	I0704 00:15:29.313062   62905 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:15:29.313101   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:15:29.313178   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:15:29.359867   62905 cri.go:89] found id: "f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:29.359900   62905 cri.go:89] found id: ""
	I0704 00:15:29.359910   62905 logs.go:276] 1 containers: [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658]
	I0704 00:15:29.359965   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.364602   62905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:15:29.364661   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:15:29.406662   62905 cri.go:89] found id: "5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:29.406690   62905 cri.go:89] found id: ""
	I0704 00:15:29.406697   62905 logs.go:276] 1 containers: [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9]
	I0704 00:15:29.406744   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.413217   62905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:15:29.413305   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:15:29.450066   62905 cri.go:89] found id: "7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:29.450093   62905 cri.go:89] found id: ""
	I0704 00:15:29.450102   62905 logs.go:276] 1 containers: [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a]
	I0704 00:15:29.450163   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.454966   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:15:29.455025   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:15:29.496445   62905 cri.go:89] found id: "06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:29.496465   62905 cri.go:89] found id: ""
	I0704 00:15:29.496471   62905 logs.go:276] 1 containers: [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8]
	I0704 00:15:29.496515   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.501125   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:15:29.501198   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:15:29.543841   62905 cri.go:89] found id: "54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:29.543864   62905 cri.go:89] found id: ""
	I0704 00:15:29.543884   62905 logs.go:276] 1 containers: [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d]
	I0704 00:15:29.543940   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.548613   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:15:29.548673   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:15:29.588709   62905 cri.go:89] found id: "13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:29.588729   62905 cri.go:89] found id: ""
	I0704 00:15:29.588735   62905 logs.go:276] 1 containers: [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e]
	I0704 00:15:29.588780   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.593039   62905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:15:29.593098   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:15:29.631751   62905 cri.go:89] found id: ""
	I0704 00:15:29.631775   62905 logs.go:276] 0 containers: []
	W0704 00:15:29.631782   62905 logs.go:278] No container was found matching "kindnet"
	I0704 00:15:29.631787   62905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:15:29.631841   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:15:29.674894   62905 cri.go:89] found id: "916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:29.674918   62905 cri.go:89] found id: "ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:29.674922   62905 cri.go:89] found id: ""
	I0704 00:15:29.674929   62905 logs.go:276] 2 containers: [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2]
	I0704 00:15:29.674983   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.679600   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.683770   62905 logs.go:123] Gathering logs for kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] ...
	I0704 00:15:29.683788   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:29.731148   62905 logs.go:123] Gathering logs for etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] ...
	I0704 00:15:29.731182   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:29.772172   62905 logs.go:123] Gathering logs for storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] ...
	I0704 00:15:29.772204   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:29.816299   62905 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:15:29.816332   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:15:30.222578   62905 logs.go:123] Gathering logs for kubelet ...
	I0704 00:15:30.222622   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:15:30.284120   62905 logs.go:123] Gathering logs for dmesg ...
	I0704 00:15:30.284169   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:15:30.300219   62905 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:15:30.300260   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:15:30.423779   62905 logs.go:123] Gathering logs for kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] ...
	I0704 00:15:30.423851   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:30.480952   62905 logs.go:123] Gathering logs for storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] ...
	I0704 00:15:30.480993   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:30.526318   62905 logs.go:123] Gathering logs for container status ...
	I0704 00:15:30.526352   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:15:30.574984   62905 logs.go:123] Gathering logs for coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] ...
	I0704 00:15:30.575012   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:30.618244   62905 logs.go:123] Gathering logs for kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] ...
	I0704 00:15:30.618275   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:30.657625   62905 logs.go:123] Gathering logs for kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] ...
	I0704 00:15:30.657649   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:33.184160   62670 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0704 00:15:33.184894   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:33.185105   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:15:31.162060   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:33.162393   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:35.164111   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:33.197007   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:15:33.201786   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 200:
	ok
	I0704 00:15:33.202719   62905 api_server.go:141] control plane version: v1.30.2
	I0704 00:15:33.202738   62905 api_server.go:131] duration metric: took 3.889668496s to wait for apiserver health ...
	I0704 00:15:33.202745   62905 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:15:33.202772   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:15:33.202825   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:15:33.246224   62905 cri.go:89] found id: "f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:33.246259   62905 cri.go:89] found id: ""
	I0704 00:15:33.246272   62905 logs.go:276] 1 containers: [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658]
	I0704 00:15:33.246343   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.256081   62905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:15:33.256160   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:15:33.296808   62905 cri.go:89] found id: "5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:33.296835   62905 cri.go:89] found id: ""
	I0704 00:15:33.296845   62905 logs.go:276] 1 containers: [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9]
	I0704 00:15:33.296902   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.301658   62905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:15:33.301729   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:15:33.353348   62905 cri.go:89] found id: "7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:33.353370   62905 cri.go:89] found id: ""
	I0704 00:15:33.353377   62905 logs.go:276] 1 containers: [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a]
	I0704 00:15:33.353428   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.358334   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:15:33.358413   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:15:33.402593   62905 cri.go:89] found id: "06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:33.402621   62905 cri.go:89] found id: ""
	I0704 00:15:33.402630   62905 logs.go:276] 1 containers: [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8]
	I0704 00:15:33.402696   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.407413   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:15:33.407482   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:15:33.461567   62905 cri.go:89] found id: "54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:33.461591   62905 cri.go:89] found id: ""
	I0704 00:15:33.461599   62905 logs.go:276] 1 containers: [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d]
	I0704 00:15:33.461663   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.467039   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:15:33.467115   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:15:33.510115   62905 cri.go:89] found id: "13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:33.510146   62905 cri.go:89] found id: ""
	I0704 00:15:33.510155   62905 logs.go:276] 1 containers: [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e]
	I0704 00:15:33.510215   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.515217   62905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:15:33.515281   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:15:33.554690   62905 cri.go:89] found id: ""
	I0704 00:15:33.554719   62905 logs.go:276] 0 containers: []
	W0704 00:15:33.554729   62905 logs.go:278] No container was found matching "kindnet"
	I0704 00:15:33.554737   62905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:15:33.554790   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:15:33.601911   62905 cri.go:89] found id: "916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:33.601937   62905 cri.go:89] found id: "ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:33.601944   62905 cri.go:89] found id: ""
	I0704 00:15:33.601952   62905 logs.go:276] 2 containers: [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2]
	I0704 00:15:33.602016   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.606884   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.611328   62905 logs.go:123] Gathering logs for etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] ...
	I0704 00:15:33.611356   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:33.657445   62905 logs.go:123] Gathering logs for coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] ...
	I0704 00:15:33.657484   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:33.698153   62905 logs.go:123] Gathering logs for kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] ...
	I0704 00:15:33.698185   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:33.740393   62905 logs.go:123] Gathering logs for kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] ...
	I0704 00:15:33.740425   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:33.781017   62905 logs.go:123] Gathering logs for kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] ...
	I0704 00:15:33.781048   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:33.844822   62905 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:15:33.844857   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:15:33.966652   62905 logs.go:123] Gathering logs for kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] ...
	I0704 00:15:33.966689   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:34.022085   62905 logs.go:123] Gathering logs for storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] ...
	I0704 00:15:34.022123   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:34.063492   62905 logs.go:123] Gathering logs for storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] ...
	I0704 00:15:34.063515   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:34.102349   62905 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:15:34.102379   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:15:34.472244   62905 logs.go:123] Gathering logs for container status ...
	I0704 00:15:34.472282   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:15:34.525394   62905 logs.go:123] Gathering logs for kubelet ...
	I0704 00:15:34.525427   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:15:34.581994   62905 logs.go:123] Gathering logs for dmesg ...
	I0704 00:15:34.582040   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:15:37.108663   62905 system_pods.go:59] 8 kube-system pods found
	I0704 00:15:37.108698   62905 system_pods.go:61] "coredns-7db6d8ff4d-jmq4s" [f9725f92-7635-4111-bf63-66dbef0155b2] Running
	I0704 00:15:37.108705   62905 system_pods.go:61] "etcd-default-k8s-diff-port-995404" [d5065c53-cda8-4c79-9d88-de10341356a8] Running
	I0704 00:15:37.108710   62905 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-995404" [81395c0d-d19c-4d3d-8935-c35a9507abdd] Running
	I0704 00:15:37.108716   62905 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-995404" [36828d21-843b-409c-a0b3-60293cb50c27] Running
	I0704 00:15:37.108723   62905 system_pods.go:61] "kube-proxy-pplqq" [3b74a8c2-1e91-449d-9be9-8891459dccbc] Running
	I0704 00:15:37.108728   62905 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-995404" [e87cc576-7a8d-43dd-9778-b13d751976be] Running
	I0704 00:15:37.108734   62905 system_pods.go:61] "metrics-server-569cc877fc-v8qw2" [d6a67fb7-5004-4c93-9023-fc470f786ae9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:15:37.108739   62905 system_pods.go:61] "storage-provisioner" [3adc3ff6-282f-4f53-879f-c73d71c76b74] Running
	I0704 00:15:37.108746   62905 system_pods.go:74] duration metric: took 3.905995932s to wait for pod list to return data ...
	I0704 00:15:37.108756   62905 default_sa.go:34] waiting for default service account to be created ...
	I0704 00:15:37.112853   62905 default_sa.go:45] found service account: "default"
	I0704 00:15:37.112885   62905 default_sa.go:55] duration metric: took 4.115587ms for default service account to be created ...
	I0704 00:15:37.112897   62905 system_pods.go:116] waiting for k8s-apps to be running ...
	I0704 00:15:37.119709   62905 system_pods.go:86] 8 kube-system pods found
	I0704 00:15:37.119743   62905 system_pods.go:89] "coredns-7db6d8ff4d-jmq4s" [f9725f92-7635-4111-bf63-66dbef0155b2] Running
	I0704 00:15:37.119749   62905 system_pods.go:89] "etcd-default-k8s-diff-port-995404" [d5065c53-cda8-4c79-9d88-de10341356a8] Running
	I0704 00:15:37.119754   62905 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-995404" [81395c0d-d19c-4d3d-8935-c35a9507abdd] Running
	I0704 00:15:37.119759   62905 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-995404" [36828d21-843b-409c-a0b3-60293cb50c27] Running
	I0704 00:15:37.119765   62905 system_pods.go:89] "kube-proxy-pplqq" [3b74a8c2-1e91-449d-9be9-8891459dccbc] Running
	I0704 00:15:37.119769   62905 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-995404" [e87cc576-7a8d-43dd-9778-b13d751976be] Running
	I0704 00:15:37.119776   62905 system_pods.go:89] "metrics-server-569cc877fc-v8qw2" [d6a67fb7-5004-4c93-9023-fc470f786ae9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:15:37.119782   62905 system_pods.go:89] "storage-provisioner" [3adc3ff6-282f-4f53-879f-c73d71c76b74] Running
	I0704 00:15:37.119791   62905 system_pods.go:126] duration metric: took 6.888276ms to wait for k8s-apps to be running ...
	I0704 00:15:37.119798   62905 system_svc.go:44] waiting for kubelet service to be running ....
	I0704 00:15:37.119855   62905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:15:37.138387   62905 system_svc.go:56] duration metric: took 18.578212ms WaitForService to wait for kubelet
	I0704 00:15:37.138430   62905 kubeadm.go:576] duration metric: took 4m21.623631424s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:15:37.138450   62905 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:15:37.141610   62905 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:15:37.141632   62905 node_conditions.go:123] node cpu capacity is 2
	I0704 00:15:37.141642   62905 node_conditions.go:105] duration metric: took 3.187777ms to run NodePressure ...
	I0704 00:15:37.141654   62905 start.go:240] waiting for startup goroutines ...
	I0704 00:15:37.141662   62905 start.go:245] waiting for cluster config update ...
	I0704 00:15:37.141675   62905 start.go:254] writing updated cluster config ...
	I0704 00:15:37.141954   62905 ssh_runner.go:195] Run: rm -f paused
	I0704 00:15:37.193685   62905 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0704 00:15:37.196118   62905 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-995404" cluster and "default" namespace by default
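The run for "default-k8s-diff-port-995404" finishes here with kubectl already pointed at the new profile. As a quick manual sanity check of that state (a sketch assuming shell access to the same kubeconfig; not part of the test harness), one could run:

	kubectl config use-context default-k8s-diff-port-995404   # context name taken from the log above
	kubectl get nodes -o wide                                  # the node should report Ready
	kubectl -n kube-system get pods                            # the 8 kube-system pods listed above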
	I0704 00:15:38.185821   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:38.186070   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:15:37.662971   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:40.161724   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:42.162761   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:44.661578   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:48.186610   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:48.186866   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:15:46.661793   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:48.662395   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:51.161671   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:53.161831   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:55.162342   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:57.162917   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:58.655566   62043 pod_ready.go:81] duration metric: took 4m0.000513164s for pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace to be "Ready" ...
	E0704 00:15:58.655607   62043 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0704 00:15:58.655629   62043 pod_ready.go:38] duration metric: took 4m12.325655973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:15:58.655653   62043 kubeadm.go:591] duration metric: took 4m19.340193897s to restartPrimaryControlPlane
	W0704 00:15:58.655707   62043 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0704 00:15:58.655731   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:16:08.187652   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:16:08.187954   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:16:30.729510   62043 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.073753748s)
	I0704 00:16:30.729594   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:16:30.747332   62043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:16:30.758903   62043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:16:30.769754   62043 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:16:30.769782   62043 kubeadm.go:156] found existing configuration files:
	
	I0704 00:16:30.769834   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:16:30.783216   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:16:30.783292   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:16:30.794254   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:16:30.804395   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:16:30.804456   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:16:30.816148   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:16:30.826591   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:16:30.826658   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:16:30.837473   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:16:30.847334   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:16:30.847423   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
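The four grep/rm pairs above are minikube's stale-config check: each kubeconfig-style file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise so kubeadm regenerates it. A condensed shell sketch of the same pattern (endpoint and file list copied from the log; illustrative only):

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # remove the file when it is missing or does not point at the expected endpoint
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done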
	I0704 00:16:30.859291   62043 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:16:31.068598   62043 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:16:39.927189   62043 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0704 00:16:39.927297   62043 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:16:39.927381   62043 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:16:39.927496   62043 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:16:39.927641   62043 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:16:39.927747   62043 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:16:39.929258   62043 out.go:204]   - Generating certificates and keys ...
	I0704 00:16:39.929332   62043 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:16:39.929422   62043 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:16:39.929546   62043 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:16:39.929631   62043 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:16:39.929715   62043 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:16:39.929781   62043 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:16:39.929883   62043 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:16:39.929983   62043 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:16:39.930088   62043 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:16:39.930191   62043 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:16:39.930258   62043 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:16:39.930346   62043 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:16:39.930428   62043 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:16:39.930521   62043 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0704 00:16:39.930604   62043 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:16:39.930691   62043 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:16:39.930784   62043 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:16:39.930889   62043 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:16:39.930980   62043 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:16:39.933368   62043 out.go:204]   - Booting up control plane ...
	I0704 00:16:39.933482   62043 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:16:39.933577   62043 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:16:39.933657   62043 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:16:39.933769   62043 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:16:39.933857   62043 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:16:39.933920   62043 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:16:39.934046   62043 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0704 00:16:39.934156   62043 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0704 00:16:39.934219   62043 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.004952327s
	I0704 00:16:39.934310   62043 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0704 00:16:39.934393   62043 kubeadm.go:309] [api-check] The API server is healthy after 5.002935516s
	I0704 00:16:39.934509   62043 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0704 00:16:39.934646   62043 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0704 00:16:39.934725   62043 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0704 00:16:39.934894   62043 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-317739 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0704 00:16:39.934979   62043 kubeadm.go:309] [bootstrap-token] Using token: 6e60zb.ppocm8st59m5ngyp
	I0704 00:16:39.936353   62043 out.go:204]   - Configuring RBAC rules ...
	I0704 00:16:39.936457   62043 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0704 00:16:39.936546   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0704 00:16:39.936726   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0704 00:16:39.936866   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0704 00:16:39.936999   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0704 00:16:39.937127   62043 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0704 00:16:39.937268   62043 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0704 00:16:39.937339   62043 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0704 00:16:39.937398   62043 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0704 00:16:39.937407   62043 kubeadm.go:309] 
	I0704 00:16:39.937486   62043 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0704 00:16:39.937500   62043 kubeadm.go:309] 
	I0704 00:16:39.937589   62043 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0704 00:16:39.937598   62043 kubeadm.go:309] 
	I0704 00:16:39.937628   62043 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0704 00:16:39.937704   62043 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0704 00:16:39.937770   62043 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0704 00:16:39.937779   62043 kubeadm.go:309] 
	I0704 00:16:39.937870   62043 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0704 00:16:39.937884   62043 kubeadm.go:309] 
	I0704 00:16:39.937953   62043 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0704 00:16:39.937966   62043 kubeadm.go:309] 
	I0704 00:16:39.938045   62043 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0704 00:16:39.938151   62043 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0704 00:16:39.938248   62043 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0704 00:16:39.938257   62043 kubeadm.go:309] 
	I0704 00:16:39.938373   62043 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0704 00:16:39.938469   62043 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0704 00:16:39.938483   62043 kubeadm.go:309] 
	I0704 00:16:39.938602   62043 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6e60zb.ppocm8st59m5ngyp \
	I0704 00:16:39.938721   62043 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 \
	I0704 00:16:39.938740   62043 kubeadm.go:309] 	--control-plane 
	I0704 00:16:39.938746   62043 kubeadm.go:309] 
	I0704 00:16:39.938820   62043 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0704 00:16:39.938829   62043 kubeadm.go:309] 
	I0704 00:16:39.938898   62043 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6e60zb.ppocm8st59m5ngyp \
	I0704 00:16:39.939042   62043 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 
	I0704 00:16:39.939066   62043 cni.go:84] Creating CNI manager for ""
	I0704 00:16:39.939074   62043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:16:39.940769   62043 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:16:39.941987   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:16:39.956586   62043 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
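The 496-byte /etc/cni/net.d/1-k8s.conflist written here is not reproduced in the log. Purely as an assumption, a minimal bridge-plugin conflist of the kind this step installs typically looks like the following (field values are hypothetical, not the file minikube actually wrote):

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF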
	I0704 00:16:39.980480   62043 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:16:39.980534   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:39.980553   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-317739 minikube.k8s.io/updated_at=2024_07_04T00_16_39_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e minikube.k8s.io/name=no-preload-317739 minikube.k8s.io/primary=true
	I0704 00:16:40.010512   62043 ops.go:34] apiserver oom_adj: -16
	I0704 00:16:40.194381   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:40.695349   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:41.195310   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:41.695082   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:42.194751   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:42.694568   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:43.195382   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:43.695072   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:44.195353   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:44.695020   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:45.195396   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:45.695273   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:48.189618   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:16:48.189879   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:16:48.189893   62670 kubeadm.go:309] 
	I0704 00:16:48.189956   62670 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0704 00:16:48.190000   62670 kubeadm.go:309] 		timed out waiting for the condition
	I0704 00:16:48.190006   62670 kubeadm.go:309] 
	I0704 00:16:48.190074   62670 kubeadm.go:309] 	This error is likely caused by:
	I0704 00:16:48.190142   62670 kubeadm.go:309] 		- The kubelet is not running
	I0704 00:16:48.190322   62670 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0704 00:16:48.190356   62670 kubeadm.go:309] 
	I0704 00:16:48.190487   62670 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0704 00:16:48.190540   62670 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0704 00:16:48.190594   62670 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0704 00:16:48.190603   62670 kubeadm.go:309] 
	I0704 00:16:48.190729   62670 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0704 00:16:48.190826   62670 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0704 00:16:48.190837   62670 kubeadm.go:309] 
	I0704 00:16:48.190930   62670 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0704 00:16:48.191004   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0704 00:16:48.191088   62670 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0704 00:16:48.191183   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0704 00:16:48.191195   62670 kubeadm.go:309] 
	I0704 00:16:48.192106   62670 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:16:48.192225   62670 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0704 00:16:48.192330   62670 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0704 00:16:48.192450   62670 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
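The failed v1.20.0 init above ends with kubeadm's own troubleshooting advice. Collected into one sequence (these are the same commands quoted in the error text, to be run on the failing node), the triage amounts to:

	systemctl status kubelet --no-pager
	journalctl -xeu kubelet --no-pager | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then inspect the failing container: sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID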
	
	I0704 00:16:48.192496   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:16:48.668935   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:16:48.685425   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:16:48.697089   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:16:48.697111   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:16:48.697182   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:16:48.708605   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:16:48.708681   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:16:48.720739   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:16:48.733032   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:16:48.733106   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:16:48.745632   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:16:48.756211   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:16:48.756285   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:16:48.768006   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:16:48.779384   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:16:48.779455   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:16:48.791913   62670 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:16:48.873701   62670 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0704 00:16:48.873789   62670 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:16:49.029961   62670 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:16:49.030077   62670 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:16:49.030191   62670 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:16:49.228954   62670 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:16:49.231477   62670 out.go:204]   - Generating certificates and keys ...
	I0704 00:16:49.231594   62670 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:16:49.231678   62670 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:16:49.231783   62670 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:16:49.231855   62670 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:16:49.231990   62670 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:16:49.232082   62670 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:16:49.232167   62670 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:16:49.232930   62670 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:16:49.234476   62670 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:16:49.235558   62670 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:16:49.235951   62670 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:16:49.236048   62670 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:16:49.418256   62670 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:16:49.476591   62670 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:16:49.586596   62670 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:16:49.856731   62670 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:16:49.878852   62670 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:16:49.885877   62670 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:16:49.885948   62670 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:16:50.048252   62670 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:16:46.194714   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:46.695192   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:47.195476   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:47.694768   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:48.194497   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:48.695370   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:49.194707   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:49.695417   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:50.194404   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:50.694941   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:50.050273   62670 out.go:204]   - Booting up control plane ...
	I0704 00:16:50.050428   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:16:50.055514   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:16:50.056609   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:16:50.057448   62670 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:16:50.060021   62670 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0704 00:16:51.194471   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:51.695481   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:52.194406   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:52.695193   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:53.194613   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:53.695053   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:53.812778   62043 kubeadm.go:1107] duration metric: took 13.832294794s to wait for elevateKubeSystemPrivileges
	W0704 00:16:53.812817   62043 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0704 00:16:53.812828   62043 kubeadm.go:393] duration metric: took 5m14.556024253s to StartCluster
	I0704 00:16:53.812849   62043 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:16:53.812944   62043 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:16:53.815420   62043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:16:53.815750   62043 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.109 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:16:53.815862   62043 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:16:53.815956   62043 addons.go:69] Setting storage-provisioner=true in profile "no-preload-317739"
	I0704 00:16:53.815987   62043 addons.go:234] Setting addon storage-provisioner=true in "no-preload-317739"
	I0704 00:16:53.815990   62043 config.go:182] Loaded profile config "no-preload-317739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	W0704 00:16:53.815998   62043 addons.go:243] addon storage-provisioner should already be in state true
	I0704 00:16:53.816029   62043 host.go:66] Checking if "no-preload-317739" exists ...
	I0704 00:16:53.816023   62043 addons.go:69] Setting default-storageclass=true in profile "no-preload-317739"
	I0704 00:16:53.816052   62043 addons.go:69] Setting metrics-server=true in profile "no-preload-317739"
	I0704 00:16:53.816063   62043 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-317739"
	I0704 00:16:53.816091   62043 addons.go:234] Setting addon metrics-server=true in "no-preload-317739"
	W0704 00:16:53.816104   62043 addons.go:243] addon metrics-server should already be in state true
	I0704 00:16:53.816139   62043 host.go:66] Checking if "no-preload-317739" exists ...
	I0704 00:16:53.816491   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.816512   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.816491   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.816561   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.816590   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.816605   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.817558   62043 out.go:177] * Verifying Kubernetes components...
	I0704 00:16:53.818908   62043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:16:53.836028   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I0704 00:16:53.836591   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.837131   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.837162   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.837199   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37289
	I0704 00:16:53.837270   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39901
	I0704 00:16:53.837613   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.837621   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.837980   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.838004   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.838066   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.838265   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.838302   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.838330   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.838533   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.838555   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.838612   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.838911   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.839349   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.839374   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.842221   62043 addons.go:234] Setting addon default-storageclass=true in "no-preload-317739"
	W0704 00:16:53.842240   62043 addons.go:243] addon default-storageclass should already be in state true
	I0704 00:16:53.842267   62043 host.go:66] Checking if "no-preload-317739" exists ...
	I0704 00:16:53.842587   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.842606   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.854293   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36813
	I0704 00:16:53.855044   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.855658   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.855675   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.856226   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.856425   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.858286   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42143
	I0704 00:16:53.858484   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:16:53.858667   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.859270   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.859293   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.859815   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.860358   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.860380   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.860383   62043 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0704 00:16:53.861890   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0704 00:16:53.861914   62043 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0704 00:16:53.861937   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:16:53.864121   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36239
	I0704 00:16:53.864570   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.865343   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.865366   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.865859   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.866064   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.866282   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.866379   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:16:53.866407   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.866572   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:16:53.866780   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:16:53.866996   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:16:53.867166   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:16:53.868067   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:16:53.869898   62043 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:16:53.871321   62043 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:16:53.871339   62043 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:16:53.871355   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:16:53.874930   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.875361   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:16:53.875392   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.875623   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:16:53.875841   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:16:53.876024   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:16:53.876184   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:16:53.880965   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45769
	I0704 00:16:53.881655   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.882115   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.882130   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.882471   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.882659   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.884596   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:16:53.884855   62043 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:16:53.884866   62043 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:16:53.884879   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:16:53.887764   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.888336   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:16:53.888371   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.888411   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:16:53.888619   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:16:53.888749   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:16:53.888849   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:16:54.097387   62043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:16:54.122578   62043 node_ready.go:35] waiting up to 6m0s for node "no-preload-317739" to be "Ready" ...
	I0704 00:16:54.136010   62043 node_ready.go:49] node "no-preload-317739" has status "Ready":"True"
	I0704 00:16:54.136036   62043 node_ready.go:38] duration metric: took 13.422954ms for node "no-preload-317739" to be "Ready" ...
	I0704 00:16:54.136048   62043 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:16:54.141532   62043 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:54.200381   62043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:16:54.234350   62043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:16:54.284641   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0704 00:16:54.284664   62043 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0704 00:16:54.346056   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0704 00:16:54.346081   62043 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0704 00:16:54.424564   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:16:54.424593   62043 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0704 00:16:54.496088   62043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
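	[editor's note] The addon step above copies each manifest onto the node and then applies it with the node-local kubectl binary and the cluster kubeconfig. A minimal sketch of that apply step, assuming the paths seen in the log and omitting the sudo/ssh wrapping that minikube's ssh_runner adds:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests runs kubectl with the cluster kubeconfig and applies each
// addon manifest, mirroring the logged "kubectl apply -f ..." invocation.
func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Paths taken from the log above; adjust for your environment.
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.30.2/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		},
	)
	if err != nil {
		fmt.Println(err)
	}
}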
	I0704 00:16:54.977271   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977304   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977308   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977327   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977603   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:54.977640   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977640   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977647   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:54.977654   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:54.977657   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977663   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977665   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977710   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:54.977756   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977935   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977947   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:54.977959   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:54.977991   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977999   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.037104   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:55.037130   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:55.037591   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:55.037626   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:55.037639   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.331464   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:55.331492   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:55.331859   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:55.331895   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.331903   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:55.331911   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:55.331926   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:55.332178   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:55.332245   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:55.332262   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.332280   62043 addons.go:475] Verifying addon metrics-server=true in "no-preload-317739"
	I0704 00:16:55.334057   62043 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0704 00:16:55.335756   62043 addons.go:510] duration metric: took 1.519891021s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0704 00:16:56.152756   62043 pod_ready.go:102] pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace has status "Ready":"False"
	I0704 00:16:56.650840   62043 pod_ready.go:92] pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.650866   62043 pod_ready.go:81] duration metric: took 2.509305019s for pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.650876   62043 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qnrtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.656253   62043 pod_ready.go:92] pod "coredns-7db6d8ff4d-qnrtm" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.656276   62043 pod_ready.go:81] duration metric: took 5.391742ms for pod "coredns-7db6d8ff4d-qnrtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.656285   62043 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.661076   62043 pod_ready.go:92] pod "etcd-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.661097   62043 pod_ready.go:81] duration metric: took 4.806155ms for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.661105   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.666895   62043 pod_ready.go:92] pod "kube-apiserver-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.666923   62043 pod_ready.go:81] duration metric: took 5.809974ms for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.666936   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.671252   62043 pod_ready.go:92] pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.671277   62043 pod_ready.go:81] duration metric: took 4.332286ms for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.671289   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xxfrd" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.046037   62043 pod_ready.go:92] pod "kube-proxy-xxfrd" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:57.046062   62043 pod_ready.go:81] duration metric: took 374.766496ms for pod "kube-proxy-xxfrd" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.046072   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.446038   62043 pod_ready.go:92] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:57.446063   62043 pod_ready.go:81] duration metric: took 399.983632ms for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.446071   62043 pod_ready.go:38] duration metric: took 3.310013568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
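	[editor's note] The per-pod "Ready" waits above can be reproduced with a small polling loop around kubectl. A sketch, assuming kubectl is on PATH and pointed at the same cluster; pod name, namespace, and timeout are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodReady polls the pod's Ready condition until it reports "True" or the
// timeout expires, much like the pod_ready waits in the log above.
func waitPodReady(namespace, pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod, "-o", jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, pod, timeout)
}

func main() {
	if err := waitPodReady("kube-system", "etcd-no-preload-317739", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}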
	I0704 00:16:57.446085   62043 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:16:57.446131   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:16:57.461033   62043 api_server.go:72] duration metric: took 3.645241569s to wait for apiserver process to appear ...
	I0704 00:16:57.461057   62043 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:16:57.461075   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:16:57.465509   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 200:
	ok
	I0704 00:16:57.466733   62043 api_server.go:141] control plane version: v1.30.2
	I0704 00:16:57.466755   62043 api_server.go:131] duration metric: took 5.690997ms to wait for apiserver health ...
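	[editor's note] The healthz check above is a plain HTTPS GET against the apiserver endpoint, which answers "ok" with status 200 when healthy. A minimal sketch; certificate verification is skipped here purely for illustration because the probe targets the node IP rather than a name covered by a trusted cert:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues the same kind of request as the log's healthz check and
// prints the status code plus body ("ok" when the apiserver is healthy).
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification is for this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.61.109:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}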
	I0704 00:16:57.466764   62043 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:16:57.651488   62043 system_pods.go:59] 9 kube-system pods found
	I0704 00:16:57.651514   62043 system_pods.go:61] "coredns-7db6d8ff4d-cxq59" [a7d3a64b-8d7d-455c-990c-0e496f8cf461] Running
	I0704 00:16:57.651519   62043 system_pods.go:61] "coredns-7db6d8ff4d-qnrtm" [3ab51a52-571c-4533-86d2-7293368ac2ee] Running
	I0704 00:16:57.651522   62043 system_pods.go:61] "etcd-no-preload-317739" [d1cdc7d3-b6b9-4fbf-a9a4-94309df12aec] Running
	I0704 00:16:57.651525   62043 system_pods.go:61] "kube-apiserver-no-preload-317739" [6caeae1f-258b-4d8f-8922-73eb94eb92cb] Running
	I0704 00:16:57.651528   62043 system_pods.go:61] "kube-controller-manager-no-preload-317739" [28b1a875-8c1d-41a3-8b0b-6e1b7a584a03] Running
	I0704 00:16:57.651531   62043 system_pods.go:61] "kube-proxy-xxfrd" [29b1b3ed-9c18-4fae-bf43-5da22cf90f6b] Running
	I0704 00:16:57.651533   62043 system_pods.go:61] "kube-scheduler-no-preload-317739" [bdb0442c-9e42-4e09-93cc-0e8dc067eaff] Running
	I0704 00:16:57.651541   62043 system_pods.go:61] "metrics-server-569cc877fc-t28ff" [942f97bf-57cf-46fe-9a10-4a4171357239] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:16:57.651549   62043 system_pods.go:61] "storage-provisioner" [d2ab9324-5df0-4232-aef4-be29bfc4c082] Running
	I0704 00:16:57.651559   62043 system_pods.go:74] duration metric: took 184.788668ms to wait for pod list to return data ...
	I0704 00:16:57.651573   62043 default_sa.go:34] waiting for default service account to be created ...
	I0704 00:16:57.845632   62043 default_sa.go:45] found service account: "default"
	I0704 00:16:57.845665   62043 default_sa.go:55] duration metric: took 194.081328ms for default service account to be created ...
	I0704 00:16:57.845678   62043 system_pods.go:116] waiting for k8s-apps to be running ...
	I0704 00:16:58.050844   62043 system_pods.go:86] 9 kube-system pods found
	I0704 00:16:58.050873   62043 system_pods.go:89] "coredns-7db6d8ff4d-cxq59" [a7d3a64b-8d7d-455c-990c-0e496f8cf461] Running
	I0704 00:16:58.050878   62043 system_pods.go:89] "coredns-7db6d8ff4d-qnrtm" [3ab51a52-571c-4533-86d2-7293368ac2ee] Running
	I0704 00:16:58.050882   62043 system_pods.go:89] "etcd-no-preload-317739" [d1cdc7d3-b6b9-4fbf-a9a4-94309df12aec] Running
	I0704 00:16:58.050887   62043 system_pods.go:89] "kube-apiserver-no-preload-317739" [6caeae1f-258b-4d8f-8922-73eb94eb92cb] Running
	I0704 00:16:58.050891   62043 system_pods.go:89] "kube-controller-manager-no-preload-317739" [28b1a875-8c1d-41a3-8b0b-6e1b7a584a03] Running
	I0704 00:16:58.050896   62043 system_pods.go:89] "kube-proxy-xxfrd" [29b1b3ed-9c18-4fae-bf43-5da22cf90f6b] Running
	I0704 00:16:58.050900   62043 system_pods.go:89] "kube-scheduler-no-preload-317739" [bdb0442c-9e42-4e09-93cc-0e8dc067eaff] Running
	I0704 00:16:58.050906   62043 system_pods.go:89] "metrics-server-569cc877fc-t28ff" [942f97bf-57cf-46fe-9a10-4a4171357239] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:16:58.050911   62043 system_pods.go:89] "storage-provisioner" [d2ab9324-5df0-4232-aef4-be29bfc4c082] Running
	I0704 00:16:58.050918   62043 system_pods.go:126] duration metric: took 205.235998ms to wait for k8s-apps to be running ...
	I0704 00:16:58.050925   62043 system_svc.go:44] waiting for kubelet service to be running ....
	I0704 00:16:58.050969   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:16:58.066005   62043 system_svc.go:56] duration metric: took 15.072089ms WaitForService to wait for kubelet
	I0704 00:16:58.066036   62043 kubeadm.go:576] duration metric: took 4.250246725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:16:58.066060   62043 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:16:58.245974   62043 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:16:58.245998   62043 node_conditions.go:123] node cpu capacity is 2
	I0704 00:16:58.246009   62043 node_conditions.go:105] duration metric: took 179.943846ms to run NodePressure ...
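	[editor's note] The NodePressure step reads the capacity figures each node reports (ephemeral storage, cpu). The same numbers can be pulled with kubectl; a sketch, assuming kubectl is on PATH:

package main

import (
	"fmt"
	"os/exec"
)

// printNodeCapacity prints each node's reported capacity map (cpu, memory,
// ephemeral-storage, pods), the same figures the NodePressure check reads.
func printNodeCapacity() error {
	jsonpath := `jsonpath={range .items[*]}{.metadata.name}{"\t"}{.status.capacity}{"\n"}{end}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", jsonpath).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl get nodes failed: %v\n%s", err, out)
	}
	fmt.Print(string(out))
	return nil
}

func main() {
	if err := printNodeCapacity(); err != nil {
		fmt.Println(err)
	}
}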
	I0704 00:16:58.246020   62043 start.go:240] waiting for startup goroutines ...
	I0704 00:16:58.246026   62043 start.go:245] waiting for cluster config update ...
	I0704 00:16:58.246036   62043 start.go:254] writing updated cluster config ...
	I0704 00:16:58.246307   62043 ssh_runner.go:195] Run: rm -f paused
	I0704 00:16:58.298998   62043 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0704 00:16:58.301199   62043 out.go:177] * Done! kubectl is now configured to use "no-preload-317739" cluster and "default" namespace by default
	I0704 00:17:30.062515   62670 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0704 00:17:30.062908   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:30.063105   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:17:35.063408   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:35.063668   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:17:45.064118   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:45.064391   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:05.065047   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:18:05.065263   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:45.064458   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:18:45.064676   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:45.064703   62670 kubeadm.go:309] 
	I0704 00:18:45.064756   62670 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0704 00:18:45.064825   62670 kubeadm.go:309] 		timed out waiting for the condition
	I0704 00:18:45.064842   62670 kubeadm.go:309] 
	I0704 00:18:45.064918   62670 kubeadm.go:309] 	This error is likely caused by:
	I0704 00:18:45.064954   62670 kubeadm.go:309] 		- The kubelet is not running
	I0704 00:18:45.065086   62670 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0704 00:18:45.065110   62670 kubeadm.go:309] 
	I0704 00:18:45.065271   62670 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0704 00:18:45.065326   62670 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0704 00:18:45.065392   62670 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0704 00:18:45.065401   62670 kubeadm.go:309] 
	I0704 00:18:45.065550   62670 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0704 00:18:45.065631   62670 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0704 00:18:45.065638   62670 kubeadm.go:309] 
	I0704 00:18:45.065734   62670 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0704 00:18:45.065807   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0704 00:18:45.065871   62670 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0704 00:18:45.065939   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0704 00:18:45.065947   62670 kubeadm.go:309] 
	I0704 00:18:45.066520   62670 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:18:45.066601   62670 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0704 00:18:45.066689   62670 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
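	[editor's note] The repeated [kubelet-check] failures above come from kubeadm probing the kubelet's local healthz endpoint (port 10248 by default); "connection refused" means no kubelet is listening. A sketch of that probe, to be run on the node itself:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeKubelet performs the same check kubeadm reports on: an HTTP GET against
// the kubelet's healthz endpoint on localhost. A connection-refused error here
// means the kubelet process is not up or not listening, matching the log above.
func probeKubelet() {
	client := &http.Client{Timeout: 3 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		fmt.Println("kubelet healthz failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
}

func main() {
	probeKubelet()
}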
	I0704 00:18:45.066780   62670 kubeadm.go:393] duration metric: took 7m58.506286251s to StartCluster
	I0704 00:18:45.066839   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:18:45.066927   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:18:45.120297   62670 cri.go:89] found id: ""
	I0704 00:18:45.120326   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.120334   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:18:45.120339   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:18:45.120402   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:18:45.158038   62670 cri.go:89] found id: ""
	I0704 00:18:45.158064   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.158074   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:18:45.158082   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:18:45.158138   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:18:45.195937   62670 cri.go:89] found id: ""
	I0704 00:18:45.195967   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.195978   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:18:45.195985   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:18:45.196043   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:18:45.236822   62670 cri.go:89] found id: ""
	I0704 00:18:45.236842   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.236850   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:18:45.236856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:18:45.236901   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:18:45.277811   62670 cri.go:89] found id: ""
	I0704 00:18:45.277840   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.277848   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:18:45.277854   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:18:45.277915   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:18:45.318942   62670 cri.go:89] found id: ""
	I0704 00:18:45.318972   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.318984   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:18:45.318994   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:18:45.319058   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:18:45.360745   62670 cri.go:89] found id: ""
	I0704 00:18:45.360772   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.360780   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:18:45.360785   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:18:45.360843   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:18:45.405336   62670 cri.go:89] found id: ""
	I0704 00:18:45.405359   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.405370   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:18:45.405381   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:18:45.405400   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:18:45.514196   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:18:45.514237   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:18:45.560207   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:18:45.560235   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:18:45.615066   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:18:45.615113   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:18:45.630701   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:18:45.630731   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:18:45.725249   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0704 00:18:45.725281   62670 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0704 00:18:45.725315   62670 out.go:239] * 
	W0704 00:18:45.725360   62670 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0704 00:18:45.725383   62670 out.go:239] * 
	W0704 00:18:45.726603   62670 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0704 00:18:45.729981   62670 out.go:177] 
	W0704 00:18:45.731124   62670 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0704 00:18:45.731169   62670 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0704 00:18:45.731186   62670 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0704 00:18:45.732514   62670 out.go:177] 
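	[editor's note] The troubleshooting advice in the error above boils down to reading the kubelet journal and listing control-plane containers with crictl. A sketch that wraps those two commands exactly as suggested in the log; it must run as root on the node, and the crio socket path is the one named above:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell pipeline on the node and prints its combined output,
// mirroring the troubleshooting commands suggested by kubeadm above.
func run(pipeline string) {
	out, err := exec.Command("/bin/bash", "-c", pipeline).CombinedOutput()
	fmt.Printf("$ %s\n%s", pipeline, out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	// Recent kubelet journal entries.
	run("journalctl -xeu kubelet")
	// Control-plane containers known to CRI-O.
	run("crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause")
}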
	
	
	==> CRI-O <==
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.535792017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052760535707098,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d275f1f-a2b4-4410-b6c7-00b3cb030d30 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.536835242Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b235149e-b784-4caa-824e-4813c20d95dd name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.536907039Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b235149e-b784-4caa-824e-4813c20d95dd name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.537178279Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:371d42757ac36ff0a39a40001774bf32a4115011cd70c197c6f71236f005ede1,PodSandboxId:f5c17ce5c643ff57ae3fe018cbe9feecb02b3889c4d51b1ff508790ea6fb56d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720052216105381996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cxq59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7d3a64b-8d7d-455c-990c-0e496f8cf461,},Annotations:map[string]string{io.kubernetes.container.hash: 71137ca2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480bfbc294ac72f0f3e25c3a311d5e8a9fe68947e44dd051ddfa2072440b746b,PodSandboxId:00f3815cdb9f3e1bcfd9d2c5f4422a136e56a8cfc56f024dd1f6ef01a957fbeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720052216069423465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3ab51a52-571c-4533-86d2-7293368ac2ee,},Annotations:map[string]string{io.kubernetes.container.hash: 343089df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889c5e0513c8f01cc594fefe4055db41826b29d898c94b1955cc0b3c983afe3d,PodSandboxId:8e763d22e756237dd508186131e8abad0552c2a06c800defaeeebd7554595c02,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1720052215483738332,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2ab9324-5df0-4232-aef4-be29bfc4c082,},Annotations:map[string]string{io.kubernetes.container.hash: a3e4758e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7e00135c8477fc491f1313c947f9a9ee937ea34895d5a7ac15f5ff47efa7a0,PodSandboxId:b1e9e2f510050da8e1af01d2081a433bbf4b1b82098620695a241b12ffda4149,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:
1720052214489433899,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xxfrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b1b3ed-9c18-4fae-bf43-5da22cf90f6b,},Annotations:map[string]string{io.kubernetes.container.hash: eb45e4c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9cbb6b523ab4a602415e29b81bb0bf151695eb30de1766e56d97f66f8a0035,PodSandboxId:117db7ef072f5cce883883b304f5c4dc6df84e85f44f054155175377acf16091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720052193793394538,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9dbd84be584645d7d7cbf56ca9e1fd3,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed6771156740826c8a68cf164f11c6b55eb62ee724278a38f2223b7e1c6a60e9,PodSandboxId:6b88f294de11b6e2cb2a1839426bce06f550a8a57a961b48b7f89d97227cf920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720052193749626581,Labels:map[str
ing]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87138dc334b907dd15d64d032a857ef7,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3fcbe487cac06135e867a57d1f709ba605d918a3f2fc6847def560544757352,PodSandboxId:44c290d680031bb3b48a8aa3106904230c559e1832817ae895807a702d185816,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720052193729827689,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fc9f6aeac4a8fa13272a598076b772,},Annotations:map[string]string{io.kubernetes.container.hash: b7c2fdd6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5a2a6c13e28d8519a7ba32ea530fe4c3ff1182cfce22be0e8acf1db3d1d7b7,PodSandboxId:1045b0af6be22039b5ece99d8f791a586abefe4b3a4f1285df6a5ef21c13aa79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720052193657751317,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f96666f6162c21b4d226c2216289e21,},Annotations:map[string]string{io.kubernetes.container.hash: 59fbd90a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df936d7e00e9322e90a0b31f31414a672951454f67442a8c77f4a7a5a798815,PodSandboxId:d8988b536bf328141796c143795604a7b72126c67ae626c4370697661fe75866,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051901526112497,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f96666f6162c21b4d226c2216289e21,},Annotations:map[string]string{io.kubernetes.container.hash: 59fbd90a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b235149e-b784-4caa-824e-4813c20d95dd name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.585622738Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0bde0ad4-5846-4cad-baf8-81439f49605e name=/runtime.v1.RuntimeService/Version
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.585703607Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0bde0ad4-5846-4cad-baf8-81439f49605e name=/runtime.v1.RuntimeService/Version
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.586863535Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97bf1389-8f76-465a-bc26-c2e3f752d66b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.587229811Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052760587207352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97bf1389-8f76-465a-bc26-c2e3f752d66b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.587940015Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c324cf88-0e0f-4a23-83c8-db39bd4890a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.587991237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c324cf88-0e0f-4a23-83c8-db39bd4890a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.588200193Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:371d42757ac36ff0a39a40001774bf32a4115011cd70c197c6f71236f005ede1,PodSandboxId:f5c17ce5c643ff57ae3fe018cbe9feecb02b3889c4d51b1ff508790ea6fb56d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720052216105381996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cxq59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7d3a64b-8d7d-455c-990c-0e496f8cf461,},Annotations:map[string]string{io.kubernetes.container.hash: 71137ca2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480bfbc294ac72f0f3e25c3a311d5e8a9fe68947e44dd051ddfa2072440b746b,PodSandboxId:00f3815cdb9f3e1bcfd9d2c5f4422a136e56a8cfc56f024dd1f6ef01a957fbeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720052216069423465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3ab51a52-571c-4533-86d2-7293368ac2ee,},Annotations:map[string]string{io.kubernetes.container.hash: 343089df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889c5e0513c8f01cc594fefe4055db41826b29d898c94b1955cc0b3c983afe3d,PodSandboxId:8e763d22e756237dd508186131e8abad0552c2a06c800defaeeebd7554595c02,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1720052215483738332,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2ab9324-5df0-4232-aef4-be29bfc4c082,},Annotations:map[string]string{io.kubernetes.container.hash: a3e4758e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7e00135c8477fc491f1313c947f9a9ee937ea34895d5a7ac15f5ff47efa7a0,PodSandboxId:b1e9e2f510050da8e1af01d2081a433bbf4b1b82098620695a241b12ffda4149,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:
1720052214489433899,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xxfrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b1b3ed-9c18-4fae-bf43-5da22cf90f6b,},Annotations:map[string]string{io.kubernetes.container.hash: eb45e4c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9cbb6b523ab4a602415e29b81bb0bf151695eb30de1766e56d97f66f8a0035,PodSandboxId:117db7ef072f5cce883883b304f5c4dc6df84e85f44f054155175377acf16091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720052193793394538,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9dbd84be584645d7d7cbf56ca9e1fd3,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed6771156740826c8a68cf164f11c6b55eb62ee724278a38f2223b7e1c6a60e9,PodSandboxId:6b88f294de11b6e2cb2a1839426bce06f550a8a57a961b48b7f89d97227cf920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720052193749626581,Labels:map[str
ing]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87138dc334b907dd15d64d032a857ef7,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3fcbe487cac06135e867a57d1f709ba605d918a3f2fc6847def560544757352,PodSandboxId:44c290d680031bb3b48a8aa3106904230c559e1832817ae895807a702d185816,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720052193729827689,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fc9f6aeac4a8fa13272a598076b772,},Annotations:map[string]string{io.kubernetes.container.hash: b7c2fdd6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5a2a6c13e28d8519a7ba32ea530fe4c3ff1182cfce22be0e8acf1db3d1d7b7,PodSandboxId:1045b0af6be22039b5ece99d8f791a586abefe4b3a4f1285df6a5ef21c13aa79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720052193657751317,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f96666f6162c21b4d226c2216289e21,},Annotations:map[string]string{io.kubernetes.container.hash: 59fbd90a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df936d7e00e9322e90a0b31f31414a672951454f67442a8c77f4a7a5a798815,PodSandboxId:d8988b536bf328141796c143795604a7b72126c67ae626c4370697661fe75866,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051901526112497,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f96666f6162c21b4d226c2216289e21,},Annotations:map[string]string{io.kubernetes.container.hash: 59fbd90a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c324cf88-0e0f-4a23-83c8-db39bd4890a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.634494451Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eec19e2e-a066-42b5-95d3-be2b26f187c3 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.634565794Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eec19e2e-a066-42b5-95d3-be2b26f187c3 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.635732068Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24f6d11a-67d2-4349-8579-7bce17e33c50 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.636062640Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052760636036211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24f6d11a-67d2-4349-8579-7bce17e33c50 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.636699710Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c207f0a-b60b-4566-8db8-36ab216bd9b9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.636752347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c207f0a-b60b-4566-8db8-36ab216bd9b9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.636950038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:371d42757ac36ff0a39a40001774bf32a4115011cd70c197c6f71236f005ede1,PodSandboxId:f5c17ce5c643ff57ae3fe018cbe9feecb02b3889c4d51b1ff508790ea6fb56d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720052216105381996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cxq59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7d3a64b-8d7d-455c-990c-0e496f8cf461,},Annotations:map[string]string{io.kubernetes.container.hash: 71137ca2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480bfbc294ac72f0f3e25c3a311d5e8a9fe68947e44dd051ddfa2072440b746b,PodSandboxId:00f3815cdb9f3e1bcfd9d2c5f4422a136e56a8cfc56f024dd1f6ef01a957fbeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720052216069423465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3ab51a52-571c-4533-86d2-7293368ac2ee,},Annotations:map[string]string{io.kubernetes.container.hash: 343089df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889c5e0513c8f01cc594fefe4055db41826b29d898c94b1955cc0b3c983afe3d,PodSandboxId:8e763d22e756237dd508186131e8abad0552c2a06c800defaeeebd7554595c02,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1720052215483738332,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2ab9324-5df0-4232-aef4-be29bfc4c082,},Annotations:map[string]string{io.kubernetes.container.hash: a3e4758e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7e00135c8477fc491f1313c947f9a9ee937ea34895d5a7ac15f5ff47efa7a0,PodSandboxId:b1e9e2f510050da8e1af01d2081a433bbf4b1b82098620695a241b12ffda4149,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:
1720052214489433899,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xxfrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b1b3ed-9c18-4fae-bf43-5da22cf90f6b,},Annotations:map[string]string{io.kubernetes.container.hash: eb45e4c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9cbb6b523ab4a602415e29b81bb0bf151695eb30de1766e56d97f66f8a0035,PodSandboxId:117db7ef072f5cce883883b304f5c4dc6df84e85f44f054155175377acf16091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720052193793394538,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9dbd84be584645d7d7cbf56ca9e1fd3,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed6771156740826c8a68cf164f11c6b55eb62ee724278a38f2223b7e1c6a60e9,PodSandboxId:6b88f294de11b6e2cb2a1839426bce06f550a8a57a961b48b7f89d97227cf920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720052193749626581,Labels:map[str
ing]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87138dc334b907dd15d64d032a857ef7,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3fcbe487cac06135e867a57d1f709ba605d918a3f2fc6847def560544757352,PodSandboxId:44c290d680031bb3b48a8aa3106904230c559e1832817ae895807a702d185816,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720052193729827689,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fc9f6aeac4a8fa13272a598076b772,},Annotations:map[string]string{io.kubernetes.container.hash: b7c2fdd6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5a2a6c13e28d8519a7ba32ea530fe4c3ff1182cfce22be0e8acf1db3d1d7b7,PodSandboxId:1045b0af6be22039b5ece99d8f791a586abefe4b3a4f1285df6a5ef21c13aa79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720052193657751317,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f96666f6162c21b4d226c2216289e21,},Annotations:map[string]string{io.kubernetes.container.hash: 59fbd90a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df936d7e00e9322e90a0b31f31414a672951454f67442a8c77f4a7a5a798815,PodSandboxId:d8988b536bf328141796c143795604a7b72126c67ae626c4370697661fe75866,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051901526112497,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f96666f6162c21b4d226c2216289e21,},Annotations:map[string]string{io.kubernetes.container.hash: 59fbd90a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c207f0a-b60b-4566-8db8-36ab216bd9b9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.676492116Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8403c5e8-79ea-4b13-a08d-df941b2cc3b5 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.676563850Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8403c5e8-79ea-4b13-a08d-df941b2cc3b5 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.677715688Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6ec7a31-d9cf-471a-b3d5-5c62c20ce6b3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.678037863Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052760678018372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6ec7a31-d9cf-471a-b3d5-5c62c20ce6b3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.678506161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fba25209-cdda-4f54-824f-c475c1d73ccf name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.678558684Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fba25209-cdda-4f54-824f-c475c1d73ccf name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:26:00 no-preload-317739 crio[728]: time="2024-07-04 00:26:00.678753916Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:371d42757ac36ff0a39a40001774bf32a4115011cd70c197c6f71236f005ede1,PodSandboxId:f5c17ce5c643ff57ae3fe018cbe9feecb02b3889c4d51b1ff508790ea6fb56d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720052216105381996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cxq59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7d3a64b-8d7d-455c-990c-0e496f8cf461,},Annotations:map[string]string{io.kubernetes.container.hash: 71137ca2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480bfbc294ac72f0f3e25c3a311d5e8a9fe68947e44dd051ddfa2072440b746b,PodSandboxId:00f3815cdb9f3e1bcfd9d2c5f4422a136e56a8cfc56f024dd1f6ef01a957fbeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720052216069423465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3ab51a52-571c-4533-86d2-7293368ac2ee,},Annotations:map[string]string{io.kubernetes.container.hash: 343089df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889c5e0513c8f01cc594fefe4055db41826b29d898c94b1955cc0b3c983afe3d,PodSandboxId:8e763d22e756237dd508186131e8abad0552c2a06c800defaeeebd7554595c02,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1720052215483738332,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2ab9324-5df0-4232-aef4-be29bfc4c082,},Annotations:map[string]string{io.kubernetes.container.hash: a3e4758e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7e00135c8477fc491f1313c947f9a9ee937ea34895d5a7ac15f5ff47efa7a0,PodSandboxId:b1e9e2f510050da8e1af01d2081a433bbf4b1b82098620695a241b12ffda4149,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:
1720052214489433899,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xxfrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b1b3ed-9c18-4fae-bf43-5da22cf90f6b,},Annotations:map[string]string{io.kubernetes.container.hash: eb45e4c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9cbb6b523ab4a602415e29b81bb0bf151695eb30de1766e56d97f66f8a0035,PodSandboxId:117db7ef072f5cce883883b304f5c4dc6df84e85f44f054155175377acf16091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720052193793394538,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9dbd84be584645d7d7cbf56ca9e1fd3,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed6771156740826c8a68cf164f11c6b55eb62ee724278a38f2223b7e1c6a60e9,PodSandboxId:6b88f294de11b6e2cb2a1839426bce06f550a8a57a961b48b7f89d97227cf920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720052193749626581,Labels:map[str
ing]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87138dc334b907dd15d64d032a857ef7,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3fcbe487cac06135e867a57d1f709ba605d918a3f2fc6847def560544757352,PodSandboxId:44c290d680031bb3b48a8aa3106904230c559e1832817ae895807a702d185816,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720052193729827689,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fc9f6aeac4a8fa13272a598076b772,},Annotations:map[string]string{io.kubernetes.container.hash: b7c2fdd6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5a2a6c13e28d8519a7ba32ea530fe4c3ff1182cfce22be0e8acf1db3d1d7b7,PodSandboxId:1045b0af6be22039b5ece99d8f791a586abefe4b3a4f1285df6a5ef21c13aa79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720052193657751317,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f96666f6162c21b4d226c2216289e21,},Annotations:map[string]string{io.kubernetes.container.hash: 59fbd90a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df936d7e00e9322e90a0b31f31414a672951454f67442a8c77f4a7a5a798815,PodSandboxId:d8988b536bf328141796c143795604a7b72126c67ae626c4370697661fe75866,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051901526112497,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f96666f6162c21b4d226c2216289e21,},Annotations:map[string]string{io.kubernetes.container.hash: 59fbd90a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fba25209-cdda-4f54-824f-c475c1d73ccf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	371d42757ac36       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   f5c17ce5c643f       coredns-7db6d8ff4d-cxq59
	480bfbc294ac7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   00f3815cdb9f3       coredns-7db6d8ff4d-qnrtm
	889c5e0513c8f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   8e763d22e7562       storage-provisioner
	2b7e00135c847       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   9 minutes ago       Running             kube-proxy                0                   b1e9e2f510050       kube-proxy-xxfrd
	fa9cbb6b523ab       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   9 minutes ago       Running             kube-scheduler            2                   117db7ef072f5       kube-scheduler-no-preload-317739
	ed67711567408       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   9 minutes ago       Running             kube-controller-manager   2                   6b88f294de11b       kube-controller-manager-no-preload-317739
	c3fcbe487cac0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   44c290d680031       etcd-no-preload-317739
	3b5a2a6c13e28       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   9 minutes ago       Running             kube-apiserver            2                   1045b0af6be22       kube-apiserver-no-preload-317739
	0df936d7e00e9       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   14 minutes ago      Exited              kube-apiserver            1                   d8988b536bf32       kube-apiserver-no-preload-317739
	
	
	==> coredns [371d42757ac36ff0a39a40001774bf32a4115011cd70c197c6f71236f005ede1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [480bfbc294ac72f0f3e25c3a311d5e8a9fe68947e44dd051ddfa2072440b746b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-317739
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-317739
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=no-preload-317739
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_04T00_16_39_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Jul 2024 00:16:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-317739
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Jul 2024 00:25:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Jul 2024 00:22:05 +0000   Thu, 04 Jul 2024 00:16:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Jul 2024 00:22:05 +0000   Thu, 04 Jul 2024 00:16:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Jul 2024 00:22:05 +0000   Thu, 04 Jul 2024 00:16:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Jul 2024 00:22:05 +0000   Thu, 04 Jul 2024 00:16:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.109
	  Hostname:    no-preload-317739
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfb1cdef80504e9e81cd486f42ed0de7
	  System UUID:                dfb1cdef-8050-4e9e-81cd-486f42ed0de7
	  Boot ID:                    5289cb08-edbf-4259-8b56-94051faf5bf5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-cxq59                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-qnrtm                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-no-preload-317739                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-no-preload-317739             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-no-preload-317739    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-xxfrd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-no-preload-317739             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 metrics-server-569cc877fc-t28ff              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m5s   kube-proxy       
	  Normal  Starting                 9m21s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s  kubelet          Node no-preload-317739 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s  kubelet          Node no-preload-317739 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s  kubelet          Node no-preload-317739 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s   node-controller  Node no-preload-317739 event: Registered Node no-preload-317739 in Controller
	
	
	==> dmesg <==
	[  +0.054330] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042688] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.830672] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.543725] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.426757] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.546439] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.128183] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.201597] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.197764] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.351520] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[ +16.972251] systemd-fstab-generator[1234]: Ignoring "noauto" option for root device
	[  +0.056786] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.874245] systemd-fstab-generator[1357]: Ignoring "noauto" option for root device
	[  +5.656372] kauditd_printk_skb: 100 callbacks suppressed
	[ +11.272056] kauditd_printk_skb: 84 callbacks suppressed
	[Jul 4 00:16] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.201359] kauditd_printk_skb: 1 callbacks suppressed
	[  +1.715801] systemd-fstab-generator[4015]: Ignoring "noauto" option for root device
	[  +6.561461] systemd-fstab-generator[4343]: Ignoring "noauto" option for root device
	[  +0.088091] kauditd_printk_skb: 54 callbacks suppressed
	[ +14.850119] systemd-fstab-generator[4566]: Ignoring "noauto" option for root device
	[  +0.152562] kauditd_printk_skb: 12 callbacks suppressed
	[Jul 4 00:17] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [c3fcbe487cac06135e867a57d1f709ba605d918a3f2fc6847def560544757352] <==
	{"level":"info","ts":"2024-07-04T00:16:34.186858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd3870a7a18a1c08 switched to configuration voters=(14787693241743055880)"}
	{"level":"info","ts":"2024-07-04T00:16:34.187201Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a5c5ca57b3a339f","local-member-id":"cd3870a7a18a1c08","added-peer-id":"cd3870a7a18a1c08","added-peer-peer-urls":["https://192.168.61.109:2380"]}
	{"level":"info","ts":"2024-07-04T00:16:34.196148Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-04T00:16:34.196412Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.109:2380"}
	{"level":"info","ts":"2024-07-04T00:16:34.196457Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.109:2380"}
	{"level":"info","ts":"2024-07-04T00:16:34.205604Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"cd3870a7a18a1c08","initial-advertise-peer-urls":["https://192.168.61.109:2380"],"listen-peer-urls":["https://192.168.61.109:2380"],"advertise-client-urls":["https://192.168.61.109:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.109:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-04T00:16:34.205724Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-04T00:16:34.522438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd3870a7a18a1c08 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-04T00:16:34.523198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd3870a7a18a1c08 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-04T00:16:34.523275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd3870a7a18a1c08 received MsgPreVoteResp from cd3870a7a18a1c08 at term 1"}
	{"level":"info","ts":"2024-07-04T00:16:34.523376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd3870a7a18a1c08 became candidate at term 2"}
	{"level":"info","ts":"2024-07-04T00:16:34.523413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd3870a7a18a1c08 received MsgVoteResp from cd3870a7a18a1c08 at term 2"}
	{"level":"info","ts":"2024-07-04T00:16:34.523469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd3870a7a18a1c08 became leader at term 2"}
	{"level":"info","ts":"2024-07-04T00:16:34.523503Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cd3870a7a18a1c08 elected leader cd3870a7a18a1c08 at term 2"}
	{"level":"info","ts":"2024-07-04T00:16:34.52705Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-04T00:16:34.531608Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"cd3870a7a18a1c08","local-member-attributes":"{Name:no-preload-317739 ClientURLs:[https://192.168.61.109:2379]}","request-path":"/0/members/cd3870a7a18a1c08/attributes","cluster-id":"3a5c5ca57b3a339f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-04T00:16:34.531685Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-04T00:16:34.532251Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-04T00:16:34.533294Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a5c5ca57b3a339f","local-member-id":"cd3870a7a18a1c08","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-04T00:16:34.533459Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-04T00:16:34.533522Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-04T00:16:34.534489Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-04T00:16:34.536828Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.109:2379"}
	{"level":"info","ts":"2024-07-04T00:16:34.537407Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-04T00:16:34.604272Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:26:01 up 14 min,  0 users,  load average: 0.08, 0.13, 0.12
	Linux no-preload-317739 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0df936d7e00e9322e90a0b31f31414a672951454f67442a8c77f4a7a5a798815] <==
	W0704 00:16:28.060128       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.091799       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.171952       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.255051       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.302726       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.305304       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.350681       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.362447       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.396780       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.441209       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.463511       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.498297       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.558402       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.558422       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.637086       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.665206       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.700192       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.784796       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.834037       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.866063       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.882702       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:29.024565       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:29.053183       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:29.179422       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:29.381097       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [3b5a2a6c13e28d8519a7ba32ea530fe4c3ff1182cfce22be0e8acf1db3d1d7b7] <==
	I0704 00:19:56.026641       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:21:36.356928       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:21:36.357167       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0704 00:21:37.357463       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:21:37.357560       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0704 00:21:37.357587       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:21:37.357665       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:21:37.357741       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0704 00:21:37.359169       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:22:37.358415       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:22:37.358508       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0704 00:22:37.358522       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:22:37.359728       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:22:37.359795       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0704 00:22:37.359819       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:24:37.358884       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:24:37.359040       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0704 00:24:37.359063       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:24:37.360141       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:24:37.360299       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0704 00:24:37.360423       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ed6771156740826c8a68cf164f11c6b55eb62ee724278a38f2223b7e1c6a60e9] <==
	I0704 00:20:23.530806       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:20:52.976167       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:20:53.547811       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:21:22.981416       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:21:23.557054       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:21:52.988425       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:21:53.567306       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:22:22.994569       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:22:23.583894       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0704 00:22:49.249200       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="288.166µs"
	E0704 00:22:53.000040       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:22:53.593272       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0704 00:23:03.253025       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="177.89µs"
	E0704 00:23:23.006925       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:23:23.603060       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:23:53.014694       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:23:53.612224       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:24:23.022543       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:24:23.622048       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:24:53.027819       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:24:53.632512       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:25:23.032939       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:25:23.641195       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:25:53.039569       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:25:53.651109       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [2b7e00135c8477fc491f1313c947f9a9ee937ea34895d5a7ac15f5ff47efa7a0] <==
	I0704 00:16:55.002890       1 server_linux.go:69] "Using iptables proxy"
	I0704 00:16:55.018002       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.109"]
	I0704 00:16:55.179464       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0704 00:16:55.181169       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0704 00:16:55.182480       1 server_linux.go:165] "Using iptables Proxier"
	I0704 00:16:55.197005       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0704 00:16:55.197561       1 server.go:872] "Version info" version="v1.30.2"
	I0704 00:16:55.197622       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0704 00:16:55.199293       1 config.go:192] "Starting service config controller"
	I0704 00:16:55.199423       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0704 00:16:55.199486       1 config.go:101] "Starting endpoint slice config controller"
	I0704 00:16:55.199511       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0704 00:16:55.200422       1 config.go:319] "Starting node config controller"
	I0704 00:16:55.200751       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0704 00:16:55.301430       1 shared_informer.go:320] Caches are synced for node config
	I0704 00:16:55.301458       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0704 00:16:55.301483       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [fa9cbb6b523ab4a602415e29b81bb0bf151695eb30de1766e56d97f66f8a0035] <==
	W0704 00:16:36.492010       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0704 00:16:36.492113       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0704 00:16:36.492304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0704 00:16:36.492496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0704 00:16:36.492580       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0704 00:16:36.493419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0704 00:16:36.493161       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0704 00:16:36.493504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0704 00:16:36.493635       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0704 00:16:36.493646       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0704 00:16:36.495578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0704 00:16:36.495614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0704 00:16:36.495671       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0704 00:16:36.495709       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0704 00:16:36.505844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0704 00:16:36.506044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0704 00:16:37.414809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0704 00:16:37.414855       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0704 00:16:37.472230       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0704 00:16:37.472465       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0704 00:16:37.481029       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0704 00:16:37.481087       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0704 00:16:38.015434       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0704 00:16:38.015485       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0704 00:16:39.967523       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 04 00:23:39 no-preload-317739 kubelet[4350]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 04 00:23:39 no-preload-317739 kubelet[4350]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 04 00:23:39 no-preload-317739 kubelet[4350]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 04 00:23:39 no-preload-317739 kubelet[4350]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 04 00:23:43 no-preload-317739 kubelet[4350]: E0704 00:23:43.230527    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:23:54 no-preload-317739 kubelet[4350]: E0704 00:23:54.230991    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:24:07 no-preload-317739 kubelet[4350]: E0704 00:24:07.230869    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:24:21 no-preload-317739 kubelet[4350]: E0704 00:24:21.231437    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:24:33 no-preload-317739 kubelet[4350]: E0704 00:24:33.230934    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:24:39 no-preload-317739 kubelet[4350]: E0704 00:24:39.244683    4350 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 04 00:24:39 no-preload-317739 kubelet[4350]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 04 00:24:39 no-preload-317739 kubelet[4350]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 04 00:24:39 no-preload-317739 kubelet[4350]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 04 00:24:39 no-preload-317739 kubelet[4350]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 04 00:24:46 no-preload-317739 kubelet[4350]: E0704 00:24:46.230540    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:25:01 no-preload-317739 kubelet[4350]: E0704 00:25:01.231275    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:25:14 no-preload-317739 kubelet[4350]: E0704 00:25:14.230260    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:25:29 no-preload-317739 kubelet[4350]: E0704 00:25:29.230698    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:25:39 no-preload-317739 kubelet[4350]: E0704 00:25:39.243507    4350 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 04 00:25:39 no-preload-317739 kubelet[4350]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 04 00:25:39 no-preload-317739 kubelet[4350]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 04 00:25:39 no-preload-317739 kubelet[4350]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 04 00:25:39 no-preload-317739 kubelet[4350]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 04 00:25:42 no-preload-317739 kubelet[4350]: E0704 00:25:42.230040    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:25:55 no-preload-317739 kubelet[4350]: E0704 00:25:55.232036    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	
	
	==> storage-provisioner [889c5e0513c8f01cc594fefe4055db41826b29d898c94b1955cc0b3c983afe3d] <==
	I0704 00:16:55.601796       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0704 00:16:55.615242       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0704 00:16:55.615438       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0704 00:16:55.631043       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0704 00:16:55.631222       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-317739_871dde78-cd1d-462e-b53b-8b0324e802e6!
	I0704 00:16:55.631805       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"313ad966-e564-4b85-8ab5-68cd73b1d89f", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-317739_871dde78-cd1d-462e-b53b-8b0324e802e6 became leader
	I0704 00:16:55.731494       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-317739_871dde78-cd1d-462e-b53b-8b0324e802e6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-317739 -n no-preload-317739
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-317739 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-t28ff
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-317739 describe pod metrics-server-569cc877fc-t28ff
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-317739 describe pod metrics-server-569cc877fc-t28ff: exit status 1 (71.919975ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-t28ff" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-317739 describe pod metrics-server-569cc877fc-t28ff: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.53s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
E0704 00:18:57.357594   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
E0704 00:21:17.046566   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
E0704 00:23:57.358047   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
E0704 00:24:20.103240   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
E0704 00:26:17.046429   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-979033 -n old-k8s-version-979033
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-979033 -n old-k8s-version-979033: exit status 2 (228.541229ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-979033" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-979033 -n old-k8s-version-979033
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-979033 -n old-k8s-version-979033: exit status 2 (221.496898ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-979033 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-979033 logs -n 25: (1.701207041s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-768841 -- sudo                         | cert-options-768841          | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:00 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-768841                                 | cert-options-768841          | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:00 UTC |
	| start   | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-652205                           | kubernetes-upgrade-652205    | jenkins | v1.33.1 | 04 Jul 24 00:01 UTC | 04 Jul 24 00:01 UTC |
	| start   | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:01 UTC | 04 Jul 24 00:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-979438                              | cert-expiration-979438       | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-979438                              | cert-expiration-979438       | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-029653 | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | disable-driver-mounts-029653                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:04 UTC |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-317739             | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-687975            | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:03 UTC | 04 Jul 24 00:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-995404  | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC | 04 Jul 24 00:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC |                     |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-979033        | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-317739                  | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC | 04 Jul 24 00:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-687975                 | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC | 04 Jul 24 00:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-979033                              | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC | 04 Jul 24 00:06 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-979033             | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC | 04 Jul 24 00:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-979033                              | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-995404       | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:07 UTC | 04 Jul 24 00:15 UTC |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
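
	Each multi-row entry in the table above is a single minikube invocation whose flags are wrapped across table rows. Reassembled from those rows (not copied from a shell history), the final start entry corresponds roughly to:

	minikube start -p default-k8s-diff-port-995404 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.30.2
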
	
	
	==> Last Start <==
	Log file created at: 2024/07/04 00:07:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0704 00:07:02.474140   62905 out.go:291] Setting OutFile to fd 1 ...
	I0704 00:07:02.474416   62905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:07:02.474427   62905 out.go:304] Setting ErrFile to fd 2...
	I0704 00:07:02.474431   62905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:07:02.474642   62905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0704 00:07:02.475219   62905 out.go:298] Setting JSON to false
	I0704 00:07:02.476307   62905 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6562,"bootTime":1720045060,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0704 00:07:02.476381   62905 start.go:139] virtualization: kvm guest
	I0704 00:07:02.478637   62905 out.go:177] * [default-k8s-diff-port-995404] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0704 00:07:02.480018   62905 notify.go:220] Checking for updates...
	I0704 00:07:02.480039   62905 out.go:177]   - MINIKUBE_LOCATION=18998
	I0704 00:07:02.481260   62905 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0704 00:07:02.482587   62905 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:07:02.483820   62905 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0704 00:07:02.484969   62905 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0704 00:07:02.486122   62905 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0704 00:07:02.487811   62905 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:07:02.488453   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:07:02.488538   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:07:02.503924   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0704 00:07:02.504316   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:07:02.504904   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:07:02.504924   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:07:02.505253   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:07:02.505457   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:07:02.505724   62905 driver.go:392] Setting default libvirt URI to qemu:///system
	I0704 00:07:02.506039   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:07:02.506081   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:07:02.521645   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0704 00:07:02.522115   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:07:02.522596   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:07:02.522618   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:07:02.522945   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:07:02.523144   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:07:02.557351   62905 out.go:177] * Using the kvm2 driver based on existing profile
	I0704 00:07:02.558600   62905 start.go:297] selected driver: kvm2
	I0704 00:07:02.558620   62905 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:07:02.558762   62905 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0704 00:07:02.559468   62905 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:07:02.559562   62905 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0704 00:07:02.575228   62905 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0704 00:07:02.575603   62905 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:07:02.575680   62905 cni.go:84] Creating CNI manager for ""
	I0704 00:07:02.575697   62905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:07:02.575749   62905 start.go:340] cluster config:
	{Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:07:02.575887   62905 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:07:02.577884   62905 out.go:177] * Starting "default-k8s-diff-port-995404" primary control-plane node in "default-k8s-diff-port-995404" cluster
	I0704 00:07:01.500168   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:02.579179   62905 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:07:02.579227   62905 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0704 00:07:02.579238   62905 cache.go:56] Caching tarball of preloaded images
	I0704 00:07:02.579331   62905 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0704 00:07:02.579344   62905 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0704 00:07:02.579446   62905 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/config.json ...
	I0704 00:07:02.579752   62905 start.go:360] acquireMachinesLock for default-k8s-diff-port-995404: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
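
	The preload check above resolves to a cached tarball on the Jenkins host; a minimal way to confirm that cache entry is present (path taken verbatim from the log) is:

	ls -lh /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
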
	I0704 00:07:07.580107   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:10.652249   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:16.732106   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:19.804162   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:25.884146   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:28.956241   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:35.036158   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:38.108118   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:44.188129   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:47.260270   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:53.340147   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:56.412123   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:02.492156   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:05.564174   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:11.644195   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:14.716226   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:20.796193   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:23.868215   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:29.948219   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:33.020164   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:39.100138   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:42.172138   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:48.252157   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:51.324205   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:57.404167   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:00.476183   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:06.556184   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:09.628167   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:15.708158   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:18.780202   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:24.860209   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:27.932273   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:34.012145   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:37.084155   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:43.164171   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:46.236155   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:52.316187   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:55.388138   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
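
	The repeated "no route to host" lines above are failed TCP dials to the no-preload VM's SSH port. A minimal manual probe of the same endpoint (address and port taken from the log; assumes netcat is installed on the host) would be:

	nc -vz -w 5 192.168.61.109 22
	# or, without netcat, using bash's built-in /dev/tcp redirection:
	timeout 5 bash -c 'cat < /dev/null > /dev/tcp/192.168.61.109/22' && echo reachable || echo unreachable
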
	I0704 00:09:58.392192   62327 start.go:364] duration metric: took 4m4.42362175s to acquireMachinesLock for "embed-certs-687975"
	I0704 00:09:58.392250   62327 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:09:58.392266   62327 fix.go:54] fixHost starting: 
	I0704 00:09:58.392607   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:09:58.392633   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:09:58.408783   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45683
	I0704 00:09:58.409328   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:09:58.409898   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:09:58.409918   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:09:58.410234   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:09:58.410438   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:09:58.410602   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:09:58.412175   62327 fix.go:112] recreateIfNeeded on embed-certs-687975: state=Stopped err=<nil>
	I0704 00:09:58.412200   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	W0704 00:09:58.412361   62327 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:09:58.414467   62327 out.go:177] * Restarting existing kvm2 VM for "embed-certs-687975" ...
	I0704 00:09:58.415958   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Start
	I0704 00:09:58.416159   62327 main.go:141] libmachine: (embed-certs-687975) Ensuring networks are active...
	I0704 00:09:58.417105   62327 main.go:141] libmachine: (embed-certs-687975) Ensuring network default is active
	I0704 00:09:58.417440   62327 main.go:141] libmachine: (embed-certs-687975) Ensuring network mk-embed-certs-687975 is active
	I0704 00:09:58.417879   62327 main.go:141] libmachine: (embed-certs-687975) Getting domain xml...
	I0704 00:09:58.418765   62327 main.go:141] libmachine: (embed-certs-687975) Creating domain...
	I0704 00:09:58.389743   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:09:58.389787   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:09:58.390105   62043 buildroot.go:166] provisioning hostname "no-preload-317739"
	I0704 00:09:58.390132   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:09:58.390388   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:09:58.392051   62043 machine.go:97] duration metric: took 4m37.421604249s to provisionDockerMachine
	I0704 00:09:58.392103   62043 fix.go:56] duration metric: took 4m37.444018711s for fixHost
	I0704 00:09:58.392111   62043 start.go:83] releasing machines lock for "no-preload-317739", held for 4m37.444044667s
	W0704 00:09:58.392131   62043 start.go:713] error starting host: provision: host is not running
	W0704 00:09:58.392245   62043 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0704 00:09:58.392263   62043 start.go:728] Will try again in 5 seconds ...
	I0704 00:09:59.657066   62327 main.go:141] libmachine: (embed-certs-687975) Waiting to get IP...
	I0704 00:09:59.657930   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:09:59.658398   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:09:59.658456   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:09:59.658368   63531 retry.go:31] will retry after 267.829987ms: waiting for machine to come up
	I0704 00:09:59.928142   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:09:59.928694   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:09:59.928720   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:09:59.928646   63531 retry.go:31] will retry after 240.308314ms: waiting for machine to come up
	I0704 00:10:00.170098   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:00.170541   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:00.170571   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:00.170481   63531 retry.go:31] will retry after 424.462623ms: waiting for machine to come up
	I0704 00:10:00.596288   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:00.596726   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:00.596755   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:00.596671   63531 retry.go:31] will retry after 450.228437ms: waiting for machine to come up
	I0704 00:10:01.048174   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:01.048731   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:01.048758   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:01.048689   63531 retry.go:31] will retry after 583.591642ms: waiting for machine to come up
	I0704 00:10:01.633432   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:01.633773   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:01.633806   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:01.633721   63531 retry.go:31] will retry after 789.480552ms: waiting for machine to come up
	I0704 00:10:02.424987   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:02.425388   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:02.425424   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:02.425329   63531 retry.go:31] will retry after 764.760669ms: waiting for machine to come up
	I0704 00:10:03.191570   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:03.191924   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:03.191953   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:03.191859   63531 retry.go:31] will retry after 1.415422425s: waiting for machine to come up
	I0704 00:10:03.392486   62043 start.go:360] acquireMachinesLock for no-preload-317739: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0704 00:10:04.608804   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:04.609306   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:04.609336   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:04.609244   63531 retry.go:31] will retry after 1.426962337s: waiting for machine to come up
	I0704 00:10:06.038152   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:06.038630   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:06.038685   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:06.038604   63531 retry.go:31] will retry after 1.511071665s: waiting for machine to come up
	I0704 00:10:07.551435   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:07.551977   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:07.552000   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:07.551934   63531 retry.go:31] will retry after 2.275490025s: waiting for machine to come up
	I0704 00:10:09.829070   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:09.829545   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:09.829577   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:09.829480   63531 retry.go:31] will retry after 3.272884116s: waiting for machine to come up
	I0704 00:10:13.103857   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:13.104320   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:13.104356   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:13.104267   63531 retry.go:31] will retry after 4.532823906s: waiting for machine to come up
	I0704 00:10:17.642356   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.642900   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has current primary IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.642923   62327 main.go:141] libmachine: (embed-certs-687975) Found IP for machine: 192.168.39.213
	I0704 00:10:17.642935   62327 main.go:141] libmachine: (embed-certs-687975) Reserving static IP address...
	I0704 00:10:17.643368   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "embed-certs-687975", mac: "52:54:00:ee:64:73", ip: "192.168.39.213"} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.643397   62327 main.go:141] libmachine: (embed-certs-687975) DBG | skip adding static IP to network mk-embed-certs-687975 - found existing host DHCP lease matching {name: "embed-certs-687975", mac: "52:54:00:ee:64:73", ip: "192.168.39.213"}
	I0704 00:10:17.643408   62327 main.go:141] libmachine: (embed-certs-687975) Reserved static IP address: 192.168.39.213
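
	The wait loop above polls libvirt until the embed-certs VM obtains a DHCP lease on network mk-embed-certs-687975. A minimal sketch of inspecting the same lease table by hand, assuming virsh is available on the host, is:

	sudo virsh net-dhcp-leases mk-embed-certs-687975
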
	I0704 00:10:17.643421   62327 main.go:141] libmachine: (embed-certs-687975) Waiting for SSH to be available...
	I0704 00:10:17.643433   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Getting to WaitForSSH function...
	I0704 00:10:17.645723   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.646019   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.646047   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.646176   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Using SSH client type: external
	I0704 00:10:17.646199   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa (-rw-------)
	I0704 00:10:17.646264   62327 main.go:141] libmachine: (embed-certs-687975) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:10:17.646288   62327 main.go:141] libmachine: (embed-certs-687975) DBG | About to run SSH command:
	I0704 00:10:17.646306   62327 main.go:141] libmachine: (embed-certs-687975) DBG | exit 0
	I0704 00:10:17.772683   62327 main.go:141] libmachine: (embed-certs-687975) DBG | SSH cmd err, output: <nil>: 
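
	The "exit 0" probe above runs through the external ssh client whose full argument list appears in the log. An approximately equivalent manual check from the host, using the key path, user, and address shown above, is:

	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa \
	    docker@192.168.39.213 'exit 0' && echo "SSH is up"
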
	I0704 00:10:17.773080   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetConfigRaw
	I0704 00:10:17.773695   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:17.776766   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.777155   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.777197   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.777469   62327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/config.json ...
	I0704 00:10:17.777698   62327 machine.go:94] provisionDockerMachine start ...
	I0704 00:10:17.777721   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:17.777970   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:17.780304   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.780636   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.780667   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.780800   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:17.780985   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.781136   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.781354   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:17.781533   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:17.781729   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:17.781740   62327 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:10:17.884677   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:10:17.884711   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetMachineName
	I0704 00:10:17.884940   62327 buildroot.go:166] provisioning hostname "embed-certs-687975"
	I0704 00:10:17.884967   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetMachineName
	I0704 00:10:17.885180   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:17.887980   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.888394   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.888417   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.888502   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:17.888758   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.888960   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.889102   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:17.889335   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:17.889538   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:17.889557   62327 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-687975 && echo "embed-certs-687975" | sudo tee /etc/hostname
	I0704 00:10:18.006597   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-687975
	
	I0704 00:10:18.006624   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.009477   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.009772   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.009805   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.009942   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.010148   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.010315   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.010485   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.010664   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:18.010821   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:18.010836   62327 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-687975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-687975/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-687975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:10:18.121310   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:10:18.121350   62327 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:10:18.121374   62327 buildroot.go:174] setting up certificates
	I0704 00:10:18.121395   62327 provision.go:84] configureAuth start
	I0704 00:10:18.121411   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetMachineName
	I0704 00:10:18.121701   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:18.124118   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.124499   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.124528   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.124646   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.126489   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.126778   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.126802   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.126913   62327 provision.go:143] copyHostCerts
	I0704 00:10:18.126987   62327 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:10:18.127002   62327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:10:18.127090   62327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:10:18.127222   62327 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:10:18.127232   62327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:10:18.127272   62327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:10:18.127348   62327 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:10:18.127357   62327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:10:18.127388   62327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:10:18.127461   62327 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.embed-certs-687975 san=[127.0.0.1 192.168.39.213 embed-certs-687975 localhost minikube]
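
	The server certificate generated above is written to the machines/server.pem path shown in the log and carries the SANs listed there (127.0.0.1, 192.168.39.213, embed-certs-687975, localhost, minikube). Assuming openssl is available on the host, the SANs in the generated file can be verified with:

	openssl x509 -in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'
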
	I0704 00:10:18.451857   62327 provision.go:177] copyRemoteCerts
	I0704 00:10:18.451947   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:10:18.451980   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.454696   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.455051   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.455076   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.455301   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.455512   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.455675   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.455798   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:18.540053   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:10:18.566392   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0704 00:10:18.593268   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0704 00:10:18.619051   62327 provision.go:87] duration metric: took 497.642815ms to configureAuth
	I0704 00:10:18.619081   62327 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:10:18.619299   62327 config.go:182] Loaded profile config "embed-certs-687975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:10:18.619386   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.621773   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.622057   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.622087   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.622249   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.622475   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.622632   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.622760   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.622971   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:18.623143   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:18.623160   62327 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:10:19.141009   62670 start.go:364] duration metric: took 3m45.774576164s to acquireMachinesLock for "old-k8s-version-979033"
	I0704 00:10:19.141068   62670 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:10:19.141115   62670 fix.go:54] fixHost starting: 
	I0704 00:10:19.141561   62670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:19.141591   62670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:19.159844   62670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43373
	I0704 00:10:19.160353   62670 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:19.160945   62670 main.go:141] libmachine: Using API Version  1
	I0704 00:10:19.160971   62670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:19.161347   62670 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:19.161640   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:19.161799   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetState
	I0704 00:10:19.163575   62670 fix.go:112] recreateIfNeeded on old-k8s-version-979033: state=Stopped err=<nil>
	I0704 00:10:19.163597   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	W0704 00:10:19.163753   62670 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:10:19.165906   62670 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-979033" ...
	I0704 00:10:18.904225   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:10:18.904256   62327 machine.go:97] duration metric: took 1.126543823s to provisionDockerMachine
	I0704 00:10:18.904269   62327 start.go:293] postStartSetup for "embed-certs-687975" (driver="kvm2")
	I0704 00:10:18.904283   62327 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:10:18.904304   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:18.904626   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:10:18.904652   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.907391   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.907864   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.907915   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.908206   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.908453   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.908623   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.908768   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:18.991583   62327 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:10:18.996145   62327 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:10:18.996187   62327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:10:18.996255   62327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:10:18.996341   62327 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:10:18.996443   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:10:19.006978   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:19.033605   62327 start.go:296] duration metric: took 129.322677ms for postStartSetup
	I0704 00:10:19.033643   62327 fix.go:56] duration metric: took 20.641387402s for fixHost
	I0704 00:10:19.033663   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:19.036302   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.036813   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.036877   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.036919   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:19.037115   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.037307   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.037488   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:19.037687   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:19.037888   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:19.037905   62327 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0704 00:10:19.140855   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051819.116387913
	
	I0704 00:10:19.140878   62327 fix.go:216] guest clock: 1720051819.116387913
	I0704 00:10:19.140885   62327 fix.go:229] Guest: 2024-07-04 00:10:19.116387913 +0000 UTC Remote: 2024-07-04 00:10:19.033646932 +0000 UTC m=+265.206951926 (delta=82.740981ms)
	I0704 00:10:19.140914   62327 fix.go:200] guest clock delta is within tolerance: 82.740981ms
	I0704 00:10:19.140920   62327 start.go:83] releasing machines lock for "embed-certs-687975", held for 20.748686488s
	I0704 00:10:19.140951   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.141280   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:19.144343   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.144774   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.144802   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.144975   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.145590   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.145810   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.145896   62327 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:10:19.145941   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:19.146048   62327 ssh_runner.go:195] Run: cat /version.json
	I0704 00:10:19.146074   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:19.148955   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.148977   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.149312   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.149339   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.149470   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.149493   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.149555   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:19.149755   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.149831   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:19.149921   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:19.150094   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.150096   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:19.150293   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:19.150459   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:19.250910   62327 ssh_runner.go:195] Run: systemctl --version
	I0704 00:10:19.257541   62327 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:10:19.413446   62327 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:10:19.419871   62327 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:10:19.419985   62327 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:10:19.439141   62327 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:10:19.439171   62327 start.go:494] detecting cgroup driver to use...
	I0704 00:10:19.439253   62327 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:10:19.457474   62327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:10:19.479279   62327 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:10:19.479353   62327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:10:19.498771   62327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:10:19.513968   62327 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:10:19.640950   62327 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:10:19.817181   62327 docker.go:233] disabling docker service ...
	I0704 00:10:19.817248   62327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:10:19.838524   62327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:10:19.855479   62327 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:10:19.976564   62327 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:10:20.106140   62327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:10:20.121152   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:10:20.143893   62327 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:10:20.143965   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.156806   62327 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:10:20.156892   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.168660   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.180592   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.192151   62327 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:10:20.204202   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.215502   62327 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.235355   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.246834   62327 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:10:20.264718   62327 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:10:20.264786   62327 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:10:20.280133   62327 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:10:20.291521   62327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:20.416530   62327 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:10:20.567852   62327 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:10:20.567952   62327 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:10:20.572992   62327 start.go:562] Will wait 60s for crictl version
	I0704 00:10:20.573052   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:10:20.577295   62327 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:10:20.617746   62327 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
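The two 60s waits above (for the CRI socket path and for crictl to answer) are simple polling loops. A minimal sketch of the socket half, assuming the /var/run/crio/crio.sock path from the log and an arbitrary 200ms poll interval; this is illustrative, not minikube's actual start.go code.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("CRI socket is ready")
}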
	I0704 00:10:20.617840   62327 ssh_runner.go:195] Run: crio --version
	I0704 00:10:20.648158   62327 ssh_runner.go:195] Run: crio --version
	I0704 00:10:20.682039   62327 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:10:19.167360   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .Start
	I0704 00:10:19.167575   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring networks are active...
	I0704 00:10:19.168591   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring network default is active
	I0704 00:10:19.169064   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring network mk-old-k8s-version-979033 is active
	I0704 00:10:19.169488   62670 main.go:141] libmachine: (old-k8s-version-979033) Getting domain xml...
	I0704 00:10:19.170309   62670 main.go:141] libmachine: (old-k8s-version-979033) Creating domain...
	I0704 00:10:20.487278   62670 main.go:141] libmachine: (old-k8s-version-979033) Waiting to get IP...
	I0704 00:10:20.488195   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.488679   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.488751   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.488643   63677 retry.go:31] will retry after 227.362639ms: waiting for machine to come up
	I0704 00:10:20.718322   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.718794   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.718820   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.718766   63677 retry.go:31] will retry after 266.291784ms: waiting for machine to come up
	I0704 00:10:20.986238   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.986779   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.986805   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.986726   63677 retry.go:31] will retry after 308.137887ms: waiting for machine to come up
	I0704 00:10:21.296450   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:21.297052   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:21.297085   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:21.297001   63677 retry.go:31] will retry after 400.976495ms: waiting for machine to come up
	I0704 00:10:21.699758   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:21.700266   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:21.700299   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:21.700227   63677 retry.go:31] will retry after 464.329709ms: waiting for machine to come up
	I0704 00:10:22.165905   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:22.166452   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:22.166482   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:22.166393   63677 retry.go:31] will retry after 652.357119ms: waiting for machine to come up
	I0704 00:10:22.820302   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:22.820777   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:22.820800   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:22.820725   63677 retry.go:31] will retry after 835.974316ms: waiting for machine to come up
	I0704 00:10:20.683820   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:20.686663   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:20.687040   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:20.687070   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:20.687312   62327 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0704 00:10:20.691953   62327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:20.705149   62327 kubeadm.go:877] updating cluster {Name:embed-certs-687975 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-687975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:10:20.705368   62327 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:10:20.705433   62327 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:20.748549   62327 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:10:20.748613   62327 ssh_runner.go:195] Run: which lz4
	I0704 00:10:20.752991   62327 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0704 00:10:20.757764   62327 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:10:20.757810   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0704 00:10:22.395918   62327 crio.go:462] duration metric: took 1.642974021s to copy over tarball
	I0704 00:10:22.396029   62327 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:10:23.658976   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:23.659482   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:23.659509   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:23.659432   63677 retry.go:31] will retry after 1.244693887s: waiting for machine to come up
	I0704 00:10:24.906359   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:24.906769   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:24.906801   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:24.906733   63677 retry.go:31] will retry after 1.212336933s: waiting for machine to come up
	I0704 00:10:26.121130   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:26.121655   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:26.121684   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:26.121599   63677 retry.go:31] will retry after 1.622791006s: waiting for machine to come up
	I0704 00:10:27.745848   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:27.746399   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:27.746427   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:27.746349   63677 retry.go:31] will retry after 2.596558781s: waiting for machine to come up
	I0704 00:10:24.757599   62327 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.3615352s)
	I0704 00:10:24.757639   62327 crio.go:469] duration metric: took 2.361688123s to extract the tarball
	I0704 00:10:24.757650   62327 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:10:24.796023   62327 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:24.842665   62327 crio.go:514] all images are preloaded for cri-o runtime.
	I0704 00:10:24.842691   62327 cache_images.go:84] Images are preloaded, skipping loading
	I0704 00:10:24.842699   62327 kubeadm.go:928] updating node { 192.168.39.213 8443 v1.30.2 crio true true} ...
	I0704 00:10:24.842805   62327 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-687975 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-687975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:10:24.842891   62327 ssh_runner.go:195] Run: crio config
	I0704 00:10:24.892918   62327 cni.go:84] Creating CNI manager for ""
	I0704 00:10:24.892952   62327 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:10:24.892979   62327 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:10:24.893021   62327 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.213 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-687975 NodeName:embed-certs-687975 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:10:24.893288   62327 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-687975"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
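The kubeadm config above is generated programmatically by minikube before being written to /var/tmp/minikube/kubeadm.yaml.new. One natural way to produce such a document is a Go text/template; a trimmed-down illustrative sketch covering only the InitConfiguration fragment follows. The template text and the parameter struct below are assumptions for illustration, not minikube's actual kubeadm.go data structures, while the values are the ones shown in the log.

package main

import (
	"os"
	"text/template"
)

var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`))

type kubeadmParams struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

func main() {
	p := kubeadmParams{
		AdvertiseAddress: "192.168.39.213",
		APIServerPort:    8443,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "embed-certs-687975",
		NodeIP:           "192.168.39.213",
	}
	// Render the fragment to stdout; minikube writes the full document to a file instead.
	if err := kubeadmTmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}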
	
	I0704 00:10:24.893372   62327 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:10:24.905019   62327 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:10:24.905092   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:10:24.919465   62327 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0704 00:10:24.942754   62327 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:10:24.965089   62327 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0704 00:10:24.988121   62327 ssh_runner.go:195] Run: grep 192.168.39.213	control-plane.minikube.internal$ /etc/hosts
	I0704 00:10:24.993425   62327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:25.006830   62327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:25.145124   62327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:10:25.164000   62327 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975 for IP: 192.168.39.213
	I0704 00:10:25.164021   62327 certs.go:194] generating shared ca certs ...
	I0704 00:10:25.164036   62327 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:25.164285   62327 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:10:25.164361   62327 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:10:25.164375   62327 certs.go:256] generating profile certs ...
	I0704 00:10:25.164522   62327 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/client.key
	I0704 00:10:25.164598   62327 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/apiserver.key.c5f2d6ca
	I0704 00:10:25.164657   62327 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/proxy-client.key
	I0704 00:10:25.164816   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:10:25.164875   62327 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:10:25.164889   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:10:25.164918   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:10:25.164949   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:10:25.164983   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:10:25.165049   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:25.165801   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:10:25.203822   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:10:25.240795   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:10:25.273743   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:10:25.312678   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0704 00:10:25.339172   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:10:25.365805   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:10:25.392155   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0704 00:10:25.417662   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:10:25.445025   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:10:25.472697   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:10:25.505204   62327 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:10:25.536867   62327 ssh_runner.go:195] Run: openssl version
	I0704 00:10:25.543487   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:10:25.555550   62327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:25.560599   62327 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:25.560678   62327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:25.566757   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:10:25.578244   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:10:25.590271   62327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:10:25.595409   62327 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:10:25.595475   62327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:10:25.601755   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:10:25.614572   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:10:25.627445   62327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:10:25.632631   62327 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:10:25.632688   62327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:10:25.639047   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:10:25.651199   62327 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:10:25.656829   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:10:25.663869   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:10:25.670993   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:10:25.678309   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:10:25.685282   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:10:25.692383   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
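The six `openssl x509 -checkend 86400` runs above verify that none of the control-plane certificates expires within the next 24 hours. The same check expressed in Go, as a small self-contained sketch: the two paths in main are taken from the log, while the helper name is made up for this example.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window -- the same condition `openssl x509 -checkend` tests.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Println(p, "error:", err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}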
	I0704 00:10:25.699625   62327 kubeadm.go:391] StartCluster: {Name:embed-certs-687975 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:embed-certs-687975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:10:25.700176   62327 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:10:25.700240   62327 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:25.744248   62327 cri.go:89] found id: ""
	I0704 00:10:25.744323   62327 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:10:25.755623   62327 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:10:25.755643   62327 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:10:25.755648   62327 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:10:25.755697   62327 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:10:25.766631   62327 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:10:25.767627   62327 kubeconfig.go:125] found "embed-certs-687975" server: "https://192.168.39.213:8443"
	I0704 00:10:25.769625   62327 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:10:25.781667   62327 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.213
	I0704 00:10:25.781710   62327 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:10:25.781723   62327 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:10:25.781774   62327 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:25.829584   62327 cri.go:89] found id: ""
	I0704 00:10:25.829669   62327 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:10:25.847738   62327 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:10:25.859825   62327 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:10:25.859864   62327 kubeadm.go:156] found existing configuration files:
	
	I0704 00:10:25.859931   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:10:25.869666   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:10:25.869722   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:10:25.879997   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:10:25.889905   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:10:25.889982   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:10:25.900023   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:10:25.909669   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:10:25.909733   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:10:25.919933   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:10:25.929422   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:10:25.929499   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:10:25.939577   62327 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:10:25.949669   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:26.088494   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.367443   62327 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.278903285s)
	I0704 00:10:27.367492   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.626929   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.739721   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.860860   62327 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:10:27.860938   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:28.361670   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:30.344595   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:30.345134   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:30.345157   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:30.345089   63677 retry.go:31] will retry after 2.372913839s: waiting for machine to come up
	I0704 00:10:32.719441   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:32.719866   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:32.719910   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:32.719827   63677 retry.go:31] will retry after 3.651406896s: waiting for machine to come up
	I0704 00:10:28.861698   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:28.883024   62327 api_server.go:72] duration metric: took 1.02216952s to wait for apiserver process to appear ...
	I0704 00:10:28.883057   62327 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:10:28.883083   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:28.883625   62327 api_server.go:269] stopped: https://192.168.39.213:8443/healthz: Get "https://192.168.39.213:8443/healthz": dial tcp 192.168.39.213:8443: connect: connection refused
	I0704 00:10:29.383561   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:31.679543   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:10:31.679578   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:10:31.679594   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:31.754659   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:31.754696   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:31.883935   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:31.927087   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:31.927130   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:32.383560   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:32.389095   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:32.389129   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:32.883827   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:32.890357   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:32.890385   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:33.383944   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:33.388951   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0704 00:10:33.396092   62327 api_server.go:141] control plane version: v1.30.2
	I0704 00:10:33.396119   62327 api_server.go:131] duration metric: took 4.513054882s to wait for apiserver health ...
	I0704 00:10:33.396130   62327 cni.go:84] Creating CNI manager for ""
	I0704 00:10:33.396136   62327 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:10:33.398181   62327 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:10:33.399682   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:10:33.411938   62327 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0704 00:10:33.436710   62327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:10:33.447604   62327 system_pods.go:59] 8 kube-system pods found
	I0704 00:10:33.447639   62327 system_pods.go:61] "coredns-7db6d8ff4d-2bn7d" [e6d756a8-df4e-414b-b44c-32fb728c6feb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0704 00:10:33.447649   62327 system_pods.go:61] "etcd-embed-certs-687975" [72f47af0-57ca-4529-b6e4-92f543f8ada9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0704 00:10:33.447658   62327 system_pods.go:61] "kube-apiserver-embed-certs-687975" [e58beb1b-1984-4a9f-beb1-597cca55a0c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0704 00:10:33.447663   62327 system_pods.go:61] "kube-controller-manager-embed-certs-687975" [bdae6f32-b454-4a66-a3d1-a22c2a64073d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0704 00:10:33.447668   62327 system_pods.go:61] "kube-proxy-9phtm" [6b5a4c0e-632d-4c1c-bfa7-f53448618efb] Running
	I0704 00:10:33.447673   62327 system_pods.go:61] "kube-scheduler-embed-certs-687975" [72640f73-25f5-47ec-8da0-396fc31fa653] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0704 00:10:33.447678   62327 system_pods.go:61] "metrics-server-569cc877fc-jpmsg" [e2561edc-d580-461c-acae-218e6b7a2f67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:10:33.447682   62327 system_pods.go:61] "storage-provisioner" [1ac6edec-3e4e-42bd-8848-1388594611e1] Running
	I0704 00:10:33.447688   62327 system_pods.go:74] duration metric: took 10.954745ms to wait for pod list to return data ...
	I0704 00:10:33.447696   62327 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:10:33.452408   62327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:10:33.452448   62327 node_conditions.go:123] node cpu capacity is 2
	I0704 00:10:33.452460   62327 node_conditions.go:105] duration metric: took 4.757567ms to run NodePressure ...
	I0704 00:10:33.452476   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:33.724052   62327 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0704 00:10:33.732188   62327 kubeadm.go:733] kubelet initialised
	I0704 00:10:33.732211   62327 kubeadm.go:734] duration metric: took 8.128083ms waiting for restarted kubelet to initialise ...
	I0704 00:10:33.732220   62327 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:10:33.739344   62327 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.746483   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.746509   62327 pod_ready.go:81] duration metric: took 7.141056ms for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.746519   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.746526   62327 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.755457   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "etcd-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.755489   62327 pod_ready.go:81] duration metric: took 8.954479ms for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.755502   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "etcd-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.755512   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.762439   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.762476   62327 pod_ready.go:81] duration metric: took 6.95216ms for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.762489   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.762501   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.842246   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.842281   62327 pod_ready.go:81] duration metric: took 79.767249ms for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.842294   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.842303   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:34.240034   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-proxy-9phtm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.240061   62327 pod_ready.go:81] duration metric: took 397.745361ms for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:34.240070   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-proxy-9phtm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.240076   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:34.640781   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.640808   62327 pod_ready.go:81] duration metric: took 400.726608ms for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:34.640818   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.640823   62327 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:35.040614   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:35.040646   62327 pod_ready.go:81] duration metric: took 399.813017ms for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:35.040656   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:35.040662   62327 pod_ready.go:38] duration metric: took 1.308435069s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:10:35.040678   62327 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:10:35.053971   62327 ops.go:34] apiserver oom_adj: -16
	I0704 00:10:35.053997   62327 kubeadm.go:591] duration metric: took 9.298343033s to restartPrimaryControlPlane
	I0704 00:10:35.054008   62327 kubeadm.go:393] duration metric: took 9.354393795s to StartCluster
	I0704 00:10:35.054028   62327 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:35.054114   62327 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:10:35.055656   62327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:35.056019   62327 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:10:35.056104   62327 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:10:35.056189   62327 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-687975"
	I0704 00:10:35.056217   62327 config.go:182] Loaded profile config "embed-certs-687975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:10:35.056226   62327 addons.go:69] Setting default-storageclass=true in profile "embed-certs-687975"
	I0704 00:10:35.056234   62327 addons.go:69] Setting metrics-server=true in profile "embed-certs-687975"
	I0704 00:10:35.056256   62327 addons.go:234] Setting addon metrics-server=true in "embed-certs-687975"
	I0704 00:10:35.056257   62327 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-687975"
	W0704 00:10:35.056268   62327 addons.go:243] addon metrics-server should already be in state true
	I0704 00:10:35.056302   62327 host.go:66] Checking if "embed-certs-687975" exists ...
	I0704 00:10:35.056229   62327 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-687975"
	W0704 00:10:35.056354   62327 addons.go:243] addon storage-provisioner should already be in state true
	I0704 00:10:35.056383   62327 host.go:66] Checking if "embed-certs-687975" exists ...
	I0704 00:10:35.056630   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.056653   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.056661   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.056689   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.056702   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.056729   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.058101   62327 out.go:177] * Verifying Kubernetes components...
	I0704 00:10:35.059927   62327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:35.072266   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43485
	I0704 00:10:35.072542   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0704 00:10:35.072699   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.072965   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.073191   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.073229   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.073455   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.073479   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.073608   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.073799   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.073838   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.074311   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.074344   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.076024   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44145
	I0704 00:10:35.076434   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.076866   62327 addons.go:234] Setting addon default-storageclass=true in "embed-certs-687975"
	W0704 00:10:35.076884   62327 addons.go:243] addon default-storageclass should already be in state true
	I0704 00:10:35.076905   62327 host.go:66] Checking if "embed-certs-687975" exists ...
	I0704 00:10:35.076965   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.076997   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.077241   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.077273   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.077376   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.077901   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.077951   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.091096   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I0704 00:10:35.091624   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.092231   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.092260   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.092643   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.092738   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0704 00:10:35.092820   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.093059   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.093555   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.093577   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.093913   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.094537   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:35.094743   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.094764   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.096976   62327 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:35.098487   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I0704 00:10:35.098597   62327 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:10:35.098614   62327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:10:35.098632   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:35.098888   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.099368   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.099386   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.099749   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.100200   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.102539   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:35.103028   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.103608   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:35.103637   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.103791   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:35.104008   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:35.104177   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:35.104316   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:35.104776   62327 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0704 00:10:35.106239   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0704 00:10:35.106260   62327 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0704 00:10:35.106313   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:35.109978   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.110458   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:35.110491   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.110684   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:35.110925   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:35.111025   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45051
	I0704 00:10:35.111091   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:35.111227   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:35.111488   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.111977   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.112005   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.112295   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.112482   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.113980   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:35.114185   62327 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:10:35.114203   62327 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:10:35.114222   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:35.117197   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.117777   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:35.117823   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.118056   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:35.118258   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:35.118426   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:35.118562   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:35.242007   62327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:10:35.267240   62327 node_ready.go:35] waiting up to 6m0s for node "embed-certs-687975" to be "Ready" ...
	I0704 00:10:35.326233   62327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:10:35.329804   62327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:10:35.431863   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0704 00:10:35.431908   62327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0704 00:10:35.490138   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0704 00:10:35.490165   62327 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0704 00:10:35.547996   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:10:35.548021   62327 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0704 00:10:35.578762   62327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:10:36.321372   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321411   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.321432   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321448   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.321794   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.321808   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.321812   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.321823   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.321825   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.321834   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321833   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.321841   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321854   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.321842   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.322111   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.322142   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.322153   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.322155   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.322182   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.322191   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.329094   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.329117   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.329531   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.329608   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.329625   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.424191   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.424216   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.424645   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.424676   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.424692   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.424707   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.424719   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.424987   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.425000   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.425012   62327 addons.go:475] Verifying addon metrics-server=true in "embed-certs-687975"
	I0704 00:10:36.427165   62327 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0704 00:10:37.761464   62905 start.go:364] duration metric: took 3m35.181652384s to acquireMachinesLock for "default-k8s-diff-port-995404"
	I0704 00:10:37.761548   62905 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:10:37.761575   62905 fix.go:54] fixHost starting: 
	I0704 00:10:37.761919   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:37.761952   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:37.779708   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0704 00:10:37.780347   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:37.780870   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:10:37.780895   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:37.781249   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:37.781513   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:37.781688   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:10:37.783447   62905 fix.go:112] recreateIfNeeded on default-k8s-diff-port-995404: state=Stopped err=<nil>
	I0704 00:10:37.783495   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	W0704 00:10:37.783674   62905 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:10:37.785628   62905 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-995404" ...
	I0704 00:10:36.373099   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.373583   62670 main.go:141] libmachine: (old-k8s-version-979033) Found IP for machine: 192.168.72.59
	I0704 00:10:36.373615   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has current primary IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.373628   62670 main.go:141] libmachine: (old-k8s-version-979033) Reserving static IP address...
	I0704 00:10:36.374030   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "old-k8s-version-979033", mac: "52:54:00:af:98:c8", ip: "192.168.72.59"} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.374068   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | skip adding static IP to network mk-old-k8s-version-979033 - found existing host DHCP lease matching {name: "old-k8s-version-979033", mac: "52:54:00:af:98:c8", ip: "192.168.72.59"}
	I0704 00:10:36.374082   62670 main.go:141] libmachine: (old-k8s-version-979033) Reserved static IP address: 192.168.72.59
	I0704 00:10:36.374113   62670 main.go:141] libmachine: (old-k8s-version-979033) Waiting for SSH to be available...
	I0704 00:10:36.374130   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Getting to WaitForSSH function...
	I0704 00:10:36.376363   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.376711   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.376747   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.376945   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using SSH client type: external
	I0704 00:10:36.376975   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa (-rw-------)
	I0704 00:10:36.377011   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:10:36.377024   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | About to run SSH command:
	I0704 00:10:36.377062   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | exit 0
	I0704 00:10:36.504300   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | SSH cmd err, output: <nil>: 
	I0704 00:10:36.504681   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetConfigRaw
	I0704 00:10:36.505301   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:36.507826   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.508270   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.508297   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.508605   62670 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/config.json ...
	I0704 00:10:36.508844   62670 machine.go:94] provisionDockerMachine start ...
	I0704 00:10:36.508865   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:36.509148   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.511475   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.511792   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.511815   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.512017   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.512205   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.512360   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.512502   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.512667   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.512836   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.512846   62670 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:10:36.616643   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:10:36.616673   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.616962   62670 buildroot.go:166] provisioning hostname "old-k8s-version-979033"
	I0704 00:10:36.616992   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.617185   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.620028   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.620368   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.620387   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.620727   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.620923   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.621106   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.621240   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.621435   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.621601   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.621613   62670 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-979033 && echo "old-k8s-version-979033" | sudo tee /etc/hostname
	I0704 00:10:36.739589   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-979033
	
	I0704 00:10:36.739611   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.742386   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.742840   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.742867   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.743119   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.743348   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.743578   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.743745   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.743925   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.744142   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.744169   62670 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-979033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-979033/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-979033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:10:36.861561   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:10:36.861592   62670 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:10:36.861621   62670 buildroot.go:174] setting up certificates
	I0704 00:10:36.861632   62670 provision.go:84] configureAuth start
	I0704 00:10:36.861644   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.861928   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:36.864490   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.864975   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.865039   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.865137   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.867752   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.868268   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.868302   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.868483   62670 provision.go:143] copyHostCerts
	I0704 00:10:36.868547   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:10:36.868560   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:10:36.868613   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:10:36.868747   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:10:36.868756   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:10:36.868783   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:10:36.868840   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:10:36.868846   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:10:36.868863   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:10:36.868913   62670 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-979033 san=[127.0.0.1 192.168.72.59 localhost minikube old-k8s-version-979033]
	I0704 00:10:37.072741   62670 provision.go:177] copyRemoteCerts
	I0704 00:10:37.072795   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:10:37.072821   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.075592   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.075937   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.075968   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.076159   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.076362   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.076541   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.076671   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.162730   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:10:37.194232   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0704 00:10:37.220644   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0704 00:10:37.246298   62670 provision.go:87] duration metric: took 384.653259ms to configureAuth
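For context on the configureAuth step above: minikube generates server.pem in Go, but an equivalent openssl sketch (illustrative only; the org and SAN list are taken verbatim from the log, and -days 1095 corresponds to the CertExpiration:26280h0m0s that appears later in the profile config) looks roughly like:

	openssl req -new -key server-key.pem -subj "/O=jenkins.old-k8s-version-979033" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 1095 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.72.59,DNS:localhost,DNS:minikube,DNS:old-k8s-version-979033') \
	  -out server.pem

The resulting server.pem, server-key.pem and ca.pem are the three files scp'd into /etc/docker in the lines that follow.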
	I0704 00:10:37.246327   62670 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:10:37.246529   62670 config.go:182] Loaded profile config "old-k8s-version-979033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0704 00:10:37.246594   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.249101   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.249491   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.249523   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.249774   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.249960   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.250140   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.250350   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.250591   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:37.250831   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:37.250856   62670 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:10:37.522551   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:10:37.522602   62670 machine.go:97] duration metric: took 1.013718943s to provisionDockerMachine
	I0704 00:10:37.522616   62670 start.go:293] postStartSetup for "old-k8s-version-979033" (driver="kvm2")
	I0704 00:10:37.522626   62670 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:10:37.522642   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.522965   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:10:37.522992   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.525421   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.525718   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.525745   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.525988   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.526250   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.526428   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.526668   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.607305   62670 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:10:37.612104   62670 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:10:37.612128   62670 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:10:37.612222   62670 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:10:37.612326   62670 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:10:37.612436   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:10:37.623597   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:37.650275   62670 start.go:296] duration metric: took 127.644599ms for postStartSetup
	I0704 00:10:37.650314   62670 fix.go:56] duration metric: took 18.50923577s for fixHost
	I0704 00:10:37.650333   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.652926   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.653270   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.653298   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.653433   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.653650   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.653836   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.653975   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.654124   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:37.654344   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:37.654356   62670 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0704 00:10:37.761309   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051837.729680185
	
	I0704 00:10:37.761333   62670 fix.go:216] guest clock: 1720051837.729680185
	I0704 00:10:37.761342   62670 fix.go:229] Guest: 2024-07-04 00:10:37.729680185 +0000 UTC Remote: 2024-07-04 00:10:37.650317632 +0000 UTC m=+244.428517044 (delta=79.362553ms)
	I0704 00:10:37.761363   62670 fix.go:200] guest clock delta is within tolerance: 79.362553ms
	I0704 00:10:37.761369   62670 start.go:83] releasing machines lock for "old-k8s-version-979033", held for 18.620323739s
	I0704 00:10:37.761421   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.761677   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:37.764522   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.764994   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.765019   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.765178   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.765760   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.765951   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.766036   62670 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:10:37.766085   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.766218   62670 ssh_runner.go:195] Run: cat /version.json
	I0704 00:10:37.766244   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.769092   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769468   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769854   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.769900   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769927   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.769944   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.770066   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.770286   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.770329   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.770443   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.770531   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.770587   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.770720   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.770832   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.873138   62670 ssh_runner.go:195] Run: systemctl --version
	I0704 00:10:37.879804   62670 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:10:38.028009   62670 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:10:38.034962   62670 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:10:38.035030   62670 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:10:38.057475   62670 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:10:38.057511   62670 start.go:494] detecting cgroup driver to use...
	I0704 00:10:38.057579   62670 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:10:38.074199   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:10:38.092880   62670 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:10:38.092932   62670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:10:38.106896   62670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:10:38.120887   62670 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:10:38.250139   62670 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:10:36.428467   62327 addons.go:510] duration metric: took 1.372366453s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0704 00:10:37.270816   62327 node_ready.go:53] node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:38.405228   62670 docker.go:233] disabling docker service ...
	I0704 00:10:38.405288   62670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:10:38.421706   62670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:10:38.438033   62670 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:10:38.586777   62670 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:10:38.721090   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:10:38.736951   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:10:38.757708   62670 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0704 00:10:38.757782   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.769723   62670 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:10:38.769796   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.783408   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.796103   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.809130   62670 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:10:38.822325   62670 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:10:38.837968   62670 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:10:38.838038   62670 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:10:38.854343   62670 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:10:38.866475   62670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:39.012506   62670 ssh_runner.go:195] Run: sudo systemctl restart crio
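After the sed edits above, the drop-in /etc/crio/crio.conf.d/02-crio.conf ends up with roughly the following keys (a sketch; the exact section layout depends on the CRI-O version):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

The restart that follows makes CRI-O pick these up before kubeadm runs.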
	I0704 00:10:39.177203   62670 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:10:39.177289   62670 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:10:39.182557   62670 start.go:562] Will wait 60s for crictl version
	I0704 00:10:39.182643   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:39.187153   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:10:39.228774   62670 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:10:39.228851   62670 ssh_runner.go:195] Run: crio --version
	I0704 00:10:39.261929   62670 ssh_runner.go:195] Run: crio --version
	I0704 00:10:39.295133   62670 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0704 00:10:37.787100   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Start
	I0704 00:10:37.787281   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Ensuring networks are active...
	I0704 00:10:37.788053   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Ensuring network default is active
	I0704 00:10:37.788456   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Ensuring network mk-default-k8s-diff-port-995404 is active
	I0704 00:10:37.788965   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Getting domain xml...
	I0704 00:10:37.789842   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Creating domain...
	I0704 00:10:39.119468   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting to get IP...
	I0704 00:10:39.120490   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.121038   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.121123   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:39.121028   63853 retry.go:31] will retry after 205.838778ms: waiting for machine to come up
	I0704 00:10:39.328771   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.329372   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.329402   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:39.329310   63853 retry.go:31] will retry after 383.540497ms: waiting for machine to come up
	I0704 00:10:39.714729   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.715299   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.715333   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:39.715239   63853 retry.go:31] will retry after 349.888862ms: waiting for machine to come up
	I0704 00:10:40.067018   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.067629   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.067658   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:40.067518   63853 retry.go:31] will retry after 560.174181ms: waiting for machine to come up
	I0704 00:10:40.629108   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.629664   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.629700   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:40.629568   63853 retry.go:31] will retry after 655.876993ms: waiting for machine to come up
	I0704 00:10:41.287664   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:41.288213   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:41.288241   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:41.288163   63853 retry.go:31] will retry after 935.211949ms: waiting for machine to come up
	I0704 00:10:42.225062   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:42.225501   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:42.225530   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:42.225448   63853 retry.go:31] will retry after 1.176205334s: waiting for machine to come up
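The retry loop above is waiting for the domain to pick up a DHCP lease on the libvirt network; the same check can be made by hand with virsh (illustrative, using the network and domain names from the log; minikube queries this through its libvirt bindings rather than the CLI):

	virsh --connect qemu:///system net-dhcp-leases mk-default-k8s-diff-port-995404
	virsh --connect qemu:///system domifaddr default-k8s-diff-port-995404 --source lease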
	I0704 00:10:39.296618   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:39.299265   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:39.299620   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:39.299648   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:39.299857   62670 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0704 00:10:39.304490   62670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:39.318619   62670 kubeadm.go:877] updating cluster {Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:10:39.318749   62670 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0704 00:10:39.318796   62670 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:39.372343   62670 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0704 00:10:39.372406   62670 ssh_runner.go:195] Run: which lz4
	I0704 00:10:39.376979   62670 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0704 00:10:39.382096   62670 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:10:39.382153   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0704 00:10:41.321459   62670 crio.go:462] duration metric: took 1.944522271s to copy over tarball
	I0704 00:10:41.321541   62670 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:10:39.272051   62327 node_ready.go:53] node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:41.776436   62327 node_ready.go:53] node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:42.272096   62327 node_ready.go:49] node "embed-certs-687975" has status "Ready":"True"
	I0704 00:10:42.272126   62327 node_ready.go:38] duration metric: took 7.004853642s for node "embed-certs-687975" to be "Ready" ...
	I0704 00:10:42.272139   62327 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:10:42.278133   62327 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.284704   62327 pod_ready.go:92] pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:42.284730   62327 pod_ready.go:81] duration metric: took 6.568077ms for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.284740   62327 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.292234   62327 pod_ready.go:92] pod "etcd-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:42.292263   62327 pod_ready.go:81] duration metric: took 7.515519ms for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.292276   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:43.403633   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:43.404251   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:43.404302   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:43.404180   63853 retry.go:31] will retry after 1.24046978s: waiting for machine to come up
	I0704 00:10:44.646709   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:44.647208   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:44.647234   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:44.647165   63853 retry.go:31] will retry after 1.631352494s: waiting for machine to come up
	I0704 00:10:46.280048   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:46.280543   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:46.280574   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:46.280492   63853 retry.go:31] will retry after 1.855805317s: waiting for machine to come up
	I0704 00:10:44.545333   62670 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.223758075s)
	I0704 00:10:44.545366   62670 crio.go:469] duration metric: took 3.223876515s to extract the tarball
	I0704 00:10:44.545404   62670 ssh_runner.go:146] rm: /preloaded.tar.lz4
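The preload handling above is a stat/copy/extract/cleanup sequence; on the guest it amounts to roughly the following (a sketch using the paths from the log; --xattrs-include security.capability keeps file capabilities on the extracted binaries):

	# the cached tarball is first copied onto the guest over SSH, then unpacked into /var
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
	sudo crictl images --output json    # re-check which images are now present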
	I0704 00:10:44.589369   62670 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:44.625017   62670 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0704 00:10:44.625055   62670 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0704 00:10:44.625143   62670 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:44.625161   62670 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.625191   62670 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:44.625372   62670 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.625393   62670 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.625146   62670 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.625223   62670 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.625700   62670 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0704 00:10:44.627479   62670 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.627544   62670 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.627580   62670 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:44.627586   62670 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0704 00:10:44.627589   62670 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.627580   62670 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.627641   62670 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:44.627665   62670 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.773014   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.821672   62670 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0704 00:10:44.821726   62670 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.821788   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.826460   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.841857   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.870213   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0704 00:10:44.895356   62670 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0704 00:10:44.895414   62670 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.895466   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.897160   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0704 00:10:44.901356   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.964305   62670 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0704 00:10:44.964356   62670 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0704 00:10:44.964404   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.964395   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0704 00:10:44.969048   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0704 00:10:44.982913   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.985558   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.990064   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.993167   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.015558   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0704 00:10:45.092189   62670 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0704 00:10:45.092237   62670 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:45.092309   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.104690   62670 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0704 00:10:45.104733   62670 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:45.104795   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.130208   62670 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0704 00:10:45.130254   62670 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.130271   62670 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0704 00:10:45.130295   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.130337   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:45.130297   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:45.130298   62670 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:45.130442   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.181491   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0704 00:10:45.181583   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.181598   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0704 00:10:45.181666   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:45.234459   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0704 00:10:45.234563   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0704 00:10:45.533133   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:45.680954   62670 cache_images.go:92] duration metric: took 1.055880702s to LoadCachedImages
	W0704 00:10:45.681039   62670 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
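The LoadCachedImages loop above follows a consistent per-image pattern: inspect the runtime store, and when the expected image is not present at the expected digest, drop any stale tag and queue a load from the local cache directory. A shell sketch of that check (image name from the log; the load itself fails here because the cached file does not exist, hence the warning):

	img=registry.k8s.io/kube-apiserver:v1.20.0
	if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
	  sudo /usr/bin/crictl rmi "$img" 2>/dev/null || true
	  # would then load .minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	fi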
	I0704 00:10:45.681053   62670 kubeadm.go:928] updating node { 192.168.72.59 8443 v1.20.0 crio true true} ...
	I0704 00:10:45.681176   62670 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-979033 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:10:45.681268   62670 ssh_runner.go:195] Run: crio config
	I0704 00:10:45.734964   62670 cni.go:84] Creating CNI manager for ""
	I0704 00:10:45.734992   62670 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:10:45.735009   62670 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:10:45.735034   62670 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.59 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-979033 NodeName:old-k8s-version-979033 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0704 00:10:45.735206   62670 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-979033"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.59"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:10:45.735287   62670 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0704 00:10:45.747614   62670 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:10:45.747700   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:10:45.759063   62670 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0704 00:10:45.778439   62670 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:10:45.798877   62670 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0704 00:10:45.820513   62670 ssh_runner.go:195] Run: grep 192.168.72.59	control-plane.minikube.internal$ /etc/hosts
	I0704 00:10:45.825346   62670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.59	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:45.839720   62670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:45.957373   62670 ssh_runner.go:195] Run: sudo systemctl start kubelet
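Once the kubelet.service unit and the 10-kubeadm.conf drop-in are written and the daemon is reloaded, the effective kubelet invocation can be inspected on the guest (a sketch, not part of the log):

	systemctl cat kubelet        # kubelet.service plus the 10-kubeadm.conf drop-in with the ExecStart shown earlier
	systemctl is-active kubelet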
	I0704 00:10:45.975621   62670 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033 for IP: 192.168.72.59
	I0704 00:10:45.975645   62670 certs.go:194] generating shared ca certs ...
	I0704 00:10:45.975671   62670 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:45.975845   62670 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:10:45.975940   62670 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:10:45.975956   62670 certs.go:256] generating profile certs ...
	I0704 00:10:45.976086   62670 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.key
	I0704 00:10:45.976184   62670 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key.03500654
	I0704 00:10:45.976236   62670 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key
	I0704 00:10:45.976376   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:10:45.976416   62670 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:10:45.976430   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:10:45.976468   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:10:45.976506   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:10:45.976541   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:10:45.976601   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:45.977480   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:10:46.016391   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:10:46.062987   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:10:46.103769   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:10:46.143109   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0704 00:10:46.193832   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:10:46.223781   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:10:46.263822   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0704 00:10:46.298657   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:10:46.325454   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:10:46.351804   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:10:46.379279   62670 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:10:46.397706   62670 ssh_runner.go:195] Run: openssl version
	I0704 00:10:46.404638   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:10:46.416778   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.422402   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.422475   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.428803   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:10:46.441082   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:10:46.453211   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.458313   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.458383   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.464706   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:10:46.476888   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:10:46.489083   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.494780   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.494856   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.501321   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:10:46.513595   62670 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:10:46.518722   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:10:46.525758   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:10:46.532590   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:10:46.540129   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:10:46.547113   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:10:46.553840   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0704 00:10:46.560502   62670 kubeadm.go:391] StartCluster: {Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:10:46.560590   62670 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:10:46.560656   62670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:46.605334   62670 cri.go:89] found id: ""
	I0704 00:10:46.605411   62670 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:10:46.619333   62670 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:10:46.619356   62670 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:10:46.619362   62670 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:10:46.619407   62670 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:10:46.631203   62670 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:10:46.632519   62670 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-979033" does not appear in /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:10:46.633417   62670 kubeconfig.go:62] /home/jenkins/minikube-integration/18998-9396/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-979033" cluster setting kubeconfig missing "old-k8s-version-979033" context setting]
	I0704 00:10:46.634783   62670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:46.637143   62670 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:10:46.649250   62670 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.59
	I0704 00:10:46.649285   62670 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:10:46.649297   62670 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:10:46.649351   62670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:46.691240   62670 cri.go:89] found id: ""
	I0704 00:10:46.691317   62670 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:10:46.710687   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:10:46.721650   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:10:46.721675   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:10:46.721728   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:10:46.731444   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:10:46.731517   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:10:46.741556   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:10:46.751544   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:10:46.751600   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:10:46.764187   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:10:46.775160   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:10:46.775224   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:10:46.785686   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:10:46.795475   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:10:46.795545   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:10:46.806960   62670 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:10:46.818355   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:46.984379   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:47.639953   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:47.883263   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:48.001200   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:48.116034   62670 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:10:48.116121   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:45.284973   62327 pod_ready.go:102] pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:46.800145   62327 pod_ready.go:92] pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.800170   62327 pod_ready.go:81] duration metric: took 4.507886037s for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.800179   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.805577   62327 pod_ready.go:92] pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.805599   62327 pod_ready.go:81] duration metric: took 5.413826ms for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.805611   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.811066   62327 pod_ready.go:92] pod "kube-proxy-9phtm" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.811085   62327 pod_ready.go:81] duration metric: took 5.469666ms for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.811094   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.815670   62327 pod_ready.go:92] pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.815690   62327 pod_ready.go:81] duration metric: took 4.589606ms for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.815700   62327 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:48.822325   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:48.137949   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:48.138359   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:48.138387   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:48.138307   63853 retry.go:31] will retry after 2.765241886s: waiting for machine to come up
	I0704 00:10:50.905039   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:50.905688   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:50.905724   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:50.905624   63853 retry.go:31] will retry after 3.145956682s: waiting for machine to come up
	I0704 00:10:48.617078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:49.116898   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:49.617127   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.116442   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.617078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:51.117096   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:51.617176   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:52.116333   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:52.616675   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:53.116408   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.822990   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:52.823438   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:54.053147   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:54.053593   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:54.053630   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:54.053544   63853 retry.go:31] will retry after 4.352124904s: waiting for machine to come up
	I0704 00:10:53.616873   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:54.116661   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:54.616248   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:55.116316   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:55.616460   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:56.116311   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:56.616502   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:57.116856   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:57.616948   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:58.117055   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:54.829173   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:57.322196   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:59.628966   62043 start.go:364] duration metric: took 56.236390336s to acquireMachinesLock for "no-preload-317739"
	I0704 00:10:59.629020   62043 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:10:59.629029   62043 fix.go:54] fixHost starting: 
	I0704 00:10:59.629441   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:59.629483   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:59.649272   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0704 00:10:59.649745   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:59.650216   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:10:59.650245   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:59.650615   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:59.650807   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:10:59.650944   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:10:59.652724   62043 fix.go:112] recreateIfNeeded on no-preload-317739: state=Stopped err=<nil>
	I0704 00:10:59.652750   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	W0704 00:10:59.652901   62043 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:10:59.655010   62043 out.go:177] * Restarting existing kvm2 VM for "no-preload-317739" ...
	I0704 00:10:59.656335   62043 main.go:141] libmachine: (no-preload-317739) Calling .Start
	I0704 00:10:59.656519   62043 main.go:141] libmachine: (no-preload-317739) Ensuring networks are active...
	I0704 00:10:59.657343   62043 main.go:141] libmachine: (no-preload-317739) Ensuring network default is active
	I0704 00:10:59.657714   62043 main.go:141] libmachine: (no-preload-317739) Ensuring network mk-no-preload-317739 is active
	I0704 00:10:59.658209   62043 main.go:141] libmachine: (no-preload-317739) Getting domain xml...
	I0704 00:10:59.658812   62043 main.go:141] libmachine: (no-preload-317739) Creating domain...
	I0704 00:10:58.407312   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.407865   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Found IP for machine: 192.168.50.164
	I0704 00:10:58.407924   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has current primary IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.407935   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Reserving static IP address...
	I0704 00:10:58.408356   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Reserved static IP address: 192.168.50.164
	I0704 00:10:58.408378   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for SSH to be available...
	I0704 00:10:58.408396   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-995404", mac: "52:54:00:ea:f6:7c", ip: "192.168.50.164"} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.408414   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | skip adding static IP to network mk-default-k8s-diff-port-995404 - found existing host DHCP lease matching {name: "default-k8s-diff-port-995404", mac: "52:54:00:ea:f6:7c", ip: "192.168.50.164"}
	I0704 00:10:58.408423   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Getting to WaitForSSH function...
	I0704 00:10:58.410737   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.411074   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.411103   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.411308   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Using SSH client type: external
	I0704 00:10:58.411344   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa (-rw-------)
	I0704 00:10:58.411384   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:10:58.411425   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | About to run SSH command:
	I0704 00:10:58.411445   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | exit 0
	I0704 00:10:58.532351   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | SSH cmd err, output: <nil>: 
	I0704 00:10:58.532719   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetConfigRaw
	I0704 00:10:58.533366   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:10:58.536176   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.536613   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.536640   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.536886   62905 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/config.json ...
	I0704 00:10:58.537129   62905 machine.go:94] provisionDockerMachine start ...
	I0704 00:10:58.537149   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:58.537389   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.539581   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.539946   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.539976   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.540099   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.540327   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.540549   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.540785   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.540976   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:58.541155   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:58.541166   62905 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:10:58.644667   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:10:58.644716   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetMachineName
	I0704 00:10:58.644986   62905 buildroot.go:166] provisioning hostname "default-k8s-diff-port-995404"
	I0704 00:10:58.645012   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetMachineName
	I0704 00:10:58.645256   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.648091   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.648519   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.648549   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.648691   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.648975   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.649174   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.649393   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.649608   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:58.649831   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:58.649857   62905 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-995404 && echo "default-k8s-diff-port-995404" | sudo tee /etc/hostname
	I0704 00:10:58.765130   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-995404
	
	I0704 00:10:58.765164   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.768571   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.768933   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.768961   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.769127   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.769343   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.769516   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.769675   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.769843   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:58.770014   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:58.770030   62905 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-995404' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-995404/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-995404' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:10:58.877852   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:10:58.877885   62905 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:10:58.877942   62905 buildroot.go:174] setting up certificates
	I0704 00:10:58.877955   62905 provision.go:84] configureAuth start
	I0704 00:10:58.877968   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetMachineName
	I0704 00:10:58.878318   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:10:58.880988   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.881321   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.881349   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.881516   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.883893   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.884213   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.884237   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.884398   62905 provision.go:143] copyHostCerts
	I0704 00:10:58.884459   62905 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:10:58.884468   62905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:10:58.884523   62905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:10:58.884628   62905 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:10:58.884639   62905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:10:58.884672   62905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:10:58.884747   62905 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:10:58.884757   62905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:10:58.884782   62905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:10:58.884838   62905 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-995404 san=[127.0.0.1 192.168.50.164 default-k8s-diff-port-995404 localhost minikube]
	I0704 00:10:58.960337   62905 provision.go:177] copyRemoteCerts
	I0704 00:10:58.960408   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:10:58.960442   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.962980   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.963389   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.963416   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.963585   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.963754   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.963905   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.964040   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.042670   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0704 00:10:59.073047   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:10:59.100579   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0704 00:10:59.127978   62905 provision.go:87] duration metric: took 250.007645ms to configureAuth
	I0704 00:10:59.128006   62905 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:10:59.128261   62905 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:10:59.128363   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.131470   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.131852   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.131906   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.132130   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.132405   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.132598   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.132771   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.132969   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:59.133176   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:59.133197   62905 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:10:59.393756   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:10:59.393791   62905 machine.go:97] duration metric: took 856.647704ms to provisionDockerMachine
	I0704 00:10:59.393808   62905 start.go:293] postStartSetup for "default-k8s-diff-port-995404" (driver="kvm2")
	I0704 00:10:59.393822   62905 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:10:59.393845   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.394143   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:10:59.394170   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.396996   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.397335   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.397366   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.397556   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.397768   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.397950   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.398094   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.479476   62905 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:10:59.484191   62905 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:10:59.484220   62905 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:10:59.484291   62905 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:10:59.484395   62905 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:10:59.484540   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:10:59.495504   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:59.520952   62905 start.go:296] duration metric: took 127.128284ms for postStartSetup
	I0704 00:10:59.521006   62905 fix.go:56] duration metric: took 21.75944045s for fixHost
	I0704 00:10:59.521029   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.523896   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.524210   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.524243   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.524360   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.524586   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.524777   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.524975   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.525166   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:59.525322   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:59.525339   62905 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:10:59.628816   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051859.612598562
	
	I0704 00:10:59.628848   62905 fix.go:216] guest clock: 1720051859.612598562
	I0704 00:10:59.628857   62905 fix.go:229] Guest: 2024-07-04 00:10:59.612598562 +0000 UTC Remote: 2024-07-04 00:10:59.52101038 +0000 UTC m=+237.085876440 (delta=91.588182ms)
	I0704 00:10:59.628881   62905 fix.go:200] guest clock delta is within tolerance: 91.588182ms
	I0704 00:10:59.628887   62905 start.go:83] releasing machines lock for "default-k8s-diff-port-995404", held for 21.867375782s
	I0704 00:10:59.628917   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.629243   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:10:59.632256   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.632628   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.632656   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.632816   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.633343   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.633561   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.633655   62905 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:10:59.633693   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.633774   62905 ssh_runner.go:195] Run: cat /version.json
	I0704 00:10:59.633792   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.636540   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.636660   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.636943   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.636972   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.637079   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.637097   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.637107   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.637292   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.637295   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.637491   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.637498   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.637650   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.637654   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.637779   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.713988   62905 ssh_runner.go:195] Run: systemctl --version
	I0704 00:10:59.743264   62905 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:10:59.895553   62905 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:10:59.902538   62905 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:10:59.902604   62905 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:10:59.919858   62905 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:10:59.919899   62905 start.go:494] detecting cgroup driver to use...
	I0704 00:10:59.919964   62905 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:10:59.940739   62905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:10:59.961053   62905 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:10:59.961114   62905 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:10:59.980549   62905 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:11:00.002843   62905 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:11:00.133319   62905 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:11:00.307416   62905 docker.go:233] disabling docker service ...
	I0704 00:11:00.307484   62905 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:11:00.325714   62905 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:11:00.342008   62905 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:11:00.469418   62905 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:11:00.594775   62905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:11:00.612900   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:11:00.636854   62905 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:11:00.636912   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.650940   62905 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:11:00.651007   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.664849   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.678200   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.691929   62905 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:11:00.708729   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.721874   62905 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.747189   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.766255   62905 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:11:00.778139   62905 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:11:00.778208   62905 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:11:00.794170   62905 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:11:00.805772   62905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:00.945526   62905 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:11:01.095767   62905 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:11:01.095849   62905 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:11:01.101337   62905 start.go:562] Will wait 60s for crictl version
	I0704 00:11:01.101410   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:11:01.105792   62905 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:11:01.149911   62905 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:11:01.149983   62905 ssh_runner.go:195] Run: crio --version
	I0704 00:11:01.183494   62905 ssh_runner.go:195] Run: crio --version
	I0704 00:11:01.221773   62905 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:11:01.223142   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:11:01.226142   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:01.226595   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:01.226626   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:01.227009   62905 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0704 00:11:01.231704   62905 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:01.246258   62905 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:11:01.246373   62905 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:11:01.246414   62905 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:11:01.288814   62905 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:11:01.288885   62905 ssh_runner.go:195] Run: which lz4
	I0704 00:11:01.293591   62905 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0704 00:11:01.298567   62905 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:11:01.298606   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0704 00:10:58.617090   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.116577   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.617087   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:00.117110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:00.617014   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:01.117093   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:01.616271   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:02.116809   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:02.617098   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:03.117166   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.323461   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:01.324078   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:03.824174   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:00.942384   62043 main.go:141] libmachine: (no-preload-317739) Waiting to get IP...
	I0704 00:11:00.943186   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:00.943675   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:00.943756   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:00.943653   64017 retry.go:31] will retry after 249.292607ms: waiting for machine to come up
	I0704 00:11:01.194377   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:01.194895   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:01.194954   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:01.194870   64017 retry.go:31] will retry after 262.613081ms: waiting for machine to come up
	I0704 00:11:01.459428   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:01.460003   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:01.460038   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:01.459944   64017 retry.go:31] will retry after 478.141622ms: waiting for machine to come up
	I0704 00:11:01.939357   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:01.939939   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:01.939974   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:01.939898   64017 retry.go:31] will retry after 536.153389ms: waiting for machine to come up
	I0704 00:11:02.477947   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:02.478481   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:02.478506   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:02.478420   64017 retry.go:31] will retry after 673.23866ms: waiting for machine to come up
	I0704 00:11:03.153142   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:03.153668   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:03.153700   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:03.153615   64017 retry.go:31] will retry after 826.785177ms: waiting for machine to come up
	I0704 00:11:03.981781   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:03.982279   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:03.982313   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:03.982215   64017 retry.go:31] will retry after 834.05017ms: waiting for machine to come up
	I0704 00:11:04.817689   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:04.818294   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:04.818323   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:04.818249   64017 retry.go:31] will retry after 1.153846982s: waiting for machine to come up
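The interleaved 62043 lines show libmachine polling libvirt for the no-preload VM's DHCP lease, sleeping a little longer after each miss ("will retry after ..."). The Go sketch below reproduces that retry-with-growing-backoff shape; retryWithBackoff and its parameters are illustrative, not libmachine's implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or the deadline passes,
// sleeping a randomized, growing delay between attempts - the same shape as
// the "will retry after ..." lines above.
func retryWithBackoff(fn func() error, deadline time.Duration) error {
	start := time.Now()
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d failed (%v), will retry after %s\n", attempt, err, sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay each round
	}
}

func main() {
	calls := 0
	err := retryWithBackoff(func() error {
		calls++
		if calls < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("result:", err)
}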
	I0704 00:11:02.979209   62905 crio.go:462] duration metric: took 1.685660087s to copy over tarball
	I0704 00:11:02.979307   62905 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:11:05.406788   62905 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.427439702s)
	I0704 00:11:05.406816   62905 crio.go:469] duration metric: took 2.427578287s to extract the tarball
	I0704 00:11:05.406823   62905 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:11:05.448710   62905 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:11:05.498336   62905 crio.go:514] all images are preloaded for cri-o runtime.
	I0704 00:11:05.498367   62905 cache_images.go:84] Images are preloaded, skipping loading
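Because /preloaded.tar.lz4 did not exist on the guest, the ~395 MB preload tarball was copied over and unpacked into /var with tar -I lz4, after which crictl confirms every image for v1.30.2 is present. A small Go sketch of that check-then-extract step follows; ensurePreload is a hypothetical helper and assumes tar and lz4 are on PATH.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensurePreload extracts a preloaded image tarball into destDir if it exists,
// mirroring the stat + "tar --xattrs -I lz4 -C /var -xf" steps in the log.
func ensurePreload(tarball, destDir string) error {
	info, err := os.Stat(tarball)
	if os.IsNotExist(err) {
		return fmt.Errorf("no preload at %s, images must be pulled instead", tarball)
	} else if err != nil {
		return err
	}
	fmt.Printf("found preload %s (%d bytes), extracting into %s\n", tarball, info.Size(), destDir)
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := ensurePreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}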
	I0704 00:11:05.498375   62905 kubeadm.go:928] updating node { 192.168.50.164 8444 v1.30.2 crio true true} ...
	I0704 00:11:05.498487   62905 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-995404 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
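The kubelet drop-in above is generated per profile: only the binary path, --hostname-override and --node-ip change between clusters. The Go text/template sketch below renders a unit of the same shape; the template constant and the hard-coded values mirror this log and are not minikube's real generator.

package main

import (
	"os"
	"text/template"
)

// A kubelet systemd drop-in like the one shown above, with the
// node-specific values substituted at render time.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.30.2/kubelet",
		"NodeName":    "default-k8s-diff-port-995404",
		"NodeIP":      "192.168.50.164",
	})
}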
	I0704 00:11:05.498549   62905 ssh_runner.go:195] Run: crio config
	I0704 00:11:05.552676   62905 cni.go:84] Creating CNI manager for ""
	I0704 00:11:05.552706   62905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:05.552717   62905 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:11:05.552738   62905 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.164 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-995404 NodeName:default-k8s-diff-port-995404 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:11:05.552895   62905 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.164
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-995404"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:11:05.552966   62905 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:11:05.564067   62905 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:11:05.564149   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:11:05.574991   62905 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0704 00:11:05.597644   62905 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:11:05.619456   62905 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
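The kubeadm config generated above and copied to /var/tmp/minikube/kubeadm.yaml.new chains InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file; the per-profile inputs are the advertise address, bind port (8444 here), node name and Kubernetes version. The reduced Go rendering below covers only the first two documents, purely to illustrate how those values slot in.

package main

import (
	"os"
	"text/template"
)

// A reduced version of the kubeadm config shown above, with the per-profile
// values pulled out as template fields (sketch, not minikube's generator).
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	_ = t.Execute(os.Stdout, map[string]string{
		"NodeIP":            "192.168.50.164",
		"APIServerPort":     "8444",
		"NodeName":          "default-k8s-diff-port-995404",
		"KubernetesVersion": "v1.30.2",
	})
}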
	I0704 00:11:05.640655   62905 ssh_runner.go:195] Run: grep 192.168.50.164	control-plane.minikube.internal$ /etc/hosts
	I0704 00:11:05.644975   62905 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:05.659570   62905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:05.800862   62905 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:11:05.821044   62905 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404 for IP: 192.168.50.164
	I0704 00:11:05.821068   62905 certs.go:194] generating shared ca certs ...
	I0704 00:11:05.821087   62905 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:05.821258   62905 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:11:05.821312   62905 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:11:05.821324   62905 certs.go:256] generating profile certs ...
	I0704 00:11:05.821424   62905 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/client.key
	I0704 00:11:05.821496   62905 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/apiserver.key.4c35c707
	I0704 00:11:05.821547   62905 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/proxy-client.key
	I0704 00:11:05.821689   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:11:05.821729   62905 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:11:05.821741   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:11:05.821773   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:11:05.821800   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:11:05.821831   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:11:05.821893   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:11:05.822753   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:11:05.867477   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:11:05.914405   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:11:05.952321   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:11:05.989578   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0704 00:11:06.031270   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:11:06.067171   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:11:06.096850   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0704 00:11:06.127959   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:11:06.156780   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:11:06.187472   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:11:06.216078   62905 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:11:06.239490   62905 ssh_runner.go:195] Run: openssl version
	I0704 00:11:06.246358   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:11:06.259420   62905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:11:06.266320   62905 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:11:06.266394   62905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:11:06.273098   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:11:06.285864   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:11:06.298505   62905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:06.303642   62905 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:06.303734   62905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:06.310459   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:11:06.325238   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:11:06.342534   62905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:11:06.349585   62905 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:11:06.349659   62905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:11:06.358043   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:11:06.374741   62905 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:11:06.380246   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:11:06.387593   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:11:06.394954   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:11:06.402600   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:11:06.409731   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:11:06.416688   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
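The openssl x509 ... -checkend 86400 calls above verify that none of the existing control-plane certificates expire within the next 24 hours; only then does minikube attempt a cluster restart instead of re-provisioning. A Go equivalent of that single check using crypto/x509 is sketched below; the file paths are copied from the log and the helper name is illustrative.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d - the same check as "openssl x509 -checkend 86400".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, "expires within 24h:", soon, "err:", err)
	}
}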
	I0704 00:11:06.423435   62905 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:11:06.423559   62905 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:11:06.423620   62905 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:06.470763   62905 cri.go:89] found id: ""
	I0704 00:11:06.470846   62905 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:11:06.482587   62905 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:11:06.482611   62905 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:11:06.482617   62905 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:11:06.482667   62905 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:11:06.497553   62905 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:11:06.498625   62905 kubeconfig.go:125] found "default-k8s-diff-port-995404" server: "https://192.168.50.164:8444"
	I0704 00:11:06.500884   62905 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:11:06.514955   62905 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.164
	I0704 00:11:06.514990   62905 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:11:06.515004   62905 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:11:06.515063   62905 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:06.560079   62905 cri.go:89] found id: ""
	I0704 00:11:06.560153   62905 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:11:06.579839   62905 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:11:06.591817   62905 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:11:06.591845   62905 kubeadm.go:156] found existing configuration files:
	
	I0704 00:11:06.591939   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0704 00:11:06.602820   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:11:06.602891   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:11:06.615114   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0704 00:11:06.626812   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:11:06.626906   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:11:06.638990   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0704 00:11:06.650344   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:11:06.650412   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:11:06.662736   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0704 00:11:06.673392   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:11:06.673468   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:11:06.684908   62905 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:11:06.696008   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:06.827071   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
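Since the stale-config check found no kubeconfigs worth keeping, the restart path re-runs individual kubeadm init phases against /var/tmp/minikube/kubeadm.yaml - certs and kubeconfig here, then kubelet-start, control-plane and etcd a few lines further down. The Go sketch below drives such a phase sequence with the version-pinned binaries prepended to PATH; runInitPhases is illustrative, not minikube's code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runInitPhases runs kubeadm init phases in order, prepending the
// version-pinned binary dir to PATH before each invocation.
func runInitPhases(binDir, config string, phases []string) error {
	for _, phase := range phases {
		cmd := exec.Command("/bin/bash", "-c",
			fmt.Sprintf(`sudo env PATH=%q kubeadm init phase %s --config %s`,
				binDir+":"+os.Getenv("PATH"), phase, config))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("phase %q failed: %w", phase, err)
		}
	}
	return nil
}

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	if err := runInitPhases("/var/lib/minikube/binaries/v1.30.2", "/var/tmp/minikube/kubeadm.yaml", phases); err != nil {
		fmt.Println(err)
	}
}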
	I0704 00:11:03.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:04.117137   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:04.616945   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:05.117085   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:05.616894   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.116767   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.616746   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:07.116615   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:07.616302   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:08.116699   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.324083   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:08.832523   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:05.974211   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:05.974953   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:05.974981   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:05.974853   64017 retry.go:31] will retry after 1.513213206s: waiting for machine to come up
	I0704 00:11:07.489878   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:07.490415   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:07.490447   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:07.490366   64017 retry.go:31] will retry after 1.861027199s: waiting for machine to come up
	I0704 00:11:09.353265   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:09.353877   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:09.353909   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:09.353788   64017 retry.go:31] will retry after 2.788986438s: waiting for machine to come up
	I0704 00:11:07.860520   62905 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.033413742s)
	I0704 00:11:07.860555   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:08.112931   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:08.199561   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:08.297827   62905 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:11:08.297919   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:08.798666   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.299001   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.326939   62905 api_server.go:72] duration metric: took 1.029121669s to wait for apiserver process to appear ...
	I0704 00:11:09.326980   62905 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:11:09.327006   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:09.327687   62905 api_server.go:269] stopped: https://192.168.50.164:8444/healthz: Get "https://192.168.50.164:8444/healthz": dial tcp 192.168.50.164:8444: connect: connection refused
	I0704 00:11:09.827140   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:12.356043   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:11:12.356074   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:11:12.356090   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:12.431816   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:12.431868   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:08.617011   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.116544   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.617105   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:10.117154   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:10.616678   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:11.117137   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:11.617077   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.116897   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.617090   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:13.116877   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.827129   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:12.833217   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:12.833244   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:13.327458   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:13.335182   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:13.335216   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:13.827833   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:13.833899   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 200:
	ok
	I0704 00:11:13.845708   62905 api_server.go:141] control plane version: v1.30.2
	I0704 00:11:13.845742   62905 api_server.go:131] duration metric: took 4.518754781s to wait for apiserver health ...
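The healthz polling above shows the normal progression for a freshly restarted apiserver: connection refused while the static pod starts, 403 for the anonymous probe before RBAC is bootstrapped, 500 while poststarthooks (rbac/bootstrap-roles, bootstrap-controller, ...) finish, and finally 200 after roughly 4.5s. A Go sketch of such a poll loop follows; it skips TLS verification for brevity, which is an assumption - real callers should trust the cluster CA. The 500ms sleep matches the cadence visible in the log timestamps.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it returns 200
// or the timeout expires, mirroring the refused -> 403 -> 500 -> 200
// progression above. TLS verification is skipped in this sketch only.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("not reachable yet:", err)
		} else {
			resp.Body.Close()
			fmt.Printf("%s returned %d\n", url, resp.StatusCode)
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.164:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}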
	I0704 00:11:13.845754   62905 cni.go:84] Creating CNI manager for ""
	I0704 00:11:13.845763   62905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:13.847527   62905 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:11:11.322070   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:13.325898   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:13.848990   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:11:13.866061   62905 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
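With the apiserver healthy, the recommended bridge CNI is written as /etc/cni/net.d/1-k8s.conflist (496 bytes per the log; its contents are not printed). The Go snippet below writes a typical bridge+portmap conflist of that kind, purely to illustrate the file format - it is not the exact file minikube generates.

package main

import (
	"fmt"
	"os"
)

// A typical bridge CNI conflist of the kind written to /etc/cni/net.d above.
// The exact contents minikube uses are not shown in the log; this is only an
// illustration of the format.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
	}
}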
	I0704 00:11:13.895651   62905 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:11:13.907155   62905 system_pods.go:59] 8 kube-system pods found
	I0704 00:11:13.907202   62905 system_pods.go:61] "coredns-7db6d8ff4d-jmq4s" [f9725f92-7635-4111-bf63-66dbef0155b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0704 00:11:13.907214   62905 system_pods.go:61] "etcd-default-k8s-diff-port-995404" [d5065c53-cda8-4c79-9d88-de10341356a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0704 00:11:13.907225   62905 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-995404" [81395c0d-d19c-4d3d-8935-c35a9507abdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0704 00:11:13.907236   62905 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-995404" [36828d21-843b-409c-a0b3-60293cb50c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0704 00:11:13.907245   62905 system_pods.go:61] "kube-proxy-pplqq" [3b74a8c2-1e91-449d-9be9-8891459dccbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0704 00:11:13.907255   62905 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-995404" [e87cc576-7a8d-43dd-9778-b13d751976be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0704 00:11:13.907267   62905 system_pods.go:61] "metrics-server-569cc877fc-v8qw2" [d6a67fb7-5004-4c93-9023-fc470f786ae9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:11:13.907278   62905 system_pods.go:61] "storage-provisioner" [3adc3ff6-282f-4f53-879f-c73d71c76b74] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0704 00:11:13.907290   62905 system_pods.go:74] duration metric: took 11.616438ms to wait for pod list to return data ...
	I0704 00:11:13.907304   62905 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:11:13.911071   62905 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:11:13.911108   62905 node_conditions.go:123] node cpu capacity is 2
	I0704 00:11:13.911121   62905 node_conditions.go:105] duration metric: took 3.808665ms to run NodePressure ...
	I0704 00:11:13.911142   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:14.227778   62905 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0704 00:11:14.232972   62905 kubeadm.go:733] kubelet initialised
	I0704 00:11:14.232999   62905 kubeadm.go:734] duration metric: took 5.196343ms waiting for restarted kubelet to initialise ...
	I0704 00:11:14.233008   62905 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:11:14.239587   62905 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.248503   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.248527   62905 pod_ready.go:81] duration metric: took 8.915991ms for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.248536   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.248546   62905 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.252808   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.252833   62905 pod_ready.go:81] duration metric: took 4.278735ms for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.252844   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.252850   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.257839   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.257865   62905 pod_ready.go:81] duration metric: took 5.008527ms for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.257874   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.257881   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.300453   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.300496   62905 pod_ready.go:81] duration metric: took 42.606835ms for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.300514   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.300532   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.699049   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-proxy-pplqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.699081   62905 pod_ready.go:81] duration metric: took 398.532074ms for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.699091   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-proxy-pplqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.699098   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:15.099751   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.099781   62905 pod_ready.go:81] duration metric: took 400.673785ms for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:15.099794   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.099802   62905 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:15.499381   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.499415   62905 pod_ready.go:81] duration metric: took 399.604282ms for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:15.499430   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.499440   62905 pod_ready.go:38] duration metric: took 1.266419771s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
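
Each pod_ready.go entry above is a poll of one system-critical pod until its Ready condition turns True (or, as here, the wait is cut short because the node itself is not Ready). A self-contained sketch of that polling loop with client-go is shown below; the pod name and kubeconfig path come from this log, while the helper names and the 2-second poll interval are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls a pod until it is Ready or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18998-9396/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// coredns-7db6d8ff4d-jmq4s is the first pod waited on in the log above.
	if err := waitPodReady(context.TODO(), cs, "kube-system", "coredns-7db6d8ff4d-jmq4s", 4*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pod is Ready")
}
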
	I0704 00:11:15.499472   62905 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:11:15.512486   62905 ops.go:34] apiserver oom_adj: -16
	I0704 00:11:15.512519   62905 kubeadm.go:591] duration metric: took 9.029896614s to restartPrimaryControlPlane
	I0704 00:11:15.512530   62905 kubeadm.go:393] duration metric: took 9.089103352s to StartCluster
	I0704 00:11:15.512545   62905 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:15.512620   62905 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:11:15.514491   62905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:15.514770   62905 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:11:15.514886   62905 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:11:15.514995   62905 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-995404"
	I0704 00:11:15.515051   62905 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-995404"
	I0704 00:11:15.515054   62905 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	W0704 00:11:15.515058   62905 addons.go:243] addon storage-provisioner should already be in state true
	I0704 00:11:15.515045   62905 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-995404"
	I0704 00:11:15.515098   62905 host.go:66] Checking if "default-k8s-diff-port-995404" exists ...
	I0704 00:11:15.515108   62905 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-995404"
	I0704 00:11:15.515100   62905 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-995404"
	I0704 00:11:15.515176   62905 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-995404"
	W0704 00:11:15.515196   62905 addons.go:243] addon metrics-server should already be in state true
	I0704 00:11:15.515258   62905 host.go:66] Checking if "default-k8s-diff-port-995404" exists ...
	I0704 00:11:15.515473   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.515517   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.515554   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.515521   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.515731   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.515773   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.517021   62905 out.go:177] * Verifying Kubernetes components...
	I0704 00:11:15.518682   62905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:15.532184   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0704 00:11:15.532716   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.533287   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.533318   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.533688   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.533710   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36907
	I0704 00:11:15.533894   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.534143   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.534747   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.534774   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.535162   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.535835   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.535895   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.536774   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37887
	I0704 00:11:15.537162   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.537690   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.537702   62905 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-995404"
	W0704 00:11:15.537715   62905 addons.go:243] addon default-storageclass should already be in state true
	I0704 00:11:15.537719   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.537743   62905 host.go:66] Checking if "default-k8s-diff-port-995404" exists ...
	I0704 00:11:15.538134   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.538147   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.538211   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.538756   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.538789   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.554800   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44851
	I0704 00:11:15.554820   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33429
	I0704 00:11:15.555279   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.555417   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.555988   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.556006   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.556255   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.556276   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.556445   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.556628   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.556637   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.556819   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.558057   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38911
	I0704 00:11:15.558381   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.558768   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:11:15.558842   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:11:15.558932   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.558950   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.559179   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.559587   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.559610   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.561573   62905 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:15.561578   62905 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0704 00:11:12.146246   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:12.146817   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:12.146844   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:12.146774   64017 retry.go:31] will retry after 2.705005802s: waiting for machine to come up
	I0704 00:11:14.853545   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:14.854045   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:14.854070   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:14.854001   64017 retry.go:31] will retry after 3.923203683s: waiting for machine to come up
	I0704 00:11:15.563208   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0704 00:11:15.563233   62905 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0704 00:11:15.563259   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:11:15.563282   62905 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:11:15.563297   62905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:11:15.563312   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:11:15.567358   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.567365   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.567758   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:15.567789   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.567823   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:15.567841   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.568374   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:11:15.568472   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:11:15.568596   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:11:15.568652   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:11:15.568744   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:11:15.568833   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:11:15.568853   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:11:15.568955   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:11:15.578317   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40115
	I0704 00:11:15.578737   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.579322   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.579343   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.579673   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.579864   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.582114   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:11:15.582330   62905 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:11:15.582346   62905 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:11:15.582363   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:11:15.585542   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.585917   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:15.585964   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.586130   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:11:15.586317   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:11:15.586503   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:11:15.586677   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:11:15.713704   62905 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:11:15.734147   62905 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-995404" to be "Ready" ...
	I0704 00:11:15.837690   62905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:11:15.858615   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0704 00:11:15.858645   62905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0704 00:11:15.883792   62905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:11:15.904371   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0704 00:11:15.904394   62905 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0704 00:11:15.947164   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:11:15.947205   62905 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0704 00:11:15.976721   62905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:11:16.926851   62905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.089126041s)
	I0704 00:11:16.926885   62905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.043064078s)
	I0704 00:11:16.926909   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.926920   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.926909   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.926989   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.927261   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.927280   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.927290   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.927299   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.927338   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.927382   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.927406   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.927415   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.927423   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.927989   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.928013   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.928022   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.928040   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.928118   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.928187   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.935023   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.935043   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.935367   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.935387   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.963483   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.963508   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.963834   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.963857   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.963866   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.963898   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.964130   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.964181   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.964198   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.964220   62905 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-995404"
	I0704 00:11:16.966338   62905 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0704 00:11:16.967695   62905 addons.go:510] duration metric: took 1.45282727s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
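
After the addon manifests are applied, the "Verifying addon metrics-server=true" step is essentially a status check on the metrics-server Deployment in kube-system. A short client-go sketch of such a verification follows; it is not minikube's own addons.go logic, just an assumed equivalent using the kubeconfig path from this log.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18998-9396/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Report how many metrics-server replicas are actually available.
	dep, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("metrics-server: %d/%d replicas available\n", dep.Status.AvailableReplicas, dep.Status.Replicas)
}
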
	I0704 00:11:13.616762   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:14.116987   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:14.616559   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.117027   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.617171   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:16.117120   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:16.616978   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:17.116571   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:17.616537   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:18.117113   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.822595   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:18.323016   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:18.782030   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.782543   62043 main.go:141] libmachine: (no-preload-317739) Found IP for machine: 192.168.61.109
	I0704 00:11:18.782568   62043 main.go:141] libmachine: (no-preload-317739) Reserving static IP address...
	I0704 00:11:18.782585   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has current primary IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.782953   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "no-preload-317739", mac: "52:54:00:2a:87:12", ip: "192.168.61.109"} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.782982   62043 main.go:141] libmachine: (no-preload-317739) DBG | skip adding static IP to network mk-no-preload-317739 - found existing host DHCP lease matching {name: "no-preload-317739", mac: "52:54:00:2a:87:12", ip: "192.168.61.109"}
	I0704 00:11:18.782996   62043 main.go:141] libmachine: (no-preload-317739) Reserved static IP address: 192.168.61.109
	I0704 00:11:18.783014   62043 main.go:141] libmachine: (no-preload-317739) Waiting for SSH to be available...
	I0704 00:11:18.783031   62043 main.go:141] libmachine: (no-preload-317739) DBG | Getting to WaitForSSH function...
	I0704 00:11:18.785230   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.785559   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.785593   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.785687   62043 main.go:141] libmachine: (no-preload-317739) DBG | Using SSH client type: external
	I0704 00:11:18.785742   62043 main.go:141] libmachine: (no-preload-317739) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa (-rw-------)
	I0704 00:11:18.785770   62043 main.go:141] libmachine: (no-preload-317739) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:11:18.785801   62043 main.go:141] libmachine: (no-preload-317739) DBG | About to run SSH command:
	I0704 00:11:18.785811   62043 main.go:141] libmachine: (no-preload-317739) DBG | exit 0
	I0704 00:11:18.908065   62043 main.go:141] libmachine: (no-preload-317739) DBG | SSH cmd err, output: <nil>: 
	I0704 00:11:18.908449   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetConfigRaw
	I0704 00:11:18.909142   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:18.911622   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.912075   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.912125   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.912371   62043 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/config.json ...
	I0704 00:11:18.912581   62043 machine.go:94] provisionDockerMachine start ...
	I0704 00:11:18.912599   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:18.912796   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:18.915233   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.915675   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.915709   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.915971   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:18.916175   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:18.916345   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:18.916488   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:18.916689   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:18.916853   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:18.916864   62043 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:11:19.024629   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:11:19.024661   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:11:19.024913   62043 buildroot.go:166] provisioning hostname "no-preload-317739"
	I0704 00:11:19.024929   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:11:19.025143   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.028262   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.028629   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.028653   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.028838   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.029042   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.029233   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.029381   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.029528   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:19.029696   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:19.029708   62043 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-317739 && echo "no-preload-317739" | sudo tee /etc/hostname
	I0704 00:11:19.148642   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-317739
	
	I0704 00:11:19.148679   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.151295   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.151766   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.151788   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.152030   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.152247   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.152438   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.152556   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.152733   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:19.152937   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:19.152953   62043 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-317739' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-317739/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-317739' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:11:19.267475   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:11:19.267510   62043 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:11:19.267541   62043 buildroot.go:174] setting up certificates
	I0704 00:11:19.267553   62043 provision.go:84] configureAuth start
	I0704 00:11:19.267566   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:11:19.267936   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:19.270884   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.271381   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.271409   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.271619   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.274267   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.274641   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.274665   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.274887   62043 provision.go:143] copyHostCerts
	I0704 00:11:19.274950   62043 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:11:19.274962   62043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:11:19.275030   62043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:11:19.275236   62043 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:11:19.275250   62043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:11:19.275284   62043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:11:19.275360   62043 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:11:19.275367   62043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:11:19.275387   62043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:11:19.275440   62043 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.no-preload-317739 san=[127.0.0.1 192.168.61.109 localhost minikube no-preload-317739]
	I0704 00:11:19.642077   62043 provision.go:177] copyRemoteCerts
	I0704 00:11:19.642133   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:11:19.642154   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.645168   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.645553   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.645582   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.645803   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.646005   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.646189   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.646338   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:19.731637   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:11:19.758538   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0704 00:11:19.783554   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0704 00:11:19.809538   62043 provision.go:87] duration metric: took 541.971127ms to configureAuth
	I0704 00:11:19.809571   62043 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:11:19.809800   62043 config.go:182] Loaded profile config "no-preload-317739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:11:19.809877   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.813528   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.814000   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.814042   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.814213   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.814451   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.814641   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.814831   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.815078   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:19.815287   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:19.815328   62043 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:11:20.098956   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:11:20.098984   62043 machine.go:97] duration metric: took 1.186389847s to provisionDockerMachine
	I0704 00:11:20.098999   62043 start.go:293] postStartSetup for "no-preload-317739" (driver="kvm2")
	I0704 00:11:20.099011   62043 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:11:20.099037   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.099367   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:11:20.099397   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.102274   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.102624   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.102650   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.102870   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.103084   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.103254   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.103394   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:20.187063   62043 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:11:20.192127   62043 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:11:20.192159   62043 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:11:20.192253   62043 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:11:20.192344   62043 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:11:20.192451   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:11:20.202990   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:11:20.231649   62043 start.go:296] duration metric: took 132.636585ms for postStartSetup
	I0704 00:11:20.231689   62043 fix.go:56] duration metric: took 20.60266165s for fixHost
	I0704 00:11:20.231708   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.234708   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.235099   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.235129   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.235376   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.235606   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.235813   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.236042   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.236254   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:20.236447   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:20.236460   62043 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:11:20.340846   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051880.311820466
	
	I0704 00:11:20.340874   62043 fix.go:216] guest clock: 1720051880.311820466
	I0704 00:11:20.340883   62043 fix.go:229] Guest: 2024-07-04 00:11:20.311820466 +0000 UTC Remote: 2024-07-04 00:11:20.23169294 +0000 UTC m=+359.429189168 (delta=80.127526ms)
	I0704 00:11:20.340914   62043 fix.go:200] guest clock delta is within tolerance: 80.127526ms
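
The guest-clock check above boils down to parsing the VM's `date +%s.%N` output over SSH and comparing it against the host-side timestamp, accepting only a small drift. The standalone sketch below reproduces the arithmetic with the exact values from this log (delta 80.127526ms); the parsing helper and the one-second tolerance are assumptions for illustration, not minikube's fix.go implementation.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1] + strings.Repeat("0", 9) // pad, then keep nanosecond precision
		nsec, err = strconv.ParseInt(frac[:9], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	// Guest and remote timestamps taken from the log lines above.
	guest, err := parseGuestClock("1720051880.311820466")
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 7, 4, 0, 11, 20, 231692940, time.UTC)
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	// A one-second tolerance is assumed here purely for illustration.
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= time.Second)
}
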
	I0704 00:11:20.340938   62043 start.go:83] releasing machines lock for "no-preload-317739", held for 20.711925187s
	I0704 00:11:20.340963   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.341225   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:20.343787   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.344146   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.344188   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.344360   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.344810   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.344988   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.345061   62043 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:11:20.345094   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.345221   62043 ssh_runner.go:195] Run: cat /version.json
	I0704 00:11:20.345247   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.347703   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.347924   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.348121   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.348150   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.348307   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.348396   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.348423   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.348487   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.348562   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.348645   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.348706   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.348764   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:20.348864   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.348994   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:20.425023   62043 ssh_runner.go:195] Run: systemctl --version
	I0704 00:11:20.456031   62043 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:11:20.601693   62043 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:11:20.609524   62043 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:11:20.609617   62043 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:11:20.628076   62043 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:11:20.628105   62043 start.go:494] detecting cgroup driver to use...
	I0704 00:11:20.628180   62043 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:11:20.646749   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:11:20.663882   62043 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:11:20.663954   62043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:11:20.679371   62043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:11:20.697131   62043 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:11:20.820892   62043 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:11:20.978815   62043 docker.go:233] disabling docker service ...
	I0704 00:11:20.978893   62043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:11:21.003649   62043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:11:21.018708   62043 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:11:21.183699   62043 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:11:21.356015   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:11:21.371775   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:11:21.397901   62043 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:11:21.397977   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.410088   62043 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:11:21.410175   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.422267   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.433879   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.446464   62043 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:11:21.459090   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.474867   62043 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.497013   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
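The run of sed commands above rewrites CRI-O's drop-in config to the pause image and cgroup driver that the kubeadm setup below expects. A quick way to confirm the result on the guest (sketch, assuming the stock 02-crio.conf drop-in):

    # Expected values after the edits above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf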
	I0704 00:11:21.508678   62043 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:11:21.520003   62043 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:11:21.520074   62043 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:11:21.535778   62043 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:11:21.546698   62043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:21.707980   62043 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:11:21.855519   62043 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:11:21.855578   62043 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:11:21.861422   62043 start.go:562] Will wait 60s for crictl version
	I0704 00:11:21.861487   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:21.865898   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:11:21.909151   62043 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:11:21.909231   62043 ssh_runner.go:195] Run: crio --version
	I0704 00:11:21.940532   62043 ssh_runner.go:195] Run: crio --version
	I0704 00:11:21.971921   62043 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:11:17.738168   62905 node_ready.go:53] node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:19.738513   62905 node_ready.go:53] node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:22.238523   62905 node_ready.go:53] node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:18.617104   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:19.116325   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:19.616537   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.116518   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.616709   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:21.117177   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:21.617150   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:22.116980   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:22.616530   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:23.116838   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.824014   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:23.322845   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:21.973345   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:21.976425   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:21.976913   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:21.976941   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:21.977325   62043 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0704 00:11:21.982313   62043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:21.996098   62043 kubeadm.go:877] updating cluster {Name:no-preload-317739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.109 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:11:21.996252   62043 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:11:21.996296   62043 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:11:22.032178   62043 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:11:22.032210   62043 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.2 registry.k8s.io/kube-controller-manager:v1.30.2 registry.k8s.io/kube-scheduler:v1.30.2 registry.k8s.io/kube-proxy:v1.30.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0704 00:11:22.032271   62043 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:22.032305   62043 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.032319   62043 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.032373   62043 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0704 00:11:22.032399   62043 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.032400   62043 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.032375   62043 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.032429   62043 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.033814   62043 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0704 00:11:22.033826   62043 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.033847   62043 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.033812   62043 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.033815   62043 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.033912   62043 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:22.034052   62043 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.034138   62043 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.199984   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.209671   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.236796   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.240953   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.244893   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.260957   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.277666   62043 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.2" does not exist at hash "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe" in container runtime
	I0704 00:11:22.277712   62043 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.277764   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.311908   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0704 00:11:22.314095   62043 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0704 00:11:22.314137   62043 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.314190   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.400926   62043 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0704 00:11:22.400964   62043 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.401011   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.401043   62043 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.2" does not exist at hash "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772" in container runtime
	I0704 00:11:22.401080   62043 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.401121   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.401193   62043 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.2" does not exist at hash "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974" in container runtime
	I0704 00:11:22.401219   62043 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.401255   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.423931   62043 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.2" does not exist at hash "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940" in container runtime
	I0704 00:11:22.423977   62043 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.424024   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.424028   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.525952   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.525991   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.525961   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.526054   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.526136   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.526195   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2
	I0704 00:11:22.526285   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0704 00:11:22.649104   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0704 00:11:22.649109   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2
	I0704 00:11:22.649215   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2
	I0704 00:11:22.649248   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.2
	I0704 00:11:22.649268   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0704 00:11:22.649283   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0704 00:11:22.649217   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0704 00:11:22.649319   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0704 00:11:22.649349   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.2 (exists)
	I0704 00:11:22.649362   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0704 00:11:22.649386   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0704 00:11:22.649414   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2
	I0704 00:11:22.649486   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0704 00:11:22.654629   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.2 (exists)
	I0704 00:11:22.661840   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.2 (exists)
	I0704 00:11:22.919526   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:25.779714   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2: (3.130310457s)
	I0704 00:11:25.779744   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2 from cache
	I0704 00:11:25.779765   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.2
	I0704 00:11:25.779776   62043 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (3.130431638s)
	I0704 00:11:25.779796   62043 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.2: (3.13049417s)
	I0704 00:11:25.779816   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0704 00:11:25.779817   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2
	I0704 00:11:25.779827   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.2 (exists)
	I0704 00:11:25.779856   62043 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (3.130541061s)
	I0704 00:11:25.779869   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0704 00:11:25.779908   62043 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.860354689s)
	I0704 00:11:25.779936   62043 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0704 00:11:25.779958   62043 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:25.779991   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:23.248630   62905 node_ready.go:49] node "default-k8s-diff-port-995404" has status "Ready":"True"
	I0704 00:11:23.248671   62905 node_ready.go:38] duration metric: took 7.514485634s for node "default-k8s-diff-port-995404" to be "Ready" ...
	I0704 00:11:23.248683   62905 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:11:23.257650   62905 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.272673   62905 pod_ready.go:92] pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:23.272706   62905 pod_ready.go:81] duration metric: took 15.025018ms for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.272730   62905 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.277707   62905 pod_ready.go:92] pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:23.277738   62905 pod_ready.go:81] duration metric: took 4.999575ms for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.277758   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.282447   62905 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:23.282471   62905 pod_ready.go:81] duration metric: took 4.705643ms for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.282481   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.790312   62905 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:24.790337   62905 pod_ready.go:81] duration metric: took 1.507850095s for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.790346   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.837961   62905 pod_ready.go:92] pod "kube-proxy-pplqq" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:24.837985   62905 pod_ready.go:81] duration metric: took 47.632749ms for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.837994   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:25.238771   62905 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:25.238800   62905 pod_ready.go:81] duration metric: took 400.798382ms for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:25.238814   62905 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:27.246820   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:23.616811   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:24.117212   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:24.616915   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.117183   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.616495   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:26.117078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:26.617000   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:27.117057   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:27.616823   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:28.116508   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.326734   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:27.823765   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:27.940196   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2: (2.160353743s)
	I0704 00:11:27.940226   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2 from cache
	I0704 00:11:27.940234   62043 ssh_runner.go:235] Completed: which crictl: (2.160222414s)
	I0704 00:11:27.940320   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:27.940253   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0704 00:11:27.940393   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0704 00:11:27.979809   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0704 00:11:27.979954   62043 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0704 00:11:29.403572   62043 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.423593257s)
	I0704 00:11:29.403607   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0704 00:11:29.403699   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2: (1.46328757s)
	I0704 00:11:29.403725   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2 from cache
	I0704 00:11:29.403761   62043 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0704 00:11:29.403822   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0704 00:11:29.247499   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:31.750339   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:28.616737   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:29.117100   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:29.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.117145   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:31.116945   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:31.616330   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:32.117101   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:32.616616   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:33.116964   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.322707   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:32.323955   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:33.202513   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.798664869s)
	I0704 00:11:33.202547   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0704 00:11:33.202573   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0704 00:11:33.202627   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0704 00:11:35.468074   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2: (2.26542461s)
	I0704 00:11:35.468099   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2 from cache
	I0704 00:11:35.468118   62043 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0704 00:11:35.468165   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0704 00:11:34.246217   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:36.246836   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:33.617132   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.117094   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.616914   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:35.117109   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:35.617095   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:36.117232   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:36.617221   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:37.117109   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:37.617116   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:38.116462   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.324255   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:36.823008   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:38.823183   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:37.443636   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.975448204s)
	I0704 00:11:37.443672   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0704 00:11:37.443706   62043 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0704 00:11:37.443759   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0704 00:11:38.405813   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0704 00:11:38.405859   62043 cache_images.go:123] Successfully loaded all cached images
	I0704 00:11:38.405868   62043 cache_images.go:92] duration metric: took 16.373643393s to LoadCachedImages
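Because this no-preload profile has no preload tarball, the images are transferred one at a time: each tarball cached under .minikube/cache/images is copied to /var/lib/minikube/images on the guest (skipped above, since the files already exist) and then loaded into CRI-O's image store with podman. A condensed sketch of that flow for a single image, assuming the cached tarball is already on the node:

    IMG=registry.k8s.io/kube-apiserver:v1.30.2
    TAR=/var/lib/minikube/images/kube-apiserver_v1.30.2
    # If the runtime does not already hold the expected image, drop any stale tag
    # and load the cached tarball instead of pulling from a registry.
    if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
        sudo /usr/bin/crictl rmi "$IMG" 2>/dev/null || true
        sudo podman load -i "$TAR"
    fi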
	I0704 00:11:38.405886   62043 kubeadm.go:928] updating node { 192.168.61.109 8443 v1.30.2 crio true true} ...
	I0704 00:11:38.406011   62043 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-317739 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:11:38.406077   62043 ssh_runner.go:195] Run: crio config
	I0704 00:11:38.452523   62043 cni.go:84] Creating CNI manager for ""
	I0704 00:11:38.452552   62043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:38.452564   62043 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:11:38.452585   62043 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.109 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-317739 NodeName:no-preload-317739 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:11:38.452729   62043 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-317739"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:11:38.452788   62043 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:11:38.463737   62043 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:11:38.463815   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:11:38.473969   62043 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0704 00:11:38.492719   62043 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:11:38.510951   62043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
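That writes the rendered kubeadm config shown above to /var/tmp/minikube/kubeadm.yaml.new on the guest; it is promoted to kubeadm.yaml just before the init phases run below. An optional sanity check of the rendered file (sketch; assumes this kubeadm build ships the "config validate" subcommand):

    # Validate the generated InitConfiguration/ClusterConfiguration/KubeletConfiguration
    # documents without touching the cluster.
    sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new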
	I0704 00:11:38.530396   62043 ssh_runner.go:195] Run: grep 192.168.61.109	control-plane.minikube.internal$ /etc/hosts
	I0704 00:11:38.534736   62043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:38.548662   62043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:38.668693   62043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:11:38.686552   62043 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739 for IP: 192.168.61.109
	I0704 00:11:38.686580   62043 certs.go:194] generating shared ca certs ...
	I0704 00:11:38.686601   62043 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:38.686762   62043 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:11:38.686815   62043 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:11:38.686830   62043 certs.go:256] generating profile certs ...
	I0704 00:11:38.686955   62043 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/client.key
	I0704 00:11:38.687015   62043 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/apiserver.key.fbaaa8e5
	I0704 00:11:38.687048   62043 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/proxy-client.key
	I0704 00:11:38.687185   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:11:38.687241   62043 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:11:38.687253   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:11:38.687283   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:11:38.687310   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:11:38.687336   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:11:38.687384   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:11:38.688258   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:11:38.731211   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:11:38.769339   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:11:38.803861   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:11:38.856375   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0704 00:11:38.903970   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0704 00:11:38.933988   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:11:38.962742   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0704 00:11:38.990067   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:11:39.017654   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:11:39.044418   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:11:39.073061   62043 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:11:39.091979   62043 ssh_runner.go:195] Run: openssl version
	I0704 00:11:39.098299   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:11:39.110043   62043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:39.115156   62043 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:39.115229   62043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:39.122107   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:11:39.134113   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:11:39.145947   62043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:11:39.151296   62043 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:11:39.151367   62043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:11:39.158116   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:11:39.170555   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:11:39.182771   62043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:11:39.187922   62043 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:11:39.187980   62043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:11:39.194397   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:11:39.206665   62043 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:11:39.212352   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:11:39.219422   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:11:39.226488   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:11:39.233503   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:11:39.241906   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:11:39.249915   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
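Each openssl call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is what would flag that cert for regeneration rather than reuse. The same check by hand (sketch):

    # Exit status 0: valid for at least another 24h; non-zero: expires sooner.
    sudo openssl x509 -noout -checkend 86400 \
        -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
        && echo "still valid for 24h" || echo "expires within 24h"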
	I0704 00:11:39.256813   62043 kubeadm.go:391] StartCluster: {Name:no-preload-317739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.109 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:11:39.256922   62043 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:11:39.256977   62043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:39.303203   62043 cri.go:89] found id: ""
	I0704 00:11:39.303281   62043 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:11:39.315407   62043 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:11:39.315446   62043 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:11:39.315454   62043 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:11:39.315508   62043 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:11:39.327630   62043 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:11:39.328741   62043 kubeconfig.go:125] found "no-preload-317739" server: "https://192.168.61.109:8443"
	I0704 00:11:39.330937   62043 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:11:39.341998   62043 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.109
	I0704 00:11:39.342043   62043 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:11:39.342054   62043 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:11:39.342111   62043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:39.388325   62043 cri.go:89] found id: ""
	I0704 00:11:39.388388   62043 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:11:39.408800   62043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:11:39.419600   62043 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:11:39.419627   62043 kubeadm.go:156] found existing configuration files:
	
	I0704 00:11:39.419679   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:11:39.429630   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:11:39.429685   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:11:39.440630   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:11:39.451260   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:11:39.451331   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:11:39.462847   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:11:39.473571   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:11:39.473636   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:11:39.484558   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:11:39.494914   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:11:39.494983   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:11:39.505423   62043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:11:39.517115   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:39.634364   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:40.407653   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:40.607831   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:40.692358   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:38.746247   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:41.244978   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:38.616739   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:39.117077   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:39.616185   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:40.117134   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:40.616879   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.116543   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.616267   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:42.117061   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:42.617080   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:43.117099   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.323333   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:43.823117   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:40.848560   62043 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:11:40.848652   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.349180   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.849767   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.870137   62043 api_server.go:72] duration metric: took 1.021586191s to wait for apiserver process to appear ...
	I0704 00:11:41.870167   62043 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:11:41.870195   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:41.870657   62043 api_server.go:269] stopped: https://192.168.61.109:8443/healthz: Get "https://192.168.61.109:8443/healthz": dial tcp 192.168.61.109:8443: connect: connection refused
	I0704 00:11:42.371347   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:44.502396   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:11:44.502439   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:11:44.502477   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:44.536593   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:11:44.536636   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:11:44.870429   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:44.877522   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:44.877559   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:45.371097   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:45.375932   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:45.375970   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:45.870776   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:45.880030   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 200:
	ok
	I0704 00:11:45.895702   62043 api_server.go:141] control plane version: v1.30.2
	I0704 00:11:45.895729   62043 api_server.go:131] duration metric: took 4.025556366s to wait for apiserver health ...
	I0704 00:11:45.895737   62043 cni.go:84] Creating CNI manager for ""
	I0704 00:11:45.895743   62043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:45.897406   62043 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:11:43.245949   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:45.747887   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:43.616868   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:44.117083   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:44.617057   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:45.116941   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:45.617066   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:46.117210   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:46.617116   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:47.116404   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:47.616609   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:48.116518   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:48.116611   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:48.159432   62670 cri.go:89] found id: ""
	I0704 00:11:48.159464   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.159477   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:48.159486   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:48.159553   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:48.199101   62670 cri.go:89] found id: ""
	I0704 00:11:48.199136   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.199144   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:48.199152   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:48.199208   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:48.238058   62670 cri.go:89] found id: ""
	I0704 00:11:48.238079   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.238087   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:48.238092   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:48.238145   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:46.322861   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:48.824946   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:45.898725   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:11:45.923585   62043 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0704 00:11:45.943430   62043 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:11:45.958774   62043 system_pods.go:59] 8 kube-system pods found
	I0704 00:11:45.958804   62043 system_pods.go:61] "coredns-7db6d8ff4d-pvtv9" [f03f871e-3b09-4fbb-96e5-3e71712dd2fb] Running
	I0704 00:11:45.958811   62043 system_pods.go:61] "etcd-no-preload-317739" [ad364ac9-924e-4e56-90c4-12cbf42c3e83] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0704 00:11:45.958824   62043 system_pods.go:61] "kube-apiserver-no-preload-317739" [2d503950-29dc-47b3-905a-afa85655ca7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0704 00:11:45.958832   62043 system_pods.go:61] "kube-controller-manager-no-preload-317739" [a9cbe158-bf00-478c-8d70-7347e37d68a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0704 00:11:45.958837   62043 system_pods.go:61] "kube-proxy-ffmrg" [c710ce9d-c513-46b1-bcf8-1582d1974861] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0704 00:11:45.958841   62043 system_pods.go:61] "kube-scheduler-no-preload-317739" [07a488b3-7beb-4919-ad57-3f0b55a73bb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0704 00:11:45.958846   62043 system_pods.go:61] "metrics-server-569cc877fc-qn22n" [378b139e-97d6-4dfa-9b56-99dda111ab31] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:11:45.958857   62043 system_pods.go:61] "storage-provisioner" [66ecf6fc-5070-4374-a733-479b9b3cdc0d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0704 00:11:45.958866   62043 system_pods.go:74] duration metric: took 15.413948ms to wait for pod list to return data ...
	I0704 00:11:45.958881   62043 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:11:45.965318   62043 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:11:45.965346   62043 node_conditions.go:123] node cpu capacity is 2
	I0704 00:11:45.965355   62043 node_conditions.go:105] duration metric: took 6.466225ms to run NodePressure ...
	I0704 00:11:45.965371   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:46.324716   62043 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0704 00:11:46.329924   62043 kubeadm.go:733] kubelet initialised
	I0704 00:11:46.329951   62043 kubeadm.go:734] duration metric: took 5.207276ms waiting for restarted kubelet to initialise ...
	I0704 00:11:46.329963   62043 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:11:46.336531   62043 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.341733   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.341758   62043 pod_ready.go:81] duration metric: took 5.197122ms for pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.341769   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.341778   62043 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.348317   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "etcd-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.348341   62043 pod_ready.go:81] duration metric: took 6.552656ms for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.348349   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "etcd-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.348355   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.353840   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "kube-apiserver-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.353864   62043 pod_ready.go:81] duration metric: took 5.503642ms for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.353873   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "kube-apiserver-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.353878   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.362159   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.362205   62043 pod_ready.go:81] duration metric: took 8.315884ms for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.362218   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.362226   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ffmrg" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:47.148496   62043 pod_ready.go:92] pod "kube-proxy-ffmrg" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:47.148533   62043 pod_ready.go:81] duration metric: took 786.291174ms for pod "kube-proxy-ffmrg" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:47.148544   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:49.154946   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:48.246804   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:50.249339   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:48.279472   62670 cri.go:89] found id: ""
	I0704 00:11:48.279510   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.279521   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:48.279529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:48.279598   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:48.316814   62670 cri.go:89] found id: ""
	I0704 00:11:48.316833   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.316843   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:48.316851   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:48.316907   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:48.358196   62670 cri.go:89] found id: ""
	I0704 00:11:48.358230   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.358247   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:48.358252   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:48.358310   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:48.404992   62670 cri.go:89] found id: ""
	I0704 00:11:48.405012   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.405019   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:48.405024   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:48.405092   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:48.444358   62670 cri.go:89] found id: ""
	I0704 00:11:48.444385   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.444393   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:48.444401   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:48.444414   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:48.502426   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:48.502462   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:48.517885   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:48.517915   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:48.654987   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:48.655007   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:48.655022   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:48.719857   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:48.719908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:51.265451   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:51.279847   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:51.279951   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:51.317907   62670 cri.go:89] found id: ""
	I0704 00:11:51.317942   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.317954   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:51.317963   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:51.318036   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:51.358329   62670 cri.go:89] found id: ""
	I0704 00:11:51.358361   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.358370   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:51.358375   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:51.358440   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:51.396389   62670 cri.go:89] found id: ""
	I0704 00:11:51.396418   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.396426   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:51.396433   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:51.396479   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:51.433921   62670 cri.go:89] found id: ""
	I0704 00:11:51.433954   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.433964   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:51.433972   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:51.434030   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:51.472956   62670 cri.go:89] found id: ""
	I0704 00:11:51.472986   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.472997   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:51.473003   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:51.473064   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:51.511241   62670 cri.go:89] found id: ""
	I0704 00:11:51.511269   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.511277   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:51.511283   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:51.511330   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:51.550622   62670 cri.go:89] found id: ""
	I0704 00:11:51.550647   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.550658   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:51.550665   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:51.550717   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:51.595101   62670 cri.go:89] found id: ""
	I0704 00:11:51.595129   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.595141   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:51.595152   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:51.595167   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:51.662852   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:51.662893   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:51.712755   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:51.712800   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:51.774138   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:51.774181   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:51.789895   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:51.789925   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:51.866376   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:51.325312   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:53.821791   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:51.156502   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:53.158089   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:55.656131   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:52.747469   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:55.248313   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:54.367005   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:54.382875   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:54.382938   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:54.419672   62670 cri.go:89] found id: ""
	I0704 00:11:54.419702   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.419713   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:54.419720   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:54.419790   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:54.464134   62670 cri.go:89] found id: ""
	I0704 00:11:54.464161   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.464170   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:54.464175   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:54.464233   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:54.502825   62670 cri.go:89] found id: ""
	I0704 00:11:54.502848   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.502855   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:54.502861   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:54.502913   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:54.542172   62670 cri.go:89] found id: ""
	I0704 00:11:54.542199   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.542207   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:54.542212   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:54.542275   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:54.580488   62670 cri.go:89] found id: ""
	I0704 00:11:54.580517   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.580527   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:54.580534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:54.580600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:54.616925   62670 cri.go:89] found id: ""
	I0704 00:11:54.616950   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.616959   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:54.616965   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:54.617011   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:54.654388   62670 cri.go:89] found id: ""
	I0704 00:11:54.654416   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.654426   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:54.654434   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:54.654492   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:54.697867   62670 cri.go:89] found id: ""
	I0704 00:11:54.697895   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.697905   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:54.697916   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:54.697948   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:54.753899   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:54.753933   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:54.768684   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:54.768708   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:54.843026   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:54.843052   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:54.843069   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:54.920335   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:54.920388   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:57.463384   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:57.479721   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:57.479809   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:57.521845   62670 cri.go:89] found id: ""
	I0704 00:11:57.521931   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.521944   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:57.521952   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:57.522017   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:57.559595   62670 cri.go:89] found id: ""
	I0704 00:11:57.559626   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.559635   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:57.559642   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:57.559704   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:57.600881   62670 cri.go:89] found id: ""
	I0704 00:11:57.600906   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.600917   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:57.600923   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:57.600984   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:57.646031   62670 cri.go:89] found id: ""
	I0704 00:11:57.646059   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.646068   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:57.646073   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:57.646141   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:57.692031   62670 cri.go:89] found id: ""
	I0704 00:11:57.692057   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.692065   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:57.692071   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:57.692118   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:57.730220   62670 cri.go:89] found id: ""
	I0704 00:11:57.730252   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.730263   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:57.730271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:57.730335   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:57.771323   62670 cri.go:89] found id: ""
	I0704 00:11:57.771350   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.771361   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:57.771369   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:57.771441   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:57.808590   62670 cri.go:89] found id: ""
	I0704 00:11:57.808617   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.808625   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:57.808633   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:57.808644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:57.825034   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:57.825063   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:57.906713   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:57.906734   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:57.906746   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:57.988497   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:57.988533   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:58.056774   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:58.056805   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:55.825329   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:58.322936   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:57.657693   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:58.655007   62043 pod_ready.go:92] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:58.655031   62043 pod_ready.go:81] duration metric: took 11.506481518s for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:58.655040   62043 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace to be "Ready" ...
	I0704 00:12:00.662830   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:57.749330   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:00.244482   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:02.245230   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:00.609663   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:00.623785   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:00.623851   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:00.669164   62670 cri.go:89] found id: ""
	I0704 00:12:00.669187   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.669194   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:00.669200   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:00.669253   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:00.710018   62670 cri.go:89] found id: ""
	I0704 00:12:00.710044   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.710052   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:00.710057   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:00.710107   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:00.747778   62670 cri.go:89] found id: ""
	I0704 00:12:00.747803   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.747810   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:00.747815   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:00.747900   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:00.787312   62670 cri.go:89] found id: ""
	I0704 00:12:00.787339   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.787347   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:00.787352   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:00.787399   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:00.828018   62670 cri.go:89] found id: ""
	I0704 00:12:00.828049   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.828061   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:00.828070   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:00.828135   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:00.864695   62670 cri.go:89] found id: ""
	I0704 00:12:00.864723   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.864734   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:00.864742   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:00.864800   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:00.907804   62670 cri.go:89] found id: ""
	I0704 00:12:00.907833   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.907843   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:00.907850   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:00.907928   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:00.951505   62670 cri.go:89] found id: ""
	I0704 00:12:00.951536   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.951547   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:00.951557   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:00.951573   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:00.997067   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:00.997115   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:01.049321   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:01.049356   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:01.066878   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:01.066908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:01.152888   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:01.152919   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:01.152935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:00.823441   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:03.322789   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:03.161704   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:05.662715   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:04.247328   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:06.746227   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:03.737731   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:03.753151   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:03.753244   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:03.816045   62670 cri.go:89] found id: ""
	I0704 00:12:03.816076   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.816087   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:03.816095   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:03.816154   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:03.857041   62670 cri.go:89] found id: ""
	I0704 00:12:03.857070   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.857081   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:03.857088   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:03.857152   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:03.896734   62670 cri.go:89] found id: ""
	I0704 00:12:03.896763   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.896774   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:03.896781   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:03.896836   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:03.936142   62670 cri.go:89] found id: ""
	I0704 00:12:03.936168   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.936178   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:03.936183   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:03.936258   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:03.974599   62670 cri.go:89] found id: ""
	I0704 00:12:03.974623   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.974631   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:03.974636   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:03.974686   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:04.012822   62670 cri.go:89] found id: ""
	I0704 00:12:04.012851   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.012859   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:04.012865   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:04.012999   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:04.051360   62670 cri.go:89] found id: ""
	I0704 00:12:04.051393   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.051411   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:04.051420   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:04.051485   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:04.090587   62670 cri.go:89] found id: ""
	I0704 00:12:04.090616   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.090627   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:04.090638   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:04.090654   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:04.167427   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:04.167450   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:04.167465   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:04.250550   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:04.250594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:04.299970   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:04.300003   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:04.352960   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:04.352994   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:06.871729   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:06.884948   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:06.885027   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:06.920910   62670 cri.go:89] found id: ""
	I0704 00:12:06.920939   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.920950   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:06.920957   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:06.921024   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:06.958701   62670 cri.go:89] found id: ""
	I0704 00:12:06.958731   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.958742   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:06.958750   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:06.958808   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:06.997468   62670 cri.go:89] found id: ""
	I0704 00:12:06.997499   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.997509   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:06.997515   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:06.997564   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:07.033767   62670 cri.go:89] found id: ""
	I0704 00:12:07.033795   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.033806   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:07.033814   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:07.033896   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:07.074189   62670 cri.go:89] found id: ""
	I0704 00:12:07.074218   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.074229   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:07.074241   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:07.074307   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:07.110517   62670 cri.go:89] found id: ""
	I0704 00:12:07.110544   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.110554   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:07.110562   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:07.110615   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:07.146600   62670 cri.go:89] found id: ""
	I0704 00:12:07.146627   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.146635   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:07.146641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:07.146690   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:07.180799   62670 cri.go:89] found id: ""
	I0704 00:12:07.180826   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.180834   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:07.180843   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:07.180859   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:07.222473   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:07.222503   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:07.281453   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:07.281498   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:07.296335   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:07.296364   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:07.375751   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:07.375782   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:07.375805   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:05.323723   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:07.822320   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:07.663501   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:10.163774   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:09.247753   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:11.746082   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:09.954585   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:09.970379   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:09.970470   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:10.011987   62670 cri.go:89] found id: ""
	I0704 00:12:10.012017   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.012028   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:10.012035   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:10.012102   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:10.054940   62670 cri.go:89] found id: ""
	I0704 00:12:10.054971   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.054982   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:10.054989   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:10.055051   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:10.096048   62670 cri.go:89] found id: ""
	I0704 00:12:10.096079   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.096087   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:10.096093   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:10.096143   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:10.141795   62670 cri.go:89] found id: ""
	I0704 00:12:10.141818   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.141826   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:10.141831   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:10.141892   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:10.188257   62670 cri.go:89] found id: ""
	I0704 00:12:10.188283   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.188295   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:10.188302   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:10.188369   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:10.249134   62670 cri.go:89] found id: ""
	I0704 00:12:10.249157   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.249167   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:10.249174   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:10.249233   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:10.309586   62670 cri.go:89] found id: ""
	I0704 00:12:10.309611   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.309622   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:10.309632   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:10.309689   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:10.351027   62670 cri.go:89] found id: ""
	I0704 00:12:10.351054   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.351065   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:10.351074   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:10.351086   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:10.404371   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:10.404411   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:10.419379   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:10.419410   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:10.502977   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:10.503001   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:10.503017   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:10.582149   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:10.582185   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:13.122828   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:13.138522   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:13.138591   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:13.181603   62670 cri.go:89] found id: ""
	I0704 00:12:13.181634   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.181645   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:13.181653   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:13.181711   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:13.219066   62670 cri.go:89] found id: ""
	I0704 00:12:13.219090   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.219098   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:13.219103   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:13.219159   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:09.822778   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:12.322555   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:12.165249   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:14.663051   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:14.248889   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:16.746104   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:13.259570   62670 cri.go:89] found id: ""
	I0704 00:12:13.259591   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.259599   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:13.259604   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:13.259658   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:13.301577   62670 cri.go:89] found id: ""
	I0704 00:12:13.301605   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.301617   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:13.301625   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:13.301689   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:13.339546   62670 cri.go:89] found id: ""
	I0704 00:12:13.339570   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.339584   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:13.339592   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:13.339649   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:13.378631   62670 cri.go:89] found id: ""
	I0704 00:12:13.378654   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.378665   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:13.378672   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:13.378733   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:13.416818   62670 cri.go:89] found id: ""
	I0704 00:12:13.416843   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.416851   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:13.416856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:13.416908   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:13.452538   62670 cri.go:89] found id: ""
	I0704 00:12:13.452562   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.452570   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:13.452579   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:13.452590   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:13.505556   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:13.505594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:13.522506   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:13.522542   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:13.604513   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:13.604536   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:13.604553   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:13.681501   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:13.681536   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:16.222955   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:16.241979   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:16.242086   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:16.299662   62670 cri.go:89] found id: ""
	I0704 00:12:16.299690   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.299702   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:16.299710   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:16.299772   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:16.342898   62670 cri.go:89] found id: ""
	I0704 00:12:16.342934   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.342944   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:16.342952   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:16.343014   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:16.382387   62670 cri.go:89] found id: ""
	I0704 00:12:16.382408   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.382416   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:16.382422   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:16.382482   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:16.421830   62670 cri.go:89] found id: ""
	I0704 00:12:16.421852   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.421861   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:16.421874   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:16.421934   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:16.459248   62670 cri.go:89] found id: ""
	I0704 00:12:16.459272   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.459282   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:16.459289   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:16.459347   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:16.494675   62670 cri.go:89] found id: ""
	I0704 00:12:16.494704   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.494714   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:16.494725   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:16.494789   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:16.534319   62670 cri.go:89] found id: ""
	I0704 00:12:16.534344   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.534352   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:16.534358   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:16.534407   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:16.571422   62670 cri.go:89] found id: ""
	I0704 00:12:16.571455   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.571467   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:16.571478   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:16.571493   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:16.651019   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:16.651040   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:16.651058   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:16.726538   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:16.726574   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:16.771114   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:16.771145   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:16.824495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:16.824532   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:14.323436   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:16.822647   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:18.823509   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:16.666213   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:19.162586   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:18.746307   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:20.747743   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:19.340941   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:19.355501   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:19.355580   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:19.396845   62670 cri.go:89] found id: ""
	I0704 00:12:19.396872   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.396882   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:19.396902   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:19.396962   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:19.440805   62670 cri.go:89] found id: ""
	I0704 00:12:19.440835   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.440845   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:19.440852   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:19.440913   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:19.477781   62670 cri.go:89] found id: ""
	I0704 00:12:19.477809   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.477820   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:19.477827   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:19.477890   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:19.513042   62670 cri.go:89] found id: ""
	I0704 00:12:19.513067   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.513077   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:19.513084   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:19.513142   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:19.547775   62670 cri.go:89] found id: ""
	I0704 00:12:19.547804   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.547812   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:19.547818   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:19.547867   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:19.586103   62670 cri.go:89] found id: ""
	I0704 00:12:19.586131   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.586142   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:19.586149   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:19.586219   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:19.625529   62670 cri.go:89] found id: ""
	I0704 00:12:19.625556   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.625567   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:19.625574   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:19.625644   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:19.663835   62670 cri.go:89] found id: ""
	I0704 00:12:19.663860   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.663870   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:19.663903   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:19.663919   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:19.719204   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:19.719245   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:19.733871   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:19.733909   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:19.817212   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:19.817240   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:19.817260   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:19.894555   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:19.894595   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:22.438204   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:22.451438   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:22.451507   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:22.489196   62670 cri.go:89] found id: ""
	I0704 00:12:22.489219   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.489226   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:22.489232   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:22.489278   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:22.523870   62670 cri.go:89] found id: ""
	I0704 00:12:22.523917   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.523929   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:22.523936   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:22.523992   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:22.564799   62670 cri.go:89] found id: ""
	I0704 00:12:22.564827   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.564839   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:22.564846   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:22.564905   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:22.603993   62670 cri.go:89] found id: ""
	I0704 00:12:22.604019   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.604027   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:22.604033   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:22.604086   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:22.639749   62670 cri.go:89] found id: ""
	I0704 00:12:22.639780   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.639791   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:22.639799   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:22.639855   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:22.678173   62670 cri.go:89] found id: ""
	I0704 00:12:22.678206   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.678214   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:22.678227   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:22.678279   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:22.718934   62670 cri.go:89] found id: ""
	I0704 00:12:22.718962   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.718971   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:22.718977   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:22.719029   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:22.756334   62670 cri.go:89] found id: ""
	I0704 00:12:22.756362   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.756373   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:22.756383   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:22.756397   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:22.835079   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:22.835113   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:22.877138   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:22.877170   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:22.930427   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:22.930466   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:22.945810   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:22.945838   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:23.021251   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:21.323951   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:23.822002   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:21.165297   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:23.661688   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:23.245394   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:25.748364   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:25.522380   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:25.536705   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:25.536776   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:25.575126   62670 cri.go:89] found id: ""
	I0704 00:12:25.575154   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.575162   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:25.575168   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:25.575223   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:25.612447   62670 cri.go:89] found id: ""
	I0704 00:12:25.612480   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.612488   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:25.612494   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:25.612542   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:25.651652   62670 cri.go:89] found id: ""
	I0704 00:12:25.651677   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.651688   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:25.651696   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:25.651751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:25.690007   62670 cri.go:89] found id: ""
	I0704 00:12:25.690034   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.690042   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:25.690049   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:25.690105   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:25.725041   62670 cri.go:89] found id: ""
	I0704 00:12:25.725093   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.725106   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:25.725114   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:25.725196   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:25.766324   62670 cri.go:89] found id: ""
	I0704 00:12:25.766350   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.766361   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:25.766369   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:25.766430   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:25.803515   62670 cri.go:89] found id: ""
	I0704 00:12:25.803540   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.803548   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:25.803553   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:25.803613   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:25.845016   62670 cri.go:89] found id: ""
	I0704 00:12:25.845046   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.845057   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:25.845067   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:25.845089   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:25.898536   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:25.898570   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:25.913300   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:25.913330   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:25.987372   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:25.987390   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:25.987402   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:26.073931   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:26.073982   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:25.824395   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.324952   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:26.162199   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.162671   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:30.662302   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.246148   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:30.247149   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.621179   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:28.634247   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:28.634321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:28.672433   62670 cri.go:89] found id: ""
	I0704 00:12:28.672458   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.672467   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:28.672473   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:28.672522   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:28.712000   62670 cri.go:89] found id: ""
	I0704 00:12:28.712036   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.712049   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:28.712059   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:28.712126   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:28.751170   62670 cri.go:89] found id: ""
	I0704 00:12:28.751202   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.751213   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:28.751222   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:28.751283   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:28.788015   62670 cri.go:89] found id: ""
	I0704 00:12:28.788050   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.788062   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:28.788071   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:28.788141   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:28.826467   62670 cri.go:89] found id: ""
	I0704 00:12:28.826501   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.826511   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:28.826518   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:28.826578   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:28.864375   62670 cri.go:89] found id: ""
	I0704 00:12:28.864397   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.864403   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:28.864408   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:28.864461   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:28.900137   62670 cri.go:89] found id: ""
	I0704 00:12:28.900160   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.900167   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:28.900173   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:28.900220   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:28.934865   62670 cri.go:89] found id: ""
	I0704 00:12:28.934886   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.934894   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:28.934902   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:28.934914   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:28.984100   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:28.984136   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:29.000311   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:29.000340   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:29.083272   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:29.083304   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:29.083318   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:29.164613   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:29.164644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:31.711402   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:31.725076   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:31.725134   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:31.763088   62670 cri.go:89] found id: ""
	I0704 00:12:31.763111   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.763120   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:31.763127   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:31.763197   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:31.800920   62670 cri.go:89] found id: ""
	I0704 00:12:31.800942   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.800952   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:31.800958   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:31.801001   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:31.840841   62670 cri.go:89] found id: ""
	I0704 00:12:31.840872   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.840889   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:31.840897   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:31.840956   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:31.883757   62670 cri.go:89] found id: ""
	I0704 00:12:31.883784   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.883792   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:31.883797   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:31.883855   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:31.922234   62670 cri.go:89] found id: ""
	I0704 00:12:31.922261   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.922270   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:31.922275   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:31.922323   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:31.959691   62670 cri.go:89] found id: ""
	I0704 00:12:31.959717   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.959725   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:31.959731   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:31.959789   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:31.997069   62670 cri.go:89] found id: ""
	I0704 00:12:31.997098   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.997106   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:31.997112   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:31.997182   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:32.032437   62670 cri.go:89] found id: ""
	I0704 00:12:32.032475   62670 logs.go:276] 0 containers: []
	W0704 00:12:32.032484   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:32.032495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:32.032510   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:32.046791   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:32.046823   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:32.118482   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:32.118506   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:32.118519   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:32.206600   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:32.206638   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:32.249940   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:32.249967   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:30.823529   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:33.322802   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:33.161603   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:35.162213   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:32.746670   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:34.746760   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:37.245283   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:34.808364   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:34.822973   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:34.823039   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:34.859617   62670 cri.go:89] found id: ""
	I0704 00:12:34.859640   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.859649   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:34.859654   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:34.859703   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:34.899724   62670 cri.go:89] found id: ""
	I0704 00:12:34.899752   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.899762   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:34.899768   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:34.899830   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:34.939063   62670 cri.go:89] found id: ""
	I0704 00:12:34.939090   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.939098   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:34.939104   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:34.939185   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:34.979062   62670 cri.go:89] found id: ""
	I0704 00:12:34.979090   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.979101   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:34.979108   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:34.979168   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:35.019580   62670 cri.go:89] found id: ""
	I0704 00:12:35.019613   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.019621   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:35.019626   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:35.019674   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:35.064364   62670 cri.go:89] found id: ""
	I0704 00:12:35.064391   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.064399   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:35.064404   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:35.064463   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:35.105004   62670 cri.go:89] found id: ""
	I0704 00:12:35.105032   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.105040   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:35.105046   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:35.105101   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:35.143656   62670 cri.go:89] found id: ""
	I0704 00:12:35.143681   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.143689   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:35.143698   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:35.143709   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:35.203016   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:35.203050   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:35.218808   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:35.218840   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:35.298247   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:35.298269   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:35.298284   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:35.376425   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:35.376463   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:37.918592   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:37.932291   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:37.932370   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:37.967657   62670 cri.go:89] found id: ""
	I0704 00:12:37.967680   62670 logs.go:276] 0 containers: []
	W0704 00:12:37.967688   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:37.967694   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:37.967740   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:38.005522   62670 cri.go:89] found id: ""
	I0704 00:12:38.005557   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.005569   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:38.005576   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:38.005634   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:38.043475   62670 cri.go:89] found id: ""
	I0704 00:12:38.043505   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.043516   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:38.043524   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:38.043589   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:38.080520   62670 cri.go:89] found id: ""
	I0704 00:12:38.080548   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.080557   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:38.080563   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:38.080612   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:38.116292   62670 cri.go:89] found id: ""
	I0704 00:12:38.116322   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.116332   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:38.116338   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:38.116404   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:38.158430   62670 cri.go:89] found id: ""
	I0704 00:12:38.158468   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.158480   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:38.158489   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:38.158567   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:38.198119   62670 cri.go:89] found id: ""
	I0704 00:12:38.198150   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.198162   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:38.198172   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:38.198253   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:38.235757   62670 cri.go:89] found id: ""
	I0704 00:12:38.235784   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.235792   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:38.235800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:38.235811   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:12:35.324339   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:37.325301   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:37.162347   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:39.162620   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:39.246064   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:41.745179   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	W0704 00:12:38.329002   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:38.329026   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:38.329041   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:38.414451   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:38.414492   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:38.461058   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:38.461089   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:38.518574   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:38.518609   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:41.051653   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:41.066287   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:41.066364   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:41.106709   62670 cri.go:89] found id: ""
	I0704 00:12:41.106733   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.106747   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:41.106753   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:41.106815   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:41.144371   62670 cri.go:89] found id: ""
	I0704 00:12:41.144399   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.144410   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:41.144417   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:41.144491   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:41.183690   62670 cri.go:89] found id: ""
	I0704 00:12:41.183717   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.183727   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:41.183734   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:41.183818   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:41.219744   62670 cri.go:89] found id: ""
	I0704 00:12:41.219767   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.219777   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:41.219790   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:41.219850   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:41.259070   62670 cri.go:89] found id: ""
	I0704 00:12:41.259091   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.259098   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:41.259103   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:41.259162   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:41.297956   62670 cri.go:89] found id: ""
	I0704 00:12:41.297987   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.297995   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:41.298001   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:41.298061   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:41.335521   62670 cri.go:89] found id: ""
	I0704 00:12:41.335599   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.335616   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:41.335624   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:41.335688   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:41.374777   62670 cri.go:89] found id: ""
	I0704 00:12:41.374817   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.374838   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:41.374848   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:41.374868   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:41.426282   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:41.426324   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:41.441309   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:41.441342   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:41.518350   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:41.518373   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:41.518395   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:41.596426   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:41.596467   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:39.824742   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:42.323920   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:41.162829   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:43.662181   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:45.662641   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:43.745586   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:45.747024   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:44.139291   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:44.152300   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:44.152370   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:44.194350   62670 cri.go:89] found id: ""
	I0704 00:12:44.194380   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.194394   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:44.194401   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:44.194470   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:44.229630   62670 cri.go:89] found id: ""
	I0704 00:12:44.229657   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.229666   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:44.229671   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:44.229724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:44.271235   62670 cri.go:89] found id: ""
	I0704 00:12:44.271260   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.271269   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:44.271276   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:44.271342   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:44.336464   62670 cri.go:89] found id: ""
	I0704 00:12:44.336499   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.336509   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:44.336523   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:44.336579   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:44.379482   62670 cri.go:89] found id: ""
	I0704 00:12:44.379513   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.379524   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:44.379530   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:44.379594   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:44.417234   62670 cri.go:89] found id: ""
	I0704 00:12:44.417267   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.417278   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:44.417285   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:44.417345   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:44.454222   62670 cri.go:89] found id: ""
	I0704 00:12:44.454249   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.454259   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:44.454266   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:44.454328   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:44.491999   62670 cri.go:89] found id: ""
	I0704 00:12:44.492028   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.492039   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:44.492050   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:44.492065   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:44.543261   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:44.543298   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:44.558348   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:44.558378   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:44.640786   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:44.640805   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:44.640820   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:44.727870   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:44.727945   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:47.274461   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:47.288930   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:47.288995   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:47.329153   62670 cri.go:89] found id: ""
	I0704 00:12:47.329178   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.329189   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:47.329195   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:47.329262   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:47.366786   62670 cri.go:89] found id: ""
	I0704 00:12:47.366814   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.366825   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:47.366832   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:47.366900   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:47.404048   62670 cri.go:89] found id: ""
	I0704 00:12:47.404089   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.404098   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:47.404106   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:47.404170   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:47.440298   62670 cri.go:89] found id: ""
	I0704 00:12:47.440329   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.440341   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:47.440348   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:47.440408   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:47.478297   62670 cri.go:89] found id: ""
	I0704 00:12:47.478329   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.478340   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:47.478347   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:47.478406   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:47.514114   62670 cri.go:89] found id: ""
	I0704 00:12:47.514143   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.514152   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:47.514158   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:47.514221   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:47.558404   62670 cri.go:89] found id: ""
	I0704 00:12:47.558437   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.558449   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:47.558456   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:47.558519   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:47.602782   62670 cri.go:89] found id: ""
	I0704 00:12:47.602824   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.602834   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:47.602845   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:47.602860   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:47.655514   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:47.655556   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:47.672807   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:47.672844   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:47.763562   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:47.763583   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:47.763596   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:47.852498   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:47.852542   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:44.822923   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:46.824707   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:48.162671   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:50.664606   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:48.247464   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:50.747846   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:50.400046   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:50.413559   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:50.413621   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:50.450898   62670 cri.go:89] found id: ""
	I0704 00:12:50.450927   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.450938   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:50.450948   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:50.451002   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:50.487786   62670 cri.go:89] found id: ""
	I0704 00:12:50.487822   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.487832   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:50.487838   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:50.487923   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:50.525298   62670 cri.go:89] found id: ""
	I0704 00:12:50.525324   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.525334   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:50.525343   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:50.525409   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:50.563742   62670 cri.go:89] found id: ""
	I0704 00:12:50.563767   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.563775   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:50.563782   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:50.563839   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:50.600977   62670 cri.go:89] found id: ""
	I0704 00:12:50.601011   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.601023   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:50.601031   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:50.601105   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:50.637489   62670 cri.go:89] found id: ""
	I0704 00:12:50.637517   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.637527   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:50.637534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:50.637594   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:50.684342   62670 cri.go:89] found id: ""
	I0704 00:12:50.684371   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.684381   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:50.684389   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:50.684572   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:50.743111   62670 cri.go:89] found id: ""
	I0704 00:12:50.743143   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.743153   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:50.743163   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:50.743177   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:50.806436   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:50.806482   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:50.823559   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:50.823594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:50.892600   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:50.892629   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:50.892642   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:50.969817   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:50.969851   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:49.323144   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:51.822264   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.824409   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.161649   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:55.163049   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.245597   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:55.746766   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.512548   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:53.525835   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:53.525903   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:53.563303   62670 cri.go:89] found id: ""
	I0704 00:12:53.563335   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.563349   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:53.563356   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:53.563410   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:53.602687   62670 cri.go:89] found id: ""
	I0704 00:12:53.602720   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.602731   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:53.602739   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:53.602797   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:53.638109   62670 cri.go:89] found id: ""
	I0704 00:12:53.638141   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.638150   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:53.638158   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:53.638220   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:53.678073   62670 cri.go:89] found id: ""
	I0704 00:12:53.678096   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.678106   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:53.678114   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:53.678172   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:53.713995   62670 cri.go:89] found id: ""
	I0704 00:12:53.714028   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.714041   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:53.714049   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:53.714108   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:53.751761   62670 cri.go:89] found id: ""
	I0704 00:12:53.751783   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.751790   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:53.751796   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:53.751856   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:53.792662   62670 cri.go:89] found id: ""
	I0704 00:12:53.792692   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.792703   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:53.792710   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:53.792769   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:53.833970   62670 cri.go:89] found id: ""
	I0704 00:12:53.833999   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.834010   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:53.834021   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:53.834040   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:53.918330   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:53.918363   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:53.918380   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:53.999491   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:53.999524   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:54.042415   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:54.042451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:54.096427   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:54.096466   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:56.611252   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:56.624364   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:56.624427   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:56.662953   62670 cri.go:89] found id: ""
	I0704 00:12:56.662971   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.662978   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:56.662983   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:56.663035   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:56.700093   62670 cri.go:89] found id: ""
	I0704 00:12:56.700125   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.700136   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:56.700144   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:56.700209   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:56.737358   62670 cri.go:89] found id: ""
	I0704 00:12:56.737395   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.737405   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:56.737412   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:56.737479   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:56.772625   62670 cri.go:89] found id: ""
	I0704 00:12:56.772652   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.772663   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:56.772671   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:56.772731   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:56.810693   62670 cri.go:89] found id: ""
	I0704 00:12:56.810722   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.810731   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:56.810736   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:56.810787   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:56.851646   62670 cri.go:89] found id: ""
	I0704 00:12:56.851671   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.851678   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:56.851684   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:56.851733   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:56.894196   62670 cri.go:89] found id: ""
	I0704 00:12:56.894230   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.894240   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:56.894246   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:56.894302   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:56.935029   62670 cri.go:89] found id: ""
	I0704 00:12:56.935054   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.935062   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:56.935072   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:56.935088   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:57.017630   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:57.017658   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:57.017675   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:57.103861   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:57.103916   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:57.147466   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:57.147497   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:57.199798   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:57.199836   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:56.325738   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:58.822885   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:57.166306   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:59.663207   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:58.245373   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:00.246495   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:59.716709   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:59.731778   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:59.731849   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:59.770210   62670 cri.go:89] found id: ""
	I0704 00:12:59.770241   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.770249   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:59.770259   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:59.770319   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:59.816446   62670 cri.go:89] found id: ""
	I0704 00:12:59.816473   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.816483   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:59.816490   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:59.816570   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:59.854879   62670 cri.go:89] found id: ""
	I0704 00:12:59.854910   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.854921   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:59.854928   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:59.854978   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:59.891370   62670 cri.go:89] found id: ""
	I0704 00:12:59.891394   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.891401   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:59.891407   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:59.891467   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:59.926067   62670 cri.go:89] found id: ""
	I0704 00:12:59.926089   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.926096   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:59.926102   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:59.926158   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:59.961646   62670 cri.go:89] found id: ""
	I0704 00:12:59.961674   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.961685   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:59.961692   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:59.961770   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:59.998290   62670 cri.go:89] found id: ""
	I0704 00:12:59.998322   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.998333   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:59.998342   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:59.998408   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:00.035410   62670 cri.go:89] found id: ""
	I0704 00:13:00.035438   62670 logs.go:276] 0 containers: []
	W0704 00:13:00.035446   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:00.035455   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:00.035471   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:00.090614   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:00.090655   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:00.105228   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:00.105265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:00.188082   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:00.188121   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:00.188139   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:00.275656   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:00.275702   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:02.823447   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:02.837684   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:02.837745   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:02.875275   62670 cri.go:89] found id: ""
	I0704 00:13:02.875314   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.875324   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:02.875339   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:02.875399   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:02.910681   62670 cri.go:89] found id: ""
	I0704 00:13:02.910715   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.910727   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:02.910735   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:02.910797   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:02.948937   62670 cri.go:89] found id: ""
	I0704 00:13:02.948963   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.948972   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:02.948979   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:02.949039   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:02.984232   62670 cri.go:89] found id: ""
	I0704 00:13:02.984259   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.984267   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:02.984271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:02.984321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:03.021493   62670 cri.go:89] found id: ""
	I0704 00:13:03.021517   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.021525   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:03.021534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:03.021583   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:03.058829   62670 cri.go:89] found id: ""
	I0704 00:13:03.058860   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.058870   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:03.058877   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:03.058944   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:03.104195   62670 cri.go:89] found id: ""
	I0704 00:13:03.104225   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.104234   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:03.104242   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:03.104303   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:03.140913   62670 cri.go:89] found id: ""
	I0704 00:13:03.140941   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.140951   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:03.140961   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:03.140976   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:03.194901   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:03.194945   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:03.209366   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:03.209395   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:13:01.322711   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:03.323610   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:02.161800   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:04.162195   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:02.746479   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:05.245132   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:07.245877   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	W0704 00:13:03.292892   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:03.292916   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:03.292934   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:03.369764   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:03.369800   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:05.917514   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:05.931529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:05.931592   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:05.976164   62670 cri.go:89] found id: ""
	I0704 00:13:05.976186   62670 logs.go:276] 0 containers: []
	W0704 00:13:05.976193   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:05.976199   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:05.976258   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:06.013568   62670 cri.go:89] found id: ""
	I0704 00:13:06.013593   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.013602   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:06.013609   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:06.013678   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:06.050848   62670 cri.go:89] found id: ""
	I0704 00:13:06.050886   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.050894   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:06.050900   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:06.050958   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:06.090919   62670 cri.go:89] found id: ""
	I0704 00:13:06.090945   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.090956   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:06.090967   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:06.091016   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:06.129210   62670 cri.go:89] found id: ""
	I0704 00:13:06.129237   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.129246   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:06.129252   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:06.129304   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:06.166777   62670 cri.go:89] found id: ""
	I0704 00:13:06.166801   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.166809   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:06.166817   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:06.166878   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:06.204900   62670 cri.go:89] found id: ""
	I0704 00:13:06.204929   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.204940   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:06.204947   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:06.205008   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:06.244196   62670 cri.go:89] found id: ""
	I0704 00:13:06.244274   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.244291   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:06.244301   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:06.244317   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:06.258834   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:06.258873   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:06.339126   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:06.339151   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:06.339165   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:06.416220   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:06.416265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:06.458188   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:06.458221   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:05.824313   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:08.323361   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:06.162328   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:08.666333   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:09.248287   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:11.746215   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:09.014816   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:09.028957   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:09.029021   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:09.072427   62670 cri.go:89] found id: ""
	I0704 00:13:09.072455   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.072465   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:09.072472   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:09.072529   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:09.109630   62670 cri.go:89] found id: ""
	I0704 00:13:09.109660   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.109669   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:09.109675   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:09.109724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:09.152873   62670 cri.go:89] found id: ""
	I0704 00:13:09.152901   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.152911   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:09.152918   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:09.152976   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:09.189390   62670 cri.go:89] found id: ""
	I0704 00:13:09.189421   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.189431   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:09.189446   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:09.189515   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:09.227335   62670 cri.go:89] found id: ""
	I0704 00:13:09.227364   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.227375   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:09.227382   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:09.227444   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:09.269157   62670 cri.go:89] found id: ""
	I0704 00:13:09.269189   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.269201   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:09.269208   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:09.269259   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:09.317222   62670 cri.go:89] found id: ""
	I0704 00:13:09.317249   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.317257   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:09.317263   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:09.317324   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:09.355578   62670 cri.go:89] found id: ""
	I0704 00:13:09.355610   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.355618   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:09.355626   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:09.355637   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:09.396279   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:09.396316   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:09.451358   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:09.451398   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:09.466565   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:09.466599   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:09.545001   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:09.545043   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:09.545066   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:12.124211   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:12.139131   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:12.139229   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:12.178690   62670 cri.go:89] found id: ""
	I0704 00:13:12.178719   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.178726   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:12.178732   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:12.178783   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:12.215470   62670 cri.go:89] found id: ""
	I0704 00:13:12.215511   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.215524   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:12.215533   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:12.215620   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:12.256615   62670 cri.go:89] found id: ""
	I0704 00:13:12.256667   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.256682   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:12.256688   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:12.256740   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:12.298606   62670 cri.go:89] found id: ""
	I0704 00:13:12.298631   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.298643   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:12.298650   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:12.298730   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:12.338152   62670 cri.go:89] found id: ""
	I0704 00:13:12.338180   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.338192   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:12.338199   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:12.338260   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:12.377003   62670 cri.go:89] found id: ""
	I0704 00:13:12.377029   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.377040   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:12.377046   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:12.377095   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:12.412239   62670 cri.go:89] found id: ""
	I0704 00:13:12.412268   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.412278   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:12.412285   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:12.412361   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:12.451054   62670 cri.go:89] found id: ""
	I0704 00:13:12.451079   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.451086   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:12.451094   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:12.451111   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:12.506178   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:12.506216   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:12.520563   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:12.520594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:12.594417   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:12.594439   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:12.594455   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:12.671131   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:12.671179   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:10.323629   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:12.823056   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:11.161399   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:13.162943   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:15.661962   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:13.749962   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:16.247931   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:15.225840   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:15.239346   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:15.239420   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:15.276618   62670 cri.go:89] found id: ""
	I0704 00:13:15.276649   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.276661   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:15.276668   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:15.276751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:15.312585   62670 cri.go:89] found id: ""
	I0704 00:13:15.312615   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.312625   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:15.312632   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:15.312693   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:15.351354   62670 cri.go:89] found id: ""
	I0704 00:13:15.351382   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.351392   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:15.351399   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:15.351457   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:15.388660   62670 cri.go:89] found id: ""
	I0704 00:13:15.388690   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.388701   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:15.388708   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:15.388769   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:15.427524   62670 cri.go:89] found id: ""
	I0704 00:13:15.427553   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.427564   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:15.427572   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:15.427636   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:15.463703   62670 cri.go:89] found id: ""
	I0704 00:13:15.463737   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.463752   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:15.463761   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:15.463825   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:15.498640   62670 cri.go:89] found id: ""
	I0704 00:13:15.498664   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.498672   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:15.498676   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:15.498727   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:15.534655   62670 cri.go:89] found id: ""
	I0704 00:13:15.534679   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.534690   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:15.534700   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:15.534715   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:15.586051   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:15.586083   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:15.600930   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:15.600958   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:15.670393   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:15.670420   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:15.670435   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:15.749644   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:15.749678   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:15.324591   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:17.822616   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:17.662630   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:20.162230   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:18.746045   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:20.746946   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:18.298689   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:18.312408   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:18.312475   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:18.353509   62670 cri.go:89] found id: ""
	I0704 00:13:18.353538   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.353549   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:18.353557   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:18.353642   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:18.394463   62670 cri.go:89] found id: ""
	I0704 00:13:18.394486   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.394493   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:18.394498   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:18.394550   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:18.433254   62670 cri.go:89] found id: ""
	I0704 00:13:18.433288   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.433297   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:18.433303   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:18.433350   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:18.473369   62670 cri.go:89] found id: ""
	I0704 00:13:18.473395   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.473404   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:18.473414   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:18.473464   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:18.513401   62670 cri.go:89] found id: ""
	I0704 00:13:18.513436   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.513444   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:18.513450   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:18.513499   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:18.552462   62670 cri.go:89] found id: ""
	I0704 00:13:18.552493   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.552502   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:18.552511   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:18.552569   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:18.591368   62670 cri.go:89] found id: ""
	I0704 00:13:18.591389   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.591398   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:18.591406   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:18.591471   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:18.630381   62670 cri.go:89] found id: ""
	I0704 00:13:18.630413   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.630424   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:18.630435   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:18.630451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:18.684868   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:18.684902   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:18.700897   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:18.700921   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:18.794507   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:18.794524   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:18.794535   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:18.879415   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:18.879457   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:21.429432   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:21.443906   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:21.443978   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:21.482487   62670 cri.go:89] found id: ""
	I0704 00:13:21.482516   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.482528   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:21.482535   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:21.482583   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:21.519170   62670 cri.go:89] found id: ""
	I0704 00:13:21.519206   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.519214   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:21.519219   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:21.519265   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:21.558340   62670 cri.go:89] found id: ""
	I0704 00:13:21.558367   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.558390   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:21.558397   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:21.558465   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:21.595347   62670 cri.go:89] found id: ""
	I0704 00:13:21.595372   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.595382   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:21.595390   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:21.595464   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:21.634524   62670 cri.go:89] found id: ""
	I0704 00:13:21.634547   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.634555   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:21.634560   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:21.634622   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:21.672529   62670 cri.go:89] found id: ""
	I0704 00:13:21.672558   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.672566   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:21.672571   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:21.672617   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:21.711114   62670 cri.go:89] found id: ""
	I0704 00:13:21.711145   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.711156   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:21.711163   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:21.711248   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:21.747087   62670 cri.go:89] found id: ""
	I0704 00:13:21.747126   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.747135   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:21.747145   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:21.747162   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:21.832897   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:21.832919   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:21.832935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:21.915969   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:21.916008   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:21.957922   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:21.957950   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:22.009881   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:22.009925   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:19.823109   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:22.322313   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:22.163190   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:24.664612   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:22.747918   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:25.245707   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:24.526106   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:24.548431   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:24.548493   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:24.582887   62670 cri.go:89] found id: ""
	I0704 00:13:24.582925   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.582935   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:24.582940   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:24.582992   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:24.621339   62670 cri.go:89] found id: ""
	I0704 00:13:24.621365   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.621375   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:24.621380   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:24.621433   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:24.658124   62670 cri.go:89] found id: ""
	I0704 00:13:24.658152   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.658163   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:24.658170   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:24.658239   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:24.697509   62670 cri.go:89] found id: ""
	I0704 00:13:24.697539   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.697546   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:24.697552   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:24.697599   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:24.734523   62670 cri.go:89] found id: ""
	I0704 00:13:24.734547   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.734554   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:24.734560   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:24.734608   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:24.773351   62670 cri.go:89] found id: ""
	I0704 00:13:24.773375   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.773383   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:24.773389   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:24.773439   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:24.810855   62670 cri.go:89] found id: ""
	I0704 00:13:24.810888   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.810898   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:24.810905   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:24.810962   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:24.849989   62670 cri.go:89] found id: ""
	I0704 00:13:24.850017   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.850027   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:24.850039   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:24.850053   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:24.904308   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:24.904344   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:24.920143   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:24.920234   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:24.995138   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:24.995163   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:24.995177   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:25.070407   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:25.070449   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:27.611749   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:27.625292   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:27.625349   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:27.663239   62670 cri.go:89] found id: ""
	I0704 00:13:27.663263   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.663274   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:27.663281   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:27.663337   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:27.704354   62670 cri.go:89] found id: ""
	I0704 00:13:27.704378   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.704392   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:27.704399   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:27.704473   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:27.742585   62670 cri.go:89] found id: ""
	I0704 00:13:27.742619   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.742630   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:27.742637   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:27.742695   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:27.791650   62670 cri.go:89] found id: ""
	I0704 00:13:27.791678   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.791686   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:27.791691   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:27.791751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:27.832724   62670 cri.go:89] found id: ""
	I0704 00:13:27.832757   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.832770   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:27.832778   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:27.832865   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:27.875054   62670 cri.go:89] found id: ""
	I0704 00:13:27.875081   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.875089   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:27.875095   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:27.875142   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:27.909819   62670 cri.go:89] found id: ""
	I0704 00:13:27.909844   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.909851   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:27.909856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:27.909903   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:27.944882   62670 cri.go:89] found id: ""
	I0704 00:13:27.944907   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.944916   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:27.944923   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:27.944936   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:28.004233   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:28.004271   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:28.020800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:28.020834   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:28.096186   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:28.096213   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:28.096231   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:28.178611   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:28.178648   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:24.322656   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:26.323972   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:28.821944   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:27.161806   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:29.661580   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:27.748284   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:30.246840   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:30.729354   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:30.744298   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:30.744361   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:30.783053   62670 cri.go:89] found id: ""
	I0704 00:13:30.783081   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.783089   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:30.783095   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:30.783151   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:30.820728   62670 cri.go:89] found id: ""
	I0704 00:13:30.820756   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.820765   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:30.820770   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:30.820834   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:30.858188   62670 cri.go:89] found id: ""
	I0704 00:13:30.858221   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.858234   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:30.858242   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:30.858307   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:30.899024   62670 cri.go:89] found id: ""
	I0704 00:13:30.899049   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.899056   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:30.899062   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:30.899109   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:30.942431   62670 cri.go:89] found id: ""
	I0704 00:13:30.942461   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.942471   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:30.942479   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:30.942534   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:30.995371   62670 cri.go:89] found id: ""
	I0704 00:13:30.995402   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.995417   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:30.995425   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:30.995486   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:31.043485   62670 cri.go:89] found id: ""
	I0704 00:13:31.043516   62670 logs.go:276] 0 containers: []
	W0704 00:13:31.043524   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:31.043529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:31.043576   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:31.082408   62670 cri.go:89] found id: ""
	I0704 00:13:31.082440   62670 logs.go:276] 0 containers: []
	W0704 00:13:31.082451   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:31.082463   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:31.082477   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:31.096800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:31.096824   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:31.169116   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:31.169142   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:31.169168   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:31.250199   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:31.250230   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:31.293706   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:31.293737   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:30.822968   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:33.322607   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:31.661811   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:33.661872   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:35.662906   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:32.746786   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:35.246989   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:33.845361   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:33.859495   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:33.859586   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:33.900578   62670 cri.go:89] found id: ""
	I0704 00:13:33.900608   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.900616   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:33.900621   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:33.900668   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:33.934659   62670 cri.go:89] found id: ""
	I0704 00:13:33.934681   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.934688   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:33.934699   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:33.934745   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:33.977141   62670 cri.go:89] found id: ""
	I0704 00:13:33.977166   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.977174   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:33.977179   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:33.977230   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:34.013515   62670 cri.go:89] found id: ""
	I0704 00:13:34.013540   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.013548   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:34.013553   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:34.013600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:34.059663   62670 cri.go:89] found id: ""
	I0704 00:13:34.059690   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.059698   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:34.059703   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:34.059765   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:34.094002   62670 cri.go:89] found id: ""
	I0704 00:13:34.094030   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.094038   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:34.094044   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:34.094090   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:34.130278   62670 cri.go:89] found id: ""
	I0704 00:13:34.130310   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.130322   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:34.130330   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:34.130401   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:34.173531   62670 cri.go:89] found id: ""
	I0704 00:13:34.173557   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.173563   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:34.173570   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:34.173582   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:34.229273   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:34.229334   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:34.247043   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:34.247073   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:34.322892   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:34.322920   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:34.322935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:34.409230   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:34.409271   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:36.950627   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:36.969997   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:36.970063   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:37.027934   62670 cri.go:89] found id: ""
	I0704 00:13:37.027964   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.027975   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:37.027982   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:37.028069   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:37.067668   62670 cri.go:89] found id: ""
	I0704 00:13:37.067696   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.067706   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:37.067713   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:37.067774   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:37.104762   62670 cri.go:89] found id: ""
	I0704 00:13:37.104798   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.104806   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:37.104812   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:37.104882   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:37.143887   62670 cri.go:89] found id: ""
	I0704 00:13:37.143913   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.143921   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:37.143936   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:37.143999   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:37.182605   62670 cri.go:89] found id: ""
	I0704 00:13:37.182629   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.182636   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:37.182641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:37.182697   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:37.219884   62670 cri.go:89] found id: ""
	I0704 00:13:37.219914   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.219924   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:37.219931   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:37.219996   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:37.259122   62670 cri.go:89] found id: ""
	I0704 00:13:37.259146   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.259154   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:37.259159   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:37.259205   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:37.296218   62670 cri.go:89] found id: ""
	I0704 00:13:37.296255   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.296262   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:37.296270   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:37.296282   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:37.349495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:37.349540   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:37.364224   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:37.364255   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:37.437604   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:37.437627   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:37.437644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:37.524096   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:37.524150   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:35.823323   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:38.323653   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:38.164076   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:40.662318   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:37.745470   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:39.746119   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:41.747887   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:40.067394   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:40.081728   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:40.081787   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:40.119102   62670 cri.go:89] found id: ""
	I0704 00:13:40.119129   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.119137   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:40.119142   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:40.119195   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:40.161432   62670 cri.go:89] found id: ""
	I0704 00:13:40.161468   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.161477   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:40.161483   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:40.161542   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:40.196487   62670 cri.go:89] found id: ""
	I0704 00:13:40.196526   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.196534   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:40.196540   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:40.196591   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:40.232218   62670 cri.go:89] found id: ""
	I0704 00:13:40.232245   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.232253   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:40.232259   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:40.232306   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:40.272962   62670 cri.go:89] found id: ""
	I0704 00:13:40.272995   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.273007   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:40.273016   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:40.273079   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:40.311622   62670 cri.go:89] found id: ""
	I0704 00:13:40.311651   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.311662   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:40.311671   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:40.311737   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:40.353486   62670 cri.go:89] found id: ""
	I0704 00:13:40.353516   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.353524   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:40.353529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:40.353576   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:40.391269   62670 cri.go:89] found id: ""
	I0704 00:13:40.391299   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.391308   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:40.391318   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:40.391330   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:40.445011   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:40.445048   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:40.458982   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:40.459010   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:40.533102   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:40.533127   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:40.533140   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:40.618189   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:40.618228   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:43.162352   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:43.177336   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:43.177419   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:43.221099   62670 cri.go:89] found id: ""
	I0704 00:13:43.221127   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.221137   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:43.221144   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:43.221211   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:40.324554   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:42.822608   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:42.662723   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:45.162037   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:44.245991   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:46.746635   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:43.268528   62670 cri.go:89] found id: ""
	I0704 00:13:43.268557   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.268568   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:43.268575   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:43.268638   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:43.304343   62670 cri.go:89] found id: ""
	I0704 00:13:43.304373   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.304384   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:43.304391   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:43.304459   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:43.346128   62670 cri.go:89] found id: ""
	I0704 00:13:43.346163   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.346179   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:43.346187   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:43.346251   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:43.392622   62670 cri.go:89] found id: ""
	I0704 00:13:43.392652   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.392662   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:43.392673   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:43.392764   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:43.438725   62670 cri.go:89] found id: ""
	I0704 00:13:43.438751   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.438760   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:43.438766   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:43.438812   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:43.480356   62670 cri.go:89] found id: ""
	I0704 00:13:43.480378   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.480386   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:43.480391   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:43.480441   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:43.516551   62670 cri.go:89] found id: ""
	I0704 00:13:43.516576   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.516583   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:43.516591   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:43.516606   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:43.567568   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:43.567604   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:43.583140   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:43.583173   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:43.658841   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:43.658870   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:43.658885   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:43.737379   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:43.737419   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:46.281048   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:46.295088   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:46.295158   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:46.333107   62670 cri.go:89] found id: ""
	I0704 00:13:46.333135   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.333168   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:46.333177   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:46.333263   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:46.376375   62670 cri.go:89] found id: ""
	I0704 00:13:46.376405   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.376415   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:46.376423   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:46.376486   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:46.410809   62670 cri.go:89] found id: ""
	I0704 00:13:46.410838   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.410848   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:46.410855   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:46.410911   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:46.453114   62670 cri.go:89] found id: ""
	I0704 00:13:46.453143   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.453156   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:46.453164   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:46.453229   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:46.491218   62670 cri.go:89] found id: ""
	I0704 00:13:46.491246   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.491255   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:46.491261   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:46.491320   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:46.528669   62670 cri.go:89] found id: ""
	I0704 00:13:46.528695   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.528706   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:46.528713   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:46.528777   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:46.564289   62670 cri.go:89] found id: ""
	I0704 00:13:46.564317   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.564327   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:46.564333   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:46.564384   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:46.600821   62670 cri.go:89] found id: ""
	I0704 00:13:46.600854   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.600864   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:46.600875   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:46.600888   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:46.653816   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:46.653850   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:46.668899   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:46.668927   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:46.751414   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:46.751434   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:46.751455   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:46.831455   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:46.831489   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:44.823478   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:47.323726   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:47.663375   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:50.162358   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:49.245272   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:51.745945   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:49.378856   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:49.393930   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:49.393988   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:49.435332   62670 cri.go:89] found id: ""
	I0704 00:13:49.435355   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.435362   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:49.435368   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:49.435415   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:49.476780   62670 cri.go:89] found id: ""
	I0704 00:13:49.476807   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.476815   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:49.476820   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:49.476868   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:49.519347   62670 cri.go:89] found id: ""
	I0704 00:13:49.519379   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.519389   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:49.519396   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:49.519522   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:49.557125   62670 cri.go:89] found id: ""
	I0704 00:13:49.557150   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.557159   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:49.557166   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:49.557225   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:49.592843   62670 cri.go:89] found id: ""
	I0704 00:13:49.592883   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.592894   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:49.592901   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:49.592966   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:49.629542   62670 cri.go:89] found id: ""
	I0704 00:13:49.629565   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.629572   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:49.629578   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:49.629630   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:49.667805   62670 cri.go:89] found id: ""
	I0704 00:13:49.667833   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.667844   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:49.667851   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:49.667928   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:49.704446   62670 cri.go:89] found id: ""
	I0704 00:13:49.704472   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.704480   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:49.704494   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:49.704506   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:49.718379   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:49.718403   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:49.791293   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:49.791314   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:49.791329   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:49.870370   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:49.870408   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:49.910508   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:49.910545   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:52.463614   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:52.478642   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:52.478714   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:52.519490   62670 cri.go:89] found id: ""
	I0704 00:13:52.519519   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.519529   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:52.519535   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:52.519686   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:52.561591   62670 cri.go:89] found id: ""
	I0704 00:13:52.561622   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.561632   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:52.561639   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:52.561713   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:52.599169   62670 cri.go:89] found id: ""
	I0704 00:13:52.599196   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.599206   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:52.599212   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:52.599270   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:52.636778   62670 cri.go:89] found id: ""
	I0704 00:13:52.636811   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.636821   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:52.636828   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:52.636893   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:52.675929   62670 cri.go:89] found id: ""
	I0704 00:13:52.675965   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.675977   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:52.675985   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:52.676081   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:52.713425   62670 cri.go:89] found id: ""
	I0704 00:13:52.713455   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.713466   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:52.713474   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:52.713541   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:52.750242   62670 cri.go:89] found id: ""
	I0704 00:13:52.750267   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.750278   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:52.750286   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:52.750342   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:52.793247   62670 cri.go:89] found id: ""
	I0704 00:13:52.793277   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.793288   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:52.793298   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:52.793315   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:52.807818   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:52.807970   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:52.886856   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:52.886883   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:52.886903   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:52.973510   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:52.973551   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:53.021185   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:53.021213   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:49.825304   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:52.322850   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:52.662484   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:54.662645   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:54.246942   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:56.745800   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:55.576364   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:55.590796   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:55.590858   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:55.628753   62670 cri.go:89] found id: ""
	I0704 00:13:55.628783   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.628793   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:55.628809   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:55.628870   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:55.667344   62670 cri.go:89] found id: ""
	I0704 00:13:55.667398   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.667411   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:55.667426   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:55.667496   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:55.705826   62670 cri.go:89] found id: ""
	I0704 00:13:55.705859   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.705870   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:55.705878   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:55.705942   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:55.743204   62670 cri.go:89] found id: ""
	I0704 00:13:55.743231   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.743238   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:55.743244   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:55.743304   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:55.784945   62670 cri.go:89] found id: ""
	I0704 00:13:55.784978   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.784987   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:55.784993   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:55.785044   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:55.825266   62670 cri.go:89] found id: ""
	I0704 00:13:55.825293   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.825304   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:55.825322   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:55.825385   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:55.862235   62670 cri.go:89] found id: ""
	I0704 00:13:55.862269   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.862276   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:55.862282   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:55.862337   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:55.901698   62670 cri.go:89] found id: ""
	I0704 00:13:55.901726   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.901736   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:55.901747   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:55.901762   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:55.955322   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:55.955361   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:55.973650   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:55.973689   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:56.049600   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:56.049624   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:56.049640   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:56.133690   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:56.133731   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:54.323716   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:56.324427   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:58.823837   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:56.663246   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:59.161652   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:59.249339   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:01.747759   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:58.678014   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:58.692780   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:58.692846   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:58.730628   62670 cri.go:89] found id: ""
	I0704 00:13:58.730654   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.730664   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:58.730671   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:58.730732   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:58.772761   62670 cri.go:89] found id: ""
	I0704 00:13:58.772789   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.772800   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:58.772806   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:58.772871   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:58.809591   62670 cri.go:89] found id: ""
	I0704 00:13:58.809623   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.809637   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:58.809644   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:58.809708   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:58.848596   62670 cri.go:89] found id: ""
	I0704 00:13:58.848627   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.848638   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:58.848646   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:58.848705   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:58.888285   62670 cri.go:89] found id: ""
	I0704 00:13:58.888311   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.888318   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:58.888323   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:58.888373   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:58.924042   62670 cri.go:89] found id: ""
	I0704 00:13:58.924065   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.924073   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:58.924079   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:58.924132   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:58.963473   62670 cri.go:89] found id: ""
	I0704 00:13:58.963500   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.963510   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:58.963516   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:58.963581   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:58.998757   62670 cri.go:89] found id: ""
	I0704 00:13:58.998788   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.998798   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:58.998808   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:58.998822   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:59.013844   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:59.013871   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:59.085847   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:59.085869   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:59.085882   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:59.174056   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:59.174087   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:59.219984   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:59.220011   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:01.774436   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:01.790044   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:01.790103   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:01.830337   62670 cri.go:89] found id: ""
	I0704 00:14:01.830366   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.830376   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:01.830383   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:01.830452   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:01.866704   62670 cri.go:89] found id: ""
	I0704 00:14:01.866731   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.866740   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:01.866746   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:01.866796   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:01.906702   62670 cri.go:89] found id: ""
	I0704 00:14:01.906737   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.906748   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:01.906756   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:01.906812   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:01.943348   62670 cri.go:89] found id: ""
	I0704 00:14:01.943381   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.943392   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:01.943400   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:01.943461   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:01.984096   62670 cri.go:89] found id: ""
	I0704 00:14:01.984123   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.984131   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:01.984136   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:01.984182   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:02.021618   62670 cri.go:89] found id: ""
	I0704 00:14:02.021649   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.021659   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:02.021666   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:02.021726   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:02.058976   62670 cri.go:89] found id: ""
	I0704 00:14:02.059000   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.059008   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:02.059013   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:02.059064   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:02.097222   62670 cri.go:89] found id: ""
	I0704 00:14:02.097251   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.097258   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:02.097278   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:02.097302   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:02.183349   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:02.183391   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:02.226898   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:02.226928   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:02.286978   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:02.287016   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:02.301361   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:02.301393   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:02.375663   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:01.322516   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:03.822514   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:01.662003   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:03.665021   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:04.245713   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:06.246308   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:04.876515   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:04.891254   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:04.891324   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:04.931465   62670 cri.go:89] found id: ""
	I0704 00:14:04.931488   62670 logs.go:276] 0 containers: []
	W0704 00:14:04.931496   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:04.931501   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:04.931549   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:04.969027   62670 cri.go:89] found id: ""
	I0704 00:14:04.969055   62670 logs.go:276] 0 containers: []
	W0704 00:14:04.969063   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:04.969068   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:04.969122   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:05.006380   62670 cri.go:89] found id: ""
	I0704 00:14:05.006407   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.006423   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:05.006430   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:05.006494   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:05.043050   62670 cri.go:89] found id: ""
	I0704 00:14:05.043090   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.043105   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:05.043113   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:05.043195   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:05.081549   62670 cri.go:89] found id: ""
	I0704 00:14:05.081575   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.081583   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:05.081588   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:05.081664   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:05.126673   62670 cri.go:89] found id: ""
	I0704 00:14:05.126693   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.126700   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:05.126706   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:05.126751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:05.166832   62670 cri.go:89] found id: ""
	I0704 00:14:05.166856   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.166864   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:05.166872   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:05.166920   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:05.205906   62670 cri.go:89] found id: ""
	I0704 00:14:05.205934   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.205946   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:05.205957   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:05.205973   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:05.260955   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:05.260998   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:05.295937   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:05.295965   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:05.383161   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:05.383188   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:05.383202   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:05.465055   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:05.465100   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:08.007745   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:08.021065   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:08.021134   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:08.061808   62670 cri.go:89] found id: ""
	I0704 00:14:08.061838   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.061848   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:08.061854   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:08.061914   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:08.100542   62670 cri.go:89] found id: ""
	I0704 00:14:08.100573   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.100584   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:08.100592   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:08.100657   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:08.137335   62670 cri.go:89] found id: ""
	I0704 00:14:08.137369   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.137379   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:08.137385   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:08.137455   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:08.177087   62670 cri.go:89] found id: ""
	I0704 00:14:08.177116   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.177124   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:08.177129   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:08.177191   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:08.212652   62670 cri.go:89] found id: ""
	I0704 00:14:08.212686   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.212695   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:08.212701   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:08.212751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:08.247717   62670 cri.go:89] found id: ""
	I0704 00:14:08.247737   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.247745   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:08.247750   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:08.247805   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:05.824730   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.323006   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:06.160967   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.162407   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:10.163649   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.247565   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:10.745585   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.285525   62670 cri.go:89] found id: ""
	I0704 00:14:08.285556   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.285568   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:08.285576   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:08.285637   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:08.325978   62670 cri.go:89] found id: ""
	I0704 00:14:08.326007   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.326017   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:08.326027   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:08.326042   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:08.382407   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:08.382440   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:08.397945   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:08.397979   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:08.468650   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:08.468676   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:08.468691   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:08.543581   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:08.543615   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
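
The block above is one pass of minikube's diagnostics loop: for every expected control-plane component it asks the CRI runtime for matching containers (each query comes back empty), then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal sketch of the crictl query step, written as a standalone Go program for illustration only; the component list and the use of os/exec are assumptions, not minikube's actual implementation:

    // Illustration (assumed, not minikube's code): ask CRI for containers matching
    // each control-plane component, the way the log lines above do, and report
    // which ones have no containers at all.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            // Equivalent of: sudo crictl ps -a --quiet --name=<component>
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("crictl failed for %q: %v\n", name, err)
                continue
            }
            if strings.TrimSpace(string(out)) == "" {
                fmt.Printf("no container was found matching %q\n", name)
            }
        }
    }
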
	I0704 00:14:11.085683   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:11.102003   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:11.102093   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:11.142561   62670 cri.go:89] found id: ""
	I0704 00:14:11.142589   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.142597   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:11.142602   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:11.142671   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:11.180087   62670 cri.go:89] found id: ""
	I0704 00:14:11.180110   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.180118   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:11.180123   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:11.180202   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:11.220123   62670 cri.go:89] found id: ""
	I0704 00:14:11.220147   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.220173   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:11.220182   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:11.220239   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:11.260418   62670 cri.go:89] found id: ""
	I0704 00:14:11.260445   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.260455   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:11.260462   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:11.260521   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:11.297923   62670 cri.go:89] found id: ""
	I0704 00:14:11.297976   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.297989   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:11.297999   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:11.298083   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:11.335903   62670 cri.go:89] found id: ""
	I0704 00:14:11.335934   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.335945   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:11.335954   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:11.336020   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:11.371965   62670 cri.go:89] found id: ""
	I0704 00:14:11.371997   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.372007   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:11.372013   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:11.372075   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:11.409129   62670 cri.go:89] found id: ""
	I0704 00:14:11.409159   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.409170   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:11.409181   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:11.409194   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:11.464994   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:11.465032   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:11.480084   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:11.480112   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:11.564533   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:11.564560   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:11.564574   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:11.645033   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:11.645068   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:10.323124   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:12.323251   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:12.663774   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:15.161542   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:12.746307   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:15.246158   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
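
Interleaved with that diagnostics loop, the pod_ready.go:102 lines come from separate test processes (62327, 62043, 62905) polling their metrics-server pods every couple of seconds and logging Ready:"False" each time. A minimal sketch of that poll-with-timeout pattern; the function name, interval and timeout are assumed for illustration and are not the test harness's real values:

    // Illustration (assumed): poll a readiness check on an interval until it
    // passes or a deadline expires, mirroring the repeated pod_ready lines above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitFor calls check() every interval until it returns true or timeout elapses.
    func waitFor(check func() (bool, error), interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            ok, err := check()
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for condition")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        // Stand-in for the "pod ... has status Ready:False" checks in the log.
        err := waitFor(func() (bool, error) { return false, nil }, 2*time.Second, 10*time.Second)
        fmt.Println(err)
    }
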
	I0704 00:14:14.195211   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:14.209606   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:14.209660   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:14.252041   62670 cri.go:89] found id: ""
	I0704 00:14:14.252066   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.252081   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:14.252089   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:14.252149   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:14.290619   62670 cri.go:89] found id: ""
	I0704 00:14:14.290647   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.290655   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:14.290660   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:14.290717   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:14.328731   62670 cri.go:89] found id: ""
	I0704 00:14:14.328762   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.328773   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:14.328780   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:14.328842   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:14.370794   62670 cri.go:89] found id: ""
	I0704 00:14:14.370825   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.370835   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:14.370842   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:14.370904   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:14.406474   62670 cri.go:89] found id: ""
	I0704 00:14:14.406505   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.406516   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:14.406523   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:14.406582   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:14.442515   62670 cri.go:89] found id: ""
	I0704 00:14:14.442547   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.442558   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:14.442566   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:14.442624   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:14.480798   62670 cri.go:89] found id: ""
	I0704 00:14:14.480827   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.480838   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:14.480844   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:14.480904   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:14.518187   62670 cri.go:89] found id: ""
	I0704 00:14:14.518210   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.518217   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:14.518225   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:14.518236   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:14.572028   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:14.572060   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:14.586614   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:14.586641   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:14.659339   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:14.659362   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:14.659375   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:14.743802   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:14.743839   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
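
Every "describe nodes" attempt in these cycles fails the same way: kubectl exits with status 1 because nothing is listening on localhost:8443, which is consistent with the empty kube-apiserver results from crictl. A minimal reachability probe, assumed purely for illustration (address and timeout are not taken from the test code):

    // Illustration (assumed): a quick TCP probe of the apiserver port to confirm
    // the "connection refused" seen in the kubectl output above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port not reachable:", err) // matches the kubectl error in the log
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }
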
	I0704 00:14:17.288666   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:17.304531   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:17.304600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:17.348705   62670 cri.go:89] found id: ""
	I0704 00:14:17.348730   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.348738   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:17.348749   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:17.348798   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:17.387821   62670 cri.go:89] found id: ""
	I0704 00:14:17.387844   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.387852   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:17.387858   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:17.387934   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:17.425442   62670 cri.go:89] found id: ""
	I0704 00:14:17.425470   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.425480   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:17.425487   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:17.425545   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:17.471216   62670 cri.go:89] found id: ""
	I0704 00:14:17.471243   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.471255   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:17.471262   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:17.471321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:17.520905   62670 cri.go:89] found id: ""
	I0704 00:14:17.520935   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.520942   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:17.520947   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:17.520997   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:17.577627   62670 cri.go:89] found id: ""
	I0704 00:14:17.577648   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.577655   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:17.577661   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:17.577715   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:17.619018   62670 cri.go:89] found id: ""
	I0704 00:14:17.619046   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.619054   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:17.619061   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:17.619124   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:17.664993   62670 cri.go:89] found id: ""
	I0704 00:14:17.665020   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.665029   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:17.665037   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:17.665049   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:17.743823   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:17.743845   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:17.743857   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:17.821339   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:17.821371   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:17.866189   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:17.866226   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:17.919854   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:17.919903   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:14.823677   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:16.825187   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:17.662772   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:20.161988   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:17.748067   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:20.245022   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:22.246620   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:20.435448   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:20.450556   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:20.450617   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:20.491980   62670 cri.go:89] found id: ""
	I0704 00:14:20.492010   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.492018   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:20.492023   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:20.492071   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:20.532791   62670 cri.go:89] found id: ""
	I0704 00:14:20.532820   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.532829   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:20.532836   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:20.532892   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:20.569604   62670 cri.go:89] found id: ""
	I0704 00:14:20.569628   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.569635   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:20.569641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:20.569688   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:20.610852   62670 cri.go:89] found id: ""
	I0704 00:14:20.610879   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.610887   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:20.610893   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:20.610950   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:20.648891   62670 cri.go:89] found id: ""
	I0704 00:14:20.648912   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.648920   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:20.648925   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:20.648984   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:20.690273   62670 cri.go:89] found id: ""
	I0704 00:14:20.690304   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.690315   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:20.690323   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:20.690381   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:20.725365   62670 cri.go:89] found id: ""
	I0704 00:14:20.725390   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.725398   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:20.725403   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:20.725478   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:20.768530   62670 cri.go:89] found id: ""
	I0704 00:14:20.768559   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.768569   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:20.768579   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:20.768593   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:20.822896   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:20.822932   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:20.838881   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:20.838912   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:20.921516   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:20.921546   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:20.921560   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:20.999517   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:20.999553   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:19.324790   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:21.822737   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:23.823039   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:22.162348   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:24.162631   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:24.745842   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:27.245280   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:23.545947   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:23.560315   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:23.560397   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:23.602540   62670 cri.go:89] found id: ""
	I0704 00:14:23.602583   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.602596   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:23.602604   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:23.602664   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:23.639529   62670 cri.go:89] found id: ""
	I0704 00:14:23.639560   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.639571   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:23.639579   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:23.639644   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:23.687334   62670 cri.go:89] found id: ""
	I0704 00:14:23.687363   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.687374   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:23.687381   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:23.687450   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:23.728388   62670 cri.go:89] found id: ""
	I0704 00:14:23.728419   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.728427   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:23.728434   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:23.728484   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:23.769903   62670 cri.go:89] found id: ""
	I0704 00:14:23.769933   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.769944   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:23.769956   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:23.770016   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:23.810485   62670 cri.go:89] found id: ""
	I0704 00:14:23.810518   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.810529   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:23.810536   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:23.810621   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:23.854534   62670 cri.go:89] found id: ""
	I0704 00:14:23.854571   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.854582   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:23.854589   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:23.854647   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:23.892229   62670 cri.go:89] found id: ""
	I0704 00:14:23.892257   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.892266   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:23.892278   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:23.892291   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:23.944758   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:23.944793   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:23.959115   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:23.959152   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:24.035480   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:24.035501   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:24.035513   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:24.113401   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:24.113447   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:26.655506   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:26.669883   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:26.669964   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:26.705899   62670 cri.go:89] found id: ""
	I0704 00:14:26.705926   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.705934   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:26.705940   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:26.705997   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:26.742991   62670 cri.go:89] found id: ""
	I0704 00:14:26.743016   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.743025   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:26.743031   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:26.743090   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:26.781650   62670 cri.go:89] found id: ""
	I0704 00:14:26.781678   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.781693   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:26.781700   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:26.781760   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:26.816879   62670 cri.go:89] found id: ""
	I0704 00:14:26.816902   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.816909   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:26.816914   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:26.816957   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:26.854271   62670 cri.go:89] found id: ""
	I0704 00:14:26.854301   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.854316   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:26.854324   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:26.854384   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:26.892771   62670 cri.go:89] found id: ""
	I0704 00:14:26.892802   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.892813   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:26.892821   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:26.892880   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:26.931820   62670 cri.go:89] found id: ""
	I0704 00:14:26.931849   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.931859   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:26.931865   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:26.931947   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:26.967633   62670 cri.go:89] found id: ""
	I0704 00:14:26.967659   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.967669   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:26.967679   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:26.967700   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:26.983916   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:26.983951   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:27.063412   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:27.063436   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:27.063451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:27.147005   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:27.147044   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:27.189732   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:27.189759   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:25.824267   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:27.826810   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:26.662688   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:28.663384   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:29.248447   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:31.745919   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:29.747294   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:29.762194   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:29.762272   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:29.799103   62670 cri.go:89] found id: ""
	I0704 00:14:29.799132   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.799142   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:29.799149   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:29.799215   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:29.843373   62670 cri.go:89] found id: ""
	I0704 00:14:29.843399   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.843407   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:29.843412   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:29.843474   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:29.880622   62670 cri.go:89] found id: ""
	I0704 00:14:29.880650   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.880660   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:29.880667   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:29.880724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:29.917560   62670 cri.go:89] found id: ""
	I0704 00:14:29.917590   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.917599   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:29.917605   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:29.917656   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:29.954983   62670 cri.go:89] found id: ""
	I0704 00:14:29.955006   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.955013   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:29.955018   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:29.955068   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:29.991784   62670 cri.go:89] found id: ""
	I0704 00:14:29.991811   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.991819   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:29.991824   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:29.991870   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:30.031174   62670 cri.go:89] found id: ""
	I0704 00:14:30.031203   62670 logs.go:276] 0 containers: []
	W0704 00:14:30.031210   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:30.031218   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:30.031268   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:30.069502   62670 cri.go:89] found id: ""
	I0704 00:14:30.069533   62670 logs.go:276] 0 containers: []
	W0704 00:14:30.069542   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:30.069552   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:30.069567   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:30.111185   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:30.111213   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:30.167419   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:30.167456   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:30.181876   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:30.181908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:30.255378   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:30.255407   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:30.255426   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:32.837655   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:32.853085   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:32.853150   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:32.898490   62670 cri.go:89] found id: ""
	I0704 00:14:32.898520   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.898531   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:32.898540   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:32.898626   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:32.946293   62670 cri.go:89] found id: ""
	I0704 00:14:32.946326   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.946336   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:32.946343   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:32.946402   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:32.983499   62670 cri.go:89] found id: ""
	I0704 00:14:32.983529   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.983540   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:32.983548   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:32.983610   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:33.022340   62670 cri.go:89] found id: ""
	I0704 00:14:33.022362   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.022370   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:33.022375   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:33.022420   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:33.066921   62670 cri.go:89] found id: ""
	I0704 00:14:33.066946   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.066956   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:33.066963   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:33.067024   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:33.116317   62670 cri.go:89] found id: ""
	I0704 00:14:33.116340   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.116348   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:33.116354   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:33.116416   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:33.153301   62670 cri.go:89] found id: ""
	I0704 00:14:33.153332   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.153343   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:33.153350   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:33.153411   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:33.190851   62670 cri.go:89] found id: ""
	I0704 00:14:33.190884   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.190896   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:33.190905   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:33.190917   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:33.248253   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:33.248288   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:30.323119   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:32.823348   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:31.161811   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:33.662270   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:34.246812   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:36.246992   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:33.263593   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:33.263620   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:33.339975   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:33.340000   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:33.340018   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:33.423768   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:33.423814   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:35.969547   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:35.984139   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:35.984219   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:36.028221   62670 cri.go:89] found id: ""
	I0704 00:14:36.028251   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.028263   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:36.028270   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:36.028330   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:36.067331   62670 cri.go:89] found id: ""
	I0704 00:14:36.067362   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.067370   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:36.067375   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:36.067437   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:36.105498   62670 cri.go:89] found id: ""
	I0704 00:14:36.105531   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.105543   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:36.105552   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:36.105618   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:36.144536   62670 cri.go:89] found id: ""
	I0704 00:14:36.144565   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.144576   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:36.144584   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:36.144652   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:36.184010   62670 cri.go:89] found id: ""
	I0704 00:14:36.184035   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.184048   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:36.184053   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:36.184099   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:36.221730   62670 cri.go:89] found id: ""
	I0704 00:14:36.221781   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.221790   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:36.221795   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:36.221843   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:36.261907   62670 cri.go:89] found id: ""
	I0704 00:14:36.261940   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.261952   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:36.261959   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:36.262009   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:36.296878   62670 cri.go:89] found id: ""
	I0704 00:14:36.296899   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.296906   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:36.296915   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:36.296926   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:36.350226   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:36.350265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:36.364632   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:36.364663   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:36.446351   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:36.446382   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:36.446400   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:36.535752   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:36.535802   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:35.322895   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:37.323357   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:36.166275   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:38.662345   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:38.745454   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:41.247351   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:39.079686   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:39.094225   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:39.094291   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:39.139521   62670 cri.go:89] found id: ""
	I0704 00:14:39.139551   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.139563   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:39.139572   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:39.139637   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:39.182411   62670 cri.go:89] found id: ""
	I0704 00:14:39.182439   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.182447   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:39.182453   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:39.182505   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:39.224135   62670 cri.go:89] found id: ""
	I0704 00:14:39.224158   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.224170   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:39.224175   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:39.224237   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:39.264800   62670 cri.go:89] found id: ""
	I0704 00:14:39.264829   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.264839   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:39.264847   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:39.264910   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:39.309072   62670 cri.go:89] found id: ""
	I0704 00:14:39.309102   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.309113   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:39.309121   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:39.309181   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:39.349790   62670 cri.go:89] found id: ""
	I0704 00:14:39.349818   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.349828   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:39.349835   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:39.349895   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:39.387062   62670 cri.go:89] found id: ""
	I0704 00:14:39.387093   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.387105   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:39.387112   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:39.387164   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:39.427503   62670 cri.go:89] found id: ""
	I0704 00:14:39.427530   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.427538   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:39.427546   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:39.427558   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:39.442049   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:39.442076   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:39.525799   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:39.525824   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:39.525840   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:39.602646   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:39.602679   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:39.645739   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:39.645772   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
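
The probe loop captured above is the pattern minikube repeats while the control plane is down: for each expected component it asks the CRI runtime for matching containers and, finding none, falls back to gathering dmesg, describe-nodes, CRI-O, container-status, and kubelet logs. A minimal shell sketch of that container probe, using only the crictl invocation already shown in the Run: lines (crictl assumed to be installed on the node):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      # list containers in any state whose name matches, printing only their IDs
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\""
      fi
    done
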
	I0704 00:14:42.201986   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:42.216166   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:42.216236   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:42.253124   62670 cri.go:89] found id: ""
	I0704 00:14:42.253152   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.253167   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:42.253174   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:42.253231   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:42.293398   62670 cri.go:89] found id: ""
	I0704 00:14:42.293422   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.293430   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:42.293436   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:42.293488   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:42.334382   62670 cri.go:89] found id: ""
	I0704 00:14:42.334412   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.334423   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:42.334430   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:42.334488   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:42.374792   62670 cri.go:89] found id: ""
	I0704 00:14:42.374820   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.374832   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:42.374838   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:42.374889   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:42.416220   62670 cri.go:89] found id: ""
	I0704 00:14:42.416251   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.416263   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:42.416271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:42.416331   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:42.462923   62670 cri.go:89] found id: ""
	I0704 00:14:42.462955   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.462966   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:42.462974   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:42.463043   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:42.503410   62670 cri.go:89] found id: ""
	I0704 00:14:42.503442   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.503452   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:42.503460   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:42.503528   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:42.542599   62670 cri.go:89] found id: ""
	I0704 00:14:42.542623   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.542632   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:42.542639   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:42.542652   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:42.622303   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:42.622328   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:42.622347   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:42.703629   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:42.703666   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:42.747762   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:42.747793   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:42.803506   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:42.803549   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:39.826275   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:42.323764   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:41.163336   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:43.662061   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:45.664452   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:43.745575   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:46.250310   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:45.320238   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:45.334630   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:45.334692   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:45.376760   62670 cri.go:89] found id: ""
	I0704 00:14:45.376785   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.376793   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:45.376797   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:45.376882   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:45.414165   62670 cri.go:89] found id: ""
	I0704 00:14:45.414197   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.414208   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:45.414216   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:45.414278   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:45.451469   62670 cri.go:89] found id: ""
	I0704 00:14:45.451496   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.451504   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:45.451509   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:45.451558   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:45.487994   62670 cri.go:89] found id: ""
	I0704 00:14:45.488025   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.488037   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:45.488051   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:45.488110   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:45.529430   62670 cri.go:89] found id: ""
	I0704 00:14:45.529455   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.529463   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:45.529469   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:45.529520   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:45.571848   62670 cri.go:89] found id: ""
	I0704 00:14:45.571897   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.571909   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:45.571921   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:45.571994   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:45.607804   62670 cri.go:89] found id: ""
	I0704 00:14:45.607828   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.607835   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:45.607840   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:45.607908   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:45.644183   62670 cri.go:89] found id: ""
	I0704 00:14:45.644211   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.644219   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:45.644227   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:45.644240   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:45.727677   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:45.727717   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:45.767528   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:45.767554   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:45.835243   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:45.835285   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:45.849921   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:45.849957   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:45.928404   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
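
Every describe-nodes attempt above fails with "The connection to the server localhost:8443 was refused", i.e. kubectl on the node cannot reach an apiserver at all, which is consistent with the empty kube-apiserver container listings. A hedged pair of follow-up checks one could run over the same SSH session (illustrative only; these commands were not part of the captured run):

    # is anything listening on the apiserver port?
    sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
    # are the control-plane static-pod manifests present for the kubelet to start?
    ls -la /etc/kubernetes/manifests
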
	I0704 00:14:44.823177   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:46.821947   62327 pod_ready.go:81] duration metric: took 4m0.006234793s for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	E0704 00:14:46.821973   62327 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0704 00:14:46.821981   62327 pod_ready.go:38] duration metric: took 4m4.549820824s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:14:46.821996   62327 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:14:46.822029   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:46.822072   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:46.884166   62327 cri.go:89] found id: "2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:46.884208   62327 cri.go:89] found id: ""
	I0704 00:14:46.884217   62327 logs.go:276] 1 containers: [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d]
	I0704 00:14:46.884293   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:46.889964   62327 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:46.890048   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:46.929569   62327 cri.go:89] found id: "e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:46.929601   62327 cri.go:89] found id: ""
	I0704 00:14:46.929609   62327 logs.go:276] 1 containers: [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864]
	I0704 00:14:46.929653   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:46.934896   62327 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:46.934969   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:46.975093   62327 cri.go:89] found id: "ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:46.975116   62327 cri.go:89] found id: ""
	I0704 00:14:46.975125   62327 logs.go:276] 1 containers: [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3]
	I0704 00:14:46.975180   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:46.979604   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:46.979663   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:47.018423   62327 cri.go:89] found id: "bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:47.018442   62327 cri.go:89] found id: ""
	I0704 00:14:47.018449   62327 logs.go:276] 1 containers: [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0]
	I0704 00:14:47.018514   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.022963   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:47.023028   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:47.067573   62327 cri.go:89] found id: "0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:47.067599   62327 cri.go:89] found id: ""
	I0704 00:14:47.067608   62327 logs.go:276] 1 containers: [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78]
	I0704 00:14:47.067657   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.072342   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:47.072426   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:47.111485   62327 cri.go:89] found id: "49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:47.111514   62327 cri.go:89] found id: ""
	I0704 00:14:47.111524   62327 logs.go:276] 1 containers: [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d]
	I0704 00:14:47.111581   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.116173   62327 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:47.116256   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:47.166673   62327 cri.go:89] found id: ""
	I0704 00:14:47.166703   62327 logs.go:276] 0 containers: []
	W0704 00:14:47.166711   62327 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:47.166717   62327 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:14:47.166771   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:14:47.209591   62327 cri.go:89] found id: "5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:47.209626   62327 cri.go:89] found id: "0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:47.209632   62327 cri.go:89] found id: ""
	I0704 00:14:47.209642   62327 logs.go:276] 2 containers: [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085]
	I0704 00:14:47.209699   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.214409   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.218745   62327 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:47.218768   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:47.762248   62327 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:47.762293   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:47.819035   62327 logs.go:123] Gathering logs for kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] ...
	I0704 00:14:47.819077   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:47.874456   62327 logs.go:123] Gathering logs for coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] ...
	I0704 00:14:47.874499   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:47.931685   62327 logs.go:123] Gathering logs for kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] ...
	I0704 00:14:47.931714   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:47.969812   62327 logs.go:123] Gathering logs for kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] ...
	I0704 00:14:47.969842   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:48.023510   62327 logs.go:123] Gathering logs for storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] ...
	I0704 00:14:48.023547   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:48.067970   62327 logs.go:123] Gathering logs for container status ...
	I0704 00:14:48.068001   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:48.121578   62327 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:48.121609   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:48.139510   62327 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:48.139535   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:14:48.264544   62327 logs.go:123] Gathering logs for etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] ...
	I0704 00:14:48.264570   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:48.329270   62327 logs.go:123] Gathering logs for kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] ...
	I0704 00:14:48.329311   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:48.371067   62327 logs.go:123] Gathering logs for storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] ...
	I0704 00:14:48.371097   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:48.162755   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:50.661630   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:48.428750   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:48.442617   62670 kubeadm.go:591] duration metric: took 4m1.823242959s to restartPrimaryControlPlane
	W0704 00:14:48.442701   62670 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0704 00:14:48.442735   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:14:51.574916   62670 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.132142314s)
	I0704 00:14:51.575001   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:14:51.593744   62670 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:14:51.607429   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:14:51.620071   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:14:51.620097   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:14:51.620151   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:14:51.633472   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:14:51.633547   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:14:51.647551   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:14:51.658795   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:14:51.658871   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:14:51.671580   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:14:51.682217   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:14:51.682291   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:14:51.693874   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:14:51.705614   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:14:51.705697   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
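
The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm can regenerate it. The same rule as a short shell sketch, using the files and URL taken from the log:

    url="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already targets the expected control-plane endpoint
      if ! sudo grep -q "$url" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done
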
	I0704 00:14:51.720386   62670 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:14:51.810530   62670 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0704 00:14:51.810597   62670 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:14:51.968629   62670 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:14:51.968735   62670 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:14:51.968851   62670 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:14:52.188159   62670 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:14:48.745609   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:50.746068   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:52.190231   62670 out.go:204]   - Generating certificates and keys ...
	I0704 00:14:52.192011   62670 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:14:52.192101   62670 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:14:52.192206   62670 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:14:52.192311   62670 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:14:52.192412   62670 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:14:52.192488   62670 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:14:52.192573   62670 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:14:52.192648   62670 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:14:52.192747   62670 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:14:52.193086   62670 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:14:52.193249   62670 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:14:52.193335   62670 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:14:52.325727   62670 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:14:52.485153   62670 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:14:52.676389   62670 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:14:52.990595   62670 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:14:53.007051   62670 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:14:53.008346   62670 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:14:53.008434   62670 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:14:53.160272   62670 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:14:53.162449   62670 out.go:204]   - Booting up control plane ...
	I0704 00:14:53.162586   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:14:53.177983   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:14:53.179996   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:14:53.180911   62670 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:14:53.183085   62670 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0704 00:14:50.909242   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:50.926516   62327 api_server.go:72] duration metric: took 4m15.870455521s to wait for apiserver process to appear ...
	I0704 00:14:50.926548   62327 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:14:50.926594   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:50.926650   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:50.969608   62327 cri.go:89] found id: "2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:50.969636   62327 cri.go:89] found id: ""
	I0704 00:14:50.969646   62327 logs.go:276] 1 containers: [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d]
	I0704 00:14:50.969711   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:50.974011   62327 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:50.974081   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:51.016808   62327 cri.go:89] found id: "e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:51.016842   62327 cri.go:89] found id: ""
	I0704 00:14:51.016858   62327 logs.go:276] 1 containers: [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864]
	I0704 00:14:51.016916   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.021297   62327 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:51.021371   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:51.061674   62327 cri.go:89] found id: "ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:51.061699   62327 cri.go:89] found id: ""
	I0704 00:14:51.061707   62327 logs.go:276] 1 containers: [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3]
	I0704 00:14:51.061761   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.066197   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:51.066256   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:51.108727   62327 cri.go:89] found id: "bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:51.108750   62327 cri.go:89] found id: ""
	I0704 00:14:51.108759   62327 logs.go:276] 1 containers: [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0]
	I0704 00:14:51.108805   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.113366   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:51.113425   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:51.156701   62327 cri.go:89] found id: "0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:51.156728   62327 cri.go:89] found id: ""
	I0704 00:14:51.156738   62327 logs.go:276] 1 containers: [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78]
	I0704 00:14:51.156803   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.162817   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:51.162891   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:51.208586   62327 cri.go:89] found id: "49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:51.208609   62327 cri.go:89] found id: ""
	I0704 00:14:51.208618   62327 logs.go:276] 1 containers: [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d]
	I0704 00:14:51.208678   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.213344   62327 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:51.213418   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:51.258697   62327 cri.go:89] found id: ""
	I0704 00:14:51.258721   62327 logs.go:276] 0 containers: []
	W0704 00:14:51.258728   62327 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:51.258733   62327 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:14:51.258783   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:14:51.301317   62327 cri.go:89] found id: "5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:51.301341   62327 cri.go:89] found id: "0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:51.301347   62327 cri.go:89] found id: ""
	I0704 00:14:51.301355   62327 logs.go:276] 2 containers: [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085]
	I0704 00:14:51.301460   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.306678   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.310993   62327 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:51.311014   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:14:51.433280   62327 logs.go:123] Gathering logs for kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] ...
	I0704 00:14:51.433313   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:51.498289   62327 logs.go:123] Gathering logs for coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] ...
	I0704 00:14:51.498325   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:51.538414   62327 logs.go:123] Gathering logs for kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] ...
	I0704 00:14:51.538449   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:51.580194   62327 logs.go:123] Gathering logs for kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] ...
	I0704 00:14:51.580232   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:51.650010   62327 logs.go:123] Gathering logs for container status ...
	I0704 00:14:51.650055   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:51.710727   62327 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:51.710768   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:51.785923   62327 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:51.785963   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:51.803951   62327 logs.go:123] Gathering logs for storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] ...
	I0704 00:14:51.803982   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:51.873020   62327 logs.go:123] Gathering logs for storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] ...
	I0704 00:14:51.873058   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:51.916694   62327 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:51.916725   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:52.378056   62327 logs.go:123] Gathering logs for etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] ...
	I0704 00:14:52.378103   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:52.436795   62327 logs.go:123] Gathering logs for kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] ...
	I0704 00:14:52.436835   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:52.662586   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:55.162992   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:52.746973   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:55.248126   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:54.977972   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:14:54.982697   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0704 00:14:54.983848   62327 api_server.go:141] control plane version: v1.30.2
	I0704 00:14:54.983868   62327 api_server.go:131] duration metric: took 4.057311938s to wait for apiserver health ...
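
The healthz probe above is a plain HTTPS GET against /healthz on the apiserver, which returns the literal body "ok" when healthy. An equivalent manual check against the same endpoint (curl shown here for illustration; -k skips verification of the cluster's self-signed CA, and this was not part of the captured run):

    curl -k https://192.168.39.213:8443/healthz
    # expected response from a healthy apiserver:
    # ok
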
	I0704 00:14:54.983887   62327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:14:54.983920   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:54.983972   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:55.022812   62327 cri.go:89] found id: "2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:55.022839   62327 cri.go:89] found id: ""
	I0704 00:14:55.022849   62327 logs.go:276] 1 containers: [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d]
	I0704 00:14:55.022906   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.027419   62327 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:55.027508   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:55.070889   62327 cri.go:89] found id: "e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:55.070914   62327 cri.go:89] found id: ""
	I0704 00:14:55.070924   62327 logs.go:276] 1 containers: [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864]
	I0704 00:14:55.070979   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.075970   62327 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:55.076036   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:55.121555   62327 cri.go:89] found id: "ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:55.121575   62327 cri.go:89] found id: ""
	I0704 00:14:55.121583   62327 logs.go:276] 1 containers: [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3]
	I0704 00:14:55.121627   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.126320   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:55.126378   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:55.168032   62327 cri.go:89] found id: "bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:55.168062   62327 cri.go:89] found id: ""
	I0704 00:14:55.168070   62327 logs.go:276] 1 containers: [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0]
	I0704 00:14:55.168134   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.172992   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:55.173069   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:55.215593   62327 cri.go:89] found id: "0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:55.215614   62327 cri.go:89] found id: ""
	I0704 00:14:55.215621   62327 logs.go:276] 1 containers: [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78]
	I0704 00:14:55.215668   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.220129   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:55.220203   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:55.266429   62327 cri.go:89] found id: "49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:55.266458   62327 cri.go:89] found id: ""
	I0704 00:14:55.266467   62327 logs.go:276] 1 containers: [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d]
	I0704 00:14:55.266525   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.275640   62327 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:55.275706   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:55.316569   62327 cri.go:89] found id: ""
	I0704 00:14:55.316603   62327 logs.go:276] 0 containers: []
	W0704 00:14:55.316615   62327 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:55.316622   62327 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:14:55.316682   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:14:55.354222   62327 cri.go:89] found id: "5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:55.354248   62327 cri.go:89] found id: "0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:55.354252   62327 cri.go:89] found id: ""
	I0704 00:14:55.354259   62327 logs.go:276] 2 containers: [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085]
	I0704 00:14:55.354305   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.359060   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.363522   62327 logs.go:123] Gathering logs for storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] ...
	I0704 00:14:55.363545   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:55.402950   62327 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:55.402975   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:55.826071   62327 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:55.826108   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:55.882804   62327 logs.go:123] Gathering logs for storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] ...
	I0704 00:14:55.882836   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:55.924690   62327 logs.go:123] Gathering logs for kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] ...
	I0704 00:14:55.924726   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:55.981466   62327 logs.go:123] Gathering logs for etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] ...
	I0704 00:14:55.981500   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:56.043846   62327 logs.go:123] Gathering logs for coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] ...
	I0704 00:14:56.043914   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:56.085096   62327 logs.go:123] Gathering logs for kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] ...
	I0704 00:14:56.085122   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:56.127568   62327 logs.go:123] Gathering logs for kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] ...
	I0704 00:14:56.127601   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:56.169457   62327 logs.go:123] Gathering logs for kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] ...
	I0704 00:14:56.169492   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:56.224005   62327 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:56.224039   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:56.240031   62327 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:56.240059   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:14:56.366718   62327 logs.go:123] Gathering logs for container status ...
	I0704 00:14:56.366759   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:58.924300   62327 system_pods.go:59] 8 kube-system pods found
	I0704 00:14:58.924332   62327 system_pods.go:61] "coredns-7db6d8ff4d-2bn7d" [e6d756a8-df4e-414b-b44c-32fb728c6feb] Running
	I0704 00:14:58.924339   62327 system_pods.go:61] "etcd-embed-certs-687975" [72f47af0-57ca-4529-b6e4-92f543f8ada9] Running
	I0704 00:14:58.924344   62327 system_pods.go:61] "kube-apiserver-embed-certs-687975" [e58beb1b-1984-4a9f-beb1-597cca55a0c0] Running
	I0704 00:14:58.924351   62327 system_pods.go:61] "kube-controller-manager-embed-certs-687975" [bdae6f32-b454-4a66-a3d1-a22c2a64073d] Running
	I0704 00:14:58.924355   62327 system_pods.go:61] "kube-proxy-9phtm" [6b5a4c0e-632d-4c1c-bfa7-f53448618efb] Running
	I0704 00:14:58.924360   62327 system_pods.go:61] "kube-scheduler-embed-certs-687975" [72640f73-25f5-47ec-8da0-396fc31fa653] Running
	I0704 00:14:58.924369   62327 system_pods.go:61] "metrics-server-569cc877fc-jpmsg" [e2561edc-d580-461c-acae-218e6b7a2f67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:14:58.924376   62327 system_pods.go:61] "storage-provisioner" [1ac6edec-3e4e-42bd-8848-1388594611e1] Running
	I0704 00:14:58.924384   62327 system_pods.go:74] duration metric: took 3.940490235s to wait for pod list to return data ...
	I0704 00:14:58.924392   62327 default_sa.go:34] waiting for default service account to be created ...
	I0704 00:14:58.926911   62327 default_sa.go:45] found service account: "default"
	I0704 00:14:58.926930   62327 default_sa.go:55] duration metric: took 2.52887ms for default service account to be created ...
	I0704 00:14:58.926938   62327 system_pods.go:116] waiting for k8s-apps to be running ...
	I0704 00:14:58.933142   62327 system_pods.go:86] 8 kube-system pods found
	I0704 00:14:58.933173   62327 system_pods.go:89] "coredns-7db6d8ff4d-2bn7d" [e6d756a8-df4e-414b-b44c-32fb728c6feb] Running
	I0704 00:14:58.933181   62327 system_pods.go:89] "etcd-embed-certs-687975" [72f47af0-57ca-4529-b6e4-92f543f8ada9] Running
	I0704 00:14:58.933188   62327 system_pods.go:89] "kube-apiserver-embed-certs-687975" [e58beb1b-1984-4a9f-beb1-597cca55a0c0] Running
	I0704 00:14:58.933200   62327 system_pods.go:89] "kube-controller-manager-embed-certs-687975" [bdae6f32-b454-4a66-a3d1-a22c2a64073d] Running
	I0704 00:14:58.933207   62327 system_pods.go:89] "kube-proxy-9phtm" [6b5a4c0e-632d-4c1c-bfa7-f53448618efb] Running
	I0704 00:14:58.933213   62327 system_pods.go:89] "kube-scheduler-embed-certs-687975" [72640f73-25f5-47ec-8da0-396fc31fa653] Running
	I0704 00:14:58.933225   62327 system_pods.go:89] "metrics-server-569cc877fc-jpmsg" [e2561edc-d580-461c-acae-218e6b7a2f67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:14:58.933234   62327 system_pods.go:89] "storage-provisioner" [1ac6edec-3e4e-42bd-8848-1388594611e1] Running
	I0704 00:14:58.933245   62327 system_pods.go:126] duration metric: took 6.300951ms to wait for k8s-apps to be running ...
	I0704 00:14:58.933257   62327 system_svc.go:44] waiting for kubelet service to be running ....
	I0704 00:14:58.933302   62327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:14:58.948861   62327 system_svc.go:56] duration metric: took 15.596446ms WaitForService to wait for kubelet
	I0704 00:14:58.948885   62327 kubeadm.go:576] duration metric: took 4m23.892830394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:14:58.948905   62327 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:14:58.951958   62327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:14:58.951981   62327 node_conditions.go:123] node cpu capacity is 2
	I0704 00:14:58.951991   62327 node_conditions.go:105] duration metric: took 3.081821ms to run NodePressure ...
	I0704 00:14:58.952003   62327 start.go:240] waiting for startup goroutines ...
	I0704 00:14:58.952012   62327 start.go:245] waiting for cluster config update ...
	I0704 00:14:58.952026   62327 start.go:254] writing updated cluster config ...
	I0704 00:14:58.952305   62327 ssh_runner.go:195] Run: rm -f paused
	I0704 00:14:59.001106   62327 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0704 00:14:59.003224   62327 out.go:177] * Done! kubectl is now configured to use "embed-certs-687975" cluster and "default" namespace by default
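
With the "Done!" line the embed-certs-687975 context is written to the local kubeconfig, and the pod list above shows everything Running except metrics-server-569cc877fc-jpmsg, which is the pod the earlier 4m0s wait timed out on. A hypothetical follow-up to inspect that pod from the host (not part of the captured run):

    kubectl --context embed-certs-687975 -n kube-system get pods
    kubectl --context embed-certs-687975 -n kube-system describe pod metrics-server-569cc877fc-jpmsg
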
	I0704 00:14:57.163117   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:59.662680   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:57.746248   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:00.247122   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:02.161384   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:04.162095   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:02.745649   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:04.745980   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:07.245583   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:06.662618   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:08.665863   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:09.246591   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:11.745135   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:11.162596   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:13.163740   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:15.662576   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:13.745872   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:15.746141   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:18.161591   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:20.162965   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:18.245285   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:20.247546   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:22.662152   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:24.662781   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:22.745066   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:24.746068   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:25.247225   62905 pod_ready.go:81] duration metric: took 4m0.008398676s for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	E0704 00:15:25.247253   62905 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0704 00:15:25.247263   62905 pod_ready.go:38] duration metric: took 4m1.998567833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:15:25.247295   62905 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:15:25.247337   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:15:25.247393   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:15:25.305703   62905 cri.go:89] found id: "f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:25.305731   62905 cri.go:89] found id: ""
	I0704 00:15:25.305741   62905 logs.go:276] 1 containers: [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658]
	I0704 00:15:25.305811   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.311662   62905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:15:25.311740   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:15:25.359066   62905 cri.go:89] found id: "5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:25.359091   62905 cri.go:89] found id: ""
	I0704 00:15:25.359100   62905 logs.go:276] 1 containers: [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9]
	I0704 00:15:25.359157   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.364430   62905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:15:25.364512   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:15:25.411897   62905 cri.go:89] found id: "7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:25.411923   62905 cri.go:89] found id: ""
	I0704 00:15:25.411935   62905 logs.go:276] 1 containers: [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a]
	I0704 00:15:25.411991   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.416560   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:15:25.416629   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:15:25.457817   62905 cri.go:89] found id: "06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:25.457844   62905 cri.go:89] found id: ""
	I0704 00:15:25.457853   62905 logs.go:276] 1 containers: [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8]
	I0704 00:15:25.457904   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.462323   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:15:25.462392   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:15:25.502180   62905 cri.go:89] found id: "54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:25.502204   62905 cri.go:89] found id: ""
	I0704 00:15:25.502212   62905 logs.go:276] 1 containers: [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d]
	I0704 00:15:25.502256   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.506759   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:15:25.506817   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:15:25.546268   62905 cri.go:89] found id: "13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:25.546292   62905 cri.go:89] found id: ""
	I0704 00:15:25.546306   62905 logs.go:276] 1 containers: [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e]
	I0704 00:15:25.546365   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.550998   62905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:15:25.551076   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:15:25.588722   62905 cri.go:89] found id: ""
	I0704 00:15:25.588752   62905 logs.go:276] 0 containers: []
	W0704 00:15:25.588762   62905 logs.go:278] No container was found matching "kindnet"
	I0704 00:15:25.588771   62905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:15:25.588832   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:15:25.628294   62905 cri.go:89] found id: "916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:25.628328   62905 cri.go:89] found id: "ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:25.628333   62905 cri.go:89] found id: ""
	I0704 00:15:25.628339   62905 logs.go:276] 2 containers: [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2]
	I0704 00:15:25.628406   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.633517   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.639383   62905 logs.go:123] Gathering logs for kubelet ...
	I0704 00:15:25.639409   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:15:25.701468   62905 logs.go:123] Gathering logs for dmesg ...
	I0704 00:15:25.701507   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:15:25.717059   62905 logs.go:123] Gathering logs for coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] ...
	I0704 00:15:25.717089   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:25.757597   62905 logs.go:123] Gathering logs for kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] ...
	I0704 00:15:25.757624   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:25.798648   62905 logs.go:123] Gathering logs for storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] ...
	I0704 00:15:25.798679   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:25.843607   62905 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:15:25.843644   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:15:26.352356   62905 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:15:26.352403   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:15:26.510039   62905 logs.go:123] Gathering logs for kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] ...
	I0704 00:15:26.510073   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:26.563036   62905 logs.go:123] Gathering logs for etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] ...
	I0704 00:15:26.563102   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:26.606221   62905 logs.go:123] Gathering logs for kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] ...
	I0704 00:15:26.606251   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:26.650488   62905 logs.go:123] Gathering logs for kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] ...
	I0704 00:15:26.650531   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:26.704905   62905 logs.go:123] Gathering logs for storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] ...
	I0704 00:15:26.704937   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:26.743843   62905 logs.go:123] Gathering logs for container status ...
	I0704 00:15:26.743907   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
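
The log-gathering pass above enumerates containers per component with "sudo crictl ps -a --quiet --name=<component>" and then tails each one with "sudo /usr/bin/crictl logs --tail 400 <id>". The following is a minimal stand-alone Go sketch of that same pattern, run locally rather than through minikube's ssh_runner; the component names and the 400-line tail are taken from the log above, everything else is illustrative, not minikube's actual implementation.

// loggather.go: sketch of the crictl log-gathering pattern seen in the log.
// Assumes crictl is on PATH and may be invoked via sudo.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited) whose name matches name.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs prints the last n lines of the given container's logs.
func tailLogs(id string, n int) error {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	fmt.Printf("==> %s <==\n%s\n", id, out)
	return err
}

func main() {
	// Component names taken from the cri.go listings above.
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Println("listing", name, "failed:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			if err := tailLogs(id, 400); err != nil {
				fmt.Println("logs for", id, "failed:", err)
			}
		}
	}
}
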
	I0704 00:15:26.664421   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:29.160718   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:29.289651   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:15:29.313028   62905 api_server.go:72] duration metric: took 4m13.798223752s to wait for apiserver process to appear ...
	I0704 00:15:29.313062   62905 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:15:29.313101   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:15:29.313178   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:15:29.359867   62905 cri.go:89] found id: "f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:29.359900   62905 cri.go:89] found id: ""
	I0704 00:15:29.359910   62905 logs.go:276] 1 containers: [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658]
	I0704 00:15:29.359965   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.364602   62905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:15:29.364661   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:15:29.406662   62905 cri.go:89] found id: "5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:29.406690   62905 cri.go:89] found id: ""
	I0704 00:15:29.406697   62905 logs.go:276] 1 containers: [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9]
	I0704 00:15:29.406744   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.413217   62905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:15:29.413305   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:15:29.450066   62905 cri.go:89] found id: "7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:29.450093   62905 cri.go:89] found id: ""
	I0704 00:15:29.450102   62905 logs.go:276] 1 containers: [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a]
	I0704 00:15:29.450163   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.454966   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:15:29.455025   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:15:29.496445   62905 cri.go:89] found id: "06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:29.496465   62905 cri.go:89] found id: ""
	I0704 00:15:29.496471   62905 logs.go:276] 1 containers: [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8]
	I0704 00:15:29.496515   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.501125   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:15:29.501198   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:15:29.543841   62905 cri.go:89] found id: "54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:29.543864   62905 cri.go:89] found id: ""
	I0704 00:15:29.543884   62905 logs.go:276] 1 containers: [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d]
	I0704 00:15:29.543940   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.548613   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:15:29.548673   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:15:29.588709   62905 cri.go:89] found id: "13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:29.588729   62905 cri.go:89] found id: ""
	I0704 00:15:29.588735   62905 logs.go:276] 1 containers: [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e]
	I0704 00:15:29.588780   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.593039   62905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:15:29.593098   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:15:29.631751   62905 cri.go:89] found id: ""
	I0704 00:15:29.631775   62905 logs.go:276] 0 containers: []
	W0704 00:15:29.631782   62905 logs.go:278] No container was found matching "kindnet"
	I0704 00:15:29.631787   62905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:15:29.631841   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:15:29.674894   62905 cri.go:89] found id: "916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:29.674918   62905 cri.go:89] found id: "ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:29.674922   62905 cri.go:89] found id: ""
	I0704 00:15:29.674929   62905 logs.go:276] 2 containers: [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2]
	I0704 00:15:29.674983   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.679600   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.683770   62905 logs.go:123] Gathering logs for kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] ...
	I0704 00:15:29.683788   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:29.731148   62905 logs.go:123] Gathering logs for etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] ...
	I0704 00:15:29.731182   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:29.772172   62905 logs.go:123] Gathering logs for storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] ...
	I0704 00:15:29.772204   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:29.816299   62905 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:15:29.816332   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:15:30.222578   62905 logs.go:123] Gathering logs for kubelet ...
	I0704 00:15:30.222622   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:15:30.284120   62905 logs.go:123] Gathering logs for dmesg ...
	I0704 00:15:30.284169   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:15:30.300219   62905 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:15:30.300260   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:15:30.423779   62905 logs.go:123] Gathering logs for kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] ...
	I0704 00:15:30.423851   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:30.480952   62905 logs.go:123] Gathering logs for storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] ...
	I0704 00:15:30.480993   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:30.526318   62905 logs.go:123] Gathering logs for container status ...
	I0704 00:15:30.526352   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:15:30.574984   62905 logs.go:123] Gathering logs for coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] ...
	I0704 00:15:30.575012   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:30.618244   62905 logs.go:123] Gathering logs for kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] ...
	I0704 00:15:30.618275   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:30.657625   62905 logs.go:123] Gathering logs for kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] ...
	I0704 00:15:30.657649   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:33.184160   62670 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0704 00:15:33.184894   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:33.185105   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
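
The repeated [kubelet-check] failures above come from kubeadm polling the kubelet's local healthz endpoint on port 10248 and getting "connection refused". A small stand-alone probe of the same endpoint is sketched below; the URL matches the log, while the timeout and retry cadence are illustrative only, not kubeadm's.

// kubeletprobe.go: sketch of the kubelet healthz check behind the
// [kubelet-check] messages above. Port 10248 and the /healthz path come from
// the log; the retry loop here is illustrative.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	for i := 0; i < 10; i++ {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// The failing state in the log: dial tcp 127.0.0.1:10248: connect: connection refused.
			fmt.Println("kubelet not healthy yet:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("kubelet healthz returned %d: %s\n", resp.StatusCode, body)
		return
	}
	fmt.Println("timed out waiting for the kubelet healthz endpoint")
}
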
	I0704 00:15:31.162060   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:33.162393   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:35.164111   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:33.197007   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:15:33.201786   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 200:
	ok
	I0704 00:15:33.202719   62905 api_server.go:141] control plane version: v1.30.2
	I0704 00:15:33.202738   62905 api_server.go:131] duration metric: took 3.889668496s to wait for apiserver health ...
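
The healthz wait that just completed ("https://192.168.50.164:8444/healthz returned 200: ok") is a plain HTTPS GET against the apiserver. A short sketch of that probe follows; the address comes from the log, but minikube's real check trusts the cluster CA, whereas this sketch skips TLS verification purely to stay compact.

// apihealthz.go: sketch of the apiserver healthz probe logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo only
	}
	resp, err := client.Get("https://192.168.50.164:8444/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
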
	I0704 00:15:33.202745   62905 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:15:33.202772   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:15:33.202825   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:15:33.246224   62905 cri.go:89] found id: "f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:33.246259   62905 cri.go:89] found id: ""
	I0704 00:15:33.246272   62905 logs.go:276] 1 containers: [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658]
	I0704 00:15:33.246343   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.256081   62905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:15:33.256160   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:15:33.296808   62905 cri.go:89] found id: "5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:33.296835   62905 cri.go:89] found id: ""
	I0704 00:15:33.296845   62905 logs.go:276] 1 containers: [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9]
	I0704 00:15:33.296902   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.301658   62905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:15:33.301729   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:15:33.353348   62905 cri.go:89] found id: "7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:33.353370   62905 cri.go:89] found id: ""
	I0704 00:15:33.353377   62905 logs.go:276] 1 containers: [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a]
	I0704 00:15:33.353428   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.358334   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:15:33.358413   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:15:33.402593   62905 cri.go:89] found id: "06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:33.402621   62905 cri.go:89] found id: ""
	I0704 00:15:33.402630   62905 logs.go:276] 1 containers: [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8]
	I0704 00:15:33.402696   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.407413   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:15:33.407482   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:15:33.461567   62905 cri.go:89] found id: "54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:33.461591   62905 cri.go:89] found id: ""
	I0704 00:15:33.461599   62905 logs.go:276] 1 containers: [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d]
	I0704 00:15:33.461663   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.467039   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:15:33.467115   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:15:33.510115   62905 cri.go:89] found id: "13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:33.510146   62905 cri.go:89] found id: ""
	I0704 00:15:33.510155   62905 logs.go:276] 1 containers: [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e]
	I0704 00:15:33.510215   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.515217   62905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:15:33.515281   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:15:33.554690   62905 cri.go:89] found id: ""
	I0704 00:15:33.554719   62905 logs.go:276] 0 containers: []
	W0704 00:15:33.554729   62905 logs.go:278] No container was found matching "kindnet"
	I0704 00:15:33.554737   62905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:15:33.554790   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:15:33.601911   62905 cri.go:89] found id: "916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:33.601937   62905 cri.go:89] found id: "ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:33.601944   62905 cri.go:89] found id: ""
	I0704 00:15:33.601952   62905 logs.go:276] 2 containers: [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2]
	I0704 00:15:33.602016   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.606884   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.611328   62905 logs.go:123] Gathering logs for etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] ...
	I0704 00:15:33.611356   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:33.657445   62905 logs.go:123] Gathering logs for coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] ...
	I0704 00:15:33.657484   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:33.698153   62905 logs.go:123] Gathering logs for kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] ...
	I0704 00:15:33.698185   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:33.740393   62905 logs.go:123] Gathering logs for kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] ...
	I0704 00:15:33.740425   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:33.781017   62905 logs.go:123] Gathering logs for kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] ...
	I0704 00:15:33.781048   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:33.844822   62905 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:15:33.844857   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:15:33.966652   62905 logs.go:123] Gathering logs for kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] ...
	I0704 00:15:33.966689   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:34.022085   62905 logs.go:123] Gathering logs for storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] ...
	I0704 00:15:34.022123   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:34.063492   62905 logs.go:123] Gathering logs for storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] ...
	I0704 00:15:34.063515   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:34.102349   62905 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:15:34.102379   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:15:34.472244   62905 logs.go:123] Gathering logs for container status ...
	I0704 00:15:34.472282   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:15:34.525394   62905 logs.go:123] Gathering logs for kubelet ...
	I0704 00:15:34.525427   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:15:34.581994   62905 logs.go:123] Gathering logs for dmesg ...
	I0704 00:15:34.582040   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:15:37.108663   62905 system_pods.go:59] 8 kube-system pods found
	I0704 00:15:37.108698   62905 system_pods.go:61] "coredns-7db6d8ff4d-jmq4s" [f9725f92-7635-4111-bf63-66dbef0155b2] Running
	I0704 00:15:37.108705   62905 system_pods.go:61] "etcd-default-k8s-diff-port-995404" [d5065c53-cda8-4c79-9d88-de10341356a8] Running
	I0704 00:15:37.108710   62905 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-995404" [81395c0d-d19c-4d3d-8935-c35a9507abdd] Running
	I0704 00:15:37.108716   62905 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-995404" [36828d21-843b-409c-a0b3-60293cb50c27] Running
	I0704 00:15:37.108723   62905 system_pods.go:61] "kube-proxy-pplqq" [3b74a8c2-1e91-449d-9be9-8891459dccbc] Running
	I0704 00:15:37.108728   62905 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-995404" [e87cc576-7a8d-43dd-9778-b13d751976be] Running
	I0704 00:15:37.108734   62905 system_pods.go:61] "metrics-server-569cc877fc-v8qw2" [d6a67fb7-5004-4c93-9023-fc470f786ae9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:15:37.108739   62905 system_pods.go:61] "storage-provisioner" [3adc3ff6-282f-4f53-879f-c73d71c76b74] Running
	I0704 00:15:37.108746   62905 system_pods.go:74] duration metric: took 3.905995932s to wait for pod list to return data ...
	I0704 00:15:37.108756   62905 default_sa.go:34] waiting for default service account to be created ...
	I0704 00:15:37.112853   62905 default_sa.go:45] found service account: "default"
	I0704 00:15:37.112885   62905 default_sa.go:55] duration metric: took 4.115587ms for default service account to be created ...
	I0704 00:15:37.112897   62905 system_pods.go:116] waiting for k8s-apps to be running ...
	I0704 00:15:37.119709   62905 system_pods.go:86] 8 kube-system pods found
	I0704 00:15:37.119743   62905 system_pods.go:89] "coredns-7db6d8ff4d-jmq4s" [f9725f92-7635-4111-bf63-66dbef0155b2] Running
	I0704 00:15:37.119749   62905 system_pods.go:89] "etcd-default-k8s-diff-port-995404" [d5065c53-cda8-4c79-9d88-de10341356a8] Running
	I0704 00:15:37.119754   62905 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-995404" [81395c0d-d19c-4d3d-8935-c35a9507abdd] Running
	I0704 00:15:37.119759   62905 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-995404" [36828d21-843b-409c-a0b3-60293cb50c27] Running
	I0704 00:15:37.119765   62905 system_pods.go:89] "kube-proxy-pplqq" [3b74a8c2-1e91-449d-9be9-8891459dccbc] Running
	I0704 00:15:37.119769   62905 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-995404" [e87cc576-7a8d-43dd-9778-b13d751976be] Running
	I0704 00:15:37.119776   62905 system_pods.go:89] "metrics-server-569cc877fc-v8qw2" [d6a67fb7-5004-4c93-9023-fc470f786ae9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:15:37.119782   62905 system_pods.go:89] "storage-provisioner" [3adc3ff6-282f-4f53-879f-c73d71c76b74] Running
	I0704 00:15:37.119791   62905 system_pods.go:126] duration metric: took 6.888276ms to wait for k8s-apps to be running ...
	I0704 00:15:37.119798   62905 system_svc.go:44] waiting for kubelet service to be running ....
	I0704 00:15:37.119855   62905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:15:37.138387   62905 system_svc.go:56] duration metric: took 18.578212ms WaitForService to wait for kubelet
	I0704 00:15:37.138430   62905 kubeadm.go:576] duration metric: took 4m21.623631424s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:15:37.138450   62905 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:15:37.141610   62905 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:15:37.141632   62905 node_conditions.go:123] node cpu capacity is 2
	I0704 00:15:37.141642   62905 node_conditions.go:105] duration metric: took 3.187777ms to run NodePressure ...
	I0704 00:15:37.141654   62905 start.go:240] waiting for startup goroutines ...
	I0704 00:15:37.141662   62905 start.go:245] waiting for cluster config update ...
	I0704 00:15:37.141675   62905 start.go:254] writing updated cluster config ...
	I0704 00:15:37.141954   62905 ssh_runner.go:195] Run: rm -f paused
	I0704 00:15:37.193685   62905 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0704 00:15:37.196118   62905 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-995404" cluster and "default" namespace by default
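
Most of this section is pod_ready.go repeatedly reporting the metrics-server pod's "Ready" condition as False until the 4m0s budget expires. A compact client-go sketch of that single-pod readiness check is shown below; the kubeconfig path is a placeholder and the pod name is copied from the log, while the real harness resolves both from the minikube profile.

// podready.go: sketch of the per-pod Ready check that pod_ready.go logs above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
		"metrics-server-569cc877fc-v8qw2", metav1.GetOptions{}) // pod name from the log
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q has status \"Ready\":%v\n", pod.Name, isPodReady(pod))
}
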
	I0704 00:15:38.185821   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:38.186070   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:15:37.662971   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:40.161724   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:42.162761   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:44.661578   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:48.186610   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:48.186866   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:15:46.661793   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:48.662395   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:51.161671   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:53.161831   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:55.162342   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:57.162917   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:58.655566   62043 pod_ready.go:81] duration metric: took 4m0.000513164s for pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace to be "Ready" ...
	E0704 00:15:58.655607   62043 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0704 00:15:58.655629   62043 pod_ready.go:38] duration metric: took 4m12.325655973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:15:58.655653   62043 kubeadm.go:591] duration metric: took 4m19.340193897s to restartPrimaryControlPlane
	W0704 00:15:58.655707   62043 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0704 00:15:58.655731   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:16:08.187652   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:16:08.187954   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:16:30.729510   62043 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.073753748s)
	I0704 00:16:30.729594   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:16:30.747332   62043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:16:30.758903   62043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:16:30.769754   62043 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:16:30.769782   62043 kubeadm.go:156] found existing configuration files:
	
	I0704 00:16:30.769834   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:16:30.783216   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:16:30.783292   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:16:30.794254   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:16:30.804395   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:16:30.804456   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:16:30.816148   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:16:30.826591   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:16:30.826658   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:16:30.837473   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:16:30.847334   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:16:30.847423   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:16:30.859291   62043 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:16:31.068598   62043 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:16:39.927189   62043 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0704 00:16:39.927297   62043 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:16:39.927381   62043 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:16:39.927496   62043 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:16:39.927641   62043 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:16:39.927747   62043 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:16:39.929258   62043 out.go:204]   - Generating certificates and keys ...
	I0704 00:16:39.929332   62043 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:16:39.929422   62043 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:16:39.929546   62043 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:16:39.929631   62043 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:16:39.929715   62043 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:16:39.929781   62043 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:16:39.929883   62043 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:16:39.929983   62043 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:16:39.930088   62043 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:16:39.930191   62043 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:16:39.930258   62043 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:16:39.930346   62043 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:16:39.930428   62043 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:16:39.930521   62043 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0704 00:16:39.930604   62043 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:16:39.930691   62043 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:16:39.930784   62043 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:16:39.930889   62043 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:16:39.930980   62043 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:16:39.933368   62043 out.go:204]   - Booting up control plane ...
	I0704 00:16:39.933482   62043 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:16:39.933577   62043 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:16:39.933657   62043 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:16:39.933769   62043 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:16:39.933857   62043 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:16:39.933920   62043 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:16:39.934046   62043 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0704 00:16:39.934156   62043 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0704 00:16:39.934219   62043 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.004952327s
	I0704 00:16:39.934310   62043 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0704 00:16:39.934393   62043 kubeadm.go:309] [api-check] The API server is healthy after 5.002935516s
	I0704 00:16:39.934509   62043 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0704 00:16:39.934646   62043 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0704 00:16:39.934725   62043 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0704 00:16:39.934894   62043 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-317739 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0704 00:16:39.934979   62043 kubeadm.go:309] [bootstrap-token] Using token: 6e60zb.ppocm8st59m5ngyp
	I0704 00:16:39.936353   62043 out.go:204]   - Configuring RBAC rules ...
	I0704 00:16:39.936457   62043 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0704 00:16:39.936546   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0704 00:16:39.936726   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0704 00:16:39.936866   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0704 00:16:39.936999   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0704 00:16:39.937127   62043 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0704 00:16:39.937268   62043 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0704 00:16:39.937339   62043 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0704 00:16:39.937398   62043 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0704 00:16:39.937407   62043 kubeadm.go:309] 
	I0704 00:16:39.937486   62043 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0704 00:16:39.937500   62043 kubeadm.go:309] 
	I0704 00:16:39.937589   62043 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0704 00:16:39.937598   62043 kubeadm.go:309] 
	I0704 00:16:39.937628   62043 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0704 00:16:39.937704   62043 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0704 00:16:39.937770   62043 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0704 00:16:39.937779   62043 kubeadm.go:309] 
	I0704 00:16:39.937870   62043 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0704 00:16:39.937884   62043 kubeadm.go:309] 
	I0704 00:16:39.937953   62043 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0704 00:16:39.937966   62043 kubeadm.go:309] 
	I0704 00:16:39.938045   62043 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0704 00:16:39.938151   62043 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0704 00:16:39.938248   62043 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0704 00:16:39.938257   62043 kubeadm.go:309] 
	I0704 00:16:39.938373   62043 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0704 00:16:39.938469   62043 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0704 00:16:39.938483   62043 kubeadm.go:309] 
	I0704 00:16:39.938602   62043 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6e60zb.ppocm8st59m5ngyp \
	I0704 00:16:39.938721   62043 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 \
	I0704 00:16:39.938740   62043 kubeadm.go:309] 	--control-plane 
	I0704 00:16:39.938746   62043 kubeadm.go:309] 
	I0704 00:16:39.938820   62043 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0704 00:16:39.938829   62043 kubeadm.go:309] 
	I0704 00:16:39.938898   62043 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6e60zb.ppocm8st59m5ngyp \
	I0704 00:16:39.939042   62043 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 
	I0704 00:16:39.939066   62043 cni.go:84] Creating CNI manager for ""
	I0704 00:16:39.939074   62043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:16:39.940769   62043 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:16:39.941987   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:16:39.956586   62043 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
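
The bridge CNI step above only shows minikube copying a 496-byte /etc/cni/net.d/1-k8s.conflist onto the node; the file's contents are not in the log. The sketch below writes a generic bridge-plugin conflist in the standard CNI format so the shape of such a file is visible; the JSON (name, subnet, plugin list) is illustrative, not minikube's actual configuration.

// cniconf.go: writes an example bridge CNI conflist. Contents are a generic
// bridge + host-local + portmap example, not the real 1-k8s.conflist.
package main

import (
	"fmt"
	"os"
)

const exampleConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Write to the current directory rather than /etc/cni/net.d so the sketch
	// runs without root; adjust the path for a real node.
	if err := os.WriteFile("1-k8s.conflist", []byte(exampleConflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("wrote example 1-k8s.conflist")
}
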
	I0704 00:16:39.980480   62043 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:16:39.980534   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:39.980553   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-317739 minikube.k8s.io/updated_at=2024_07_04T00_16_39_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e minikube.k8s.io/name=no-preload-317739 minikube.k8s.io/primary=true
	I0704 00:16:40.010512   62043 ops.go:34] apiserver oom_adj: -16
	I0704 00:16:40.194381   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:40.695349   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:41.195310   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:41.695082   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:42.194751   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:42.694568   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:43.195382   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:43.695072   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:44.195353   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:44.695020   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:45.195396   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:45.695273   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:48.189618   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:16:48.189879   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:16:48.189893   62670 kubeadm.go:309] 
	I0704 00:16:48.189956   62670 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0704 00:16:48.190000   62670 kubeadm.go:309] 		timed out waiting for the condition
	I0704 00:16:48.190006   62670 kubeadm.go:309] 
	I0704 00:16:48.190074   62670 kubeadm.go:309] 	This error is likely caused by:
	I0704 00:16:48.190142   62670 kubeadm.go:309] 		- The kubelet is not running
	I0704 00:16:48.190322   62670 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0704 00:16:48.190356   62670 kubeadm.go:309] 
	I0704 00:16:48.190487   62670 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0704 00:16:48.190540   62670 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0704 00:16:48.190594   62670 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0704 00:16:48.190603   62670 kubeadm.go:309] 
	I0704 00:16:48.190729   62670 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0704 00:16:48.190826   62670 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0704 00:16:48.190837   62670 kubeadm.go:309] 
	I0704 00:16:48.190930   62670 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0704 00:16:48.191004   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0704 00:16:48.191088   62670 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0704 00:16:48.191183   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0704 00:16:48.191195   62670 kubeadm.go:309] 
	I0704 00:16:48.192106   62670 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:16:48.192225   62670 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0704 00:16:48.192330   62670 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0704 00:16:48.192450   62670 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
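Note: the kubelet health probe that the [kubelet-check] lines above keep repeating is a plain HTTP GET against the kubelet's healthz endpoint on localhost:10248. A minimal manual check on the affected node, following the commands quoted in the error text (the 10-second curl timeout is an illustrative assumption, not something kubeadm specifies):

    # probe the kubelet healthz endpoint that kubeadm's [kubelet-check] polls
    curl -sSL --max-time 10 http://localhost:10248/healthz || echo "kubelet healthz unreachable"

    # inspect kubelet service state and recent logs, as suggested in the error output
    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 100

    # list any control-plane containers started by CRI-O (none are expected if the kubelet never came up)
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause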
	
	I0704 00:16:48.192496   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:16:48.668935   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:16:48.685425   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:16:48.697089   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:16:48.697111   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:16:48.697182   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:16:48.708605   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:16:48.708681   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:16:48.720739   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:16:48.733032   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:16:48.733106   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:16:48.745632   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:16:48.756211   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:16:48.756285   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:16:48.768006   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:16:48.779384   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:16:48.779455   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
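Note: the cleanup above follows a simple rule: each generated kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; otherwise it is removed before kubeadm init is retried. A rough shell equivalent of that check (the file list is taken from the log; the loop itself is only a sketch of minikube's Go logic in kubeadm.go, not how it is actually executed):

    # remove stale kubeconfigs that do not reference the expected control-plane endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done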
	I0704 00:16:48.791913   62670 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:16:48.873701   62670 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0704 00:16:48.873789   62670 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:16:49.029961   62670 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:16:49.030077   62670 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:16:49.030191   62670 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:16:49.228954   62670 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:16:49.231477   62670 out.go:204]   - Generating certificates and keys ...
	I0704 00:16:49.231594   62670 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:16:49.231678   62670 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:16:49.231783   62670 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:16:49.231855   62670 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:16:49.231990   62670 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:16:49.232082   62670 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:16:49.232167   62670 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:16:49.232930   62670 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:16:49.234476   62670 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:16:49.235558   62670 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:16:49.235951   62670 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:16:49.236048   62670 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:16:49.418256   62670 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:16:49.476591   62670 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:16:49.586596   62670 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:16:49.856731   62670 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:16:49.878852   62670 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:16:49.885877   62670 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:16:49.885948   62670 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:16:50.048252   62670 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:16:46.194714   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:46.695192   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:47.195476   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:47.694768   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:48.194497   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:48.695370   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:49.194707   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:49.695417   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:50.194404   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:50.694941   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:50.050273   62670 out.go:204]   - Booting up control plane ...
	I0704 00:16:50.050428   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:16:50.055514   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:16:50.056609   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:16:50.057448   62670 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:16:50.060021   62670 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0704 00:16:51.194471   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:51.695481   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:52.194406   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:52.695193   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:53.194613   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:53.695053   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:53.812778   62043 kubeadm.go:1107] duration metric: took 13.832294794s to wait for elevateKubeSystemPrivileges
	W0704 00:16:53.812817   62043 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0704 00:16:53.812828   62043 kubeadm.go:393] duration metric: took 5m14.556024253s to StartCluster
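Note: the repeated "kubectl get sa default" calls from process 62043 above are a poll — minikube re-runs the command roughly every 500ms until the default service account exists, which is what the 13.8s elevateKubeSystemPrivileges metric measures. A stand-alone approximation of that wait (the 120-attempt bound is an assumption added for illustration):

    # wait for the default service account, mirroring the log's repeated "kubectl get sa default" calls
    for i in $(seq 1 120); do
      if sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default \
           --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; then
        echo "default service account present"; break
      fi
      sleep 0.5
    done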
	I0704 00:16:53.812849   62043 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:16:53.812944   62043 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:16:53.815420   62043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:16:53.815750   62043 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.109 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:16:53.815862   62043 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:16:53.815956   62043 addons.go:69] Setting storage-provisioner=true in profile "no-preload-317739"
	I0704 00:16:53.815987   62043 addons.go:234] Setting addon storage-provisioner=true in "no-preload-317739"
	I0704 00:16:53.815990   62043 config.go:182] Loaded profile config "no-preload-317739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	W0704 00:16:53.815998   62043 addons.go:243] addon storage-provisioner should already be in state true
	I0704 00:16:53.816029   62043 host.go:66] Checking if "no-preload-317739" exists ...
	I0704 00:16:53.816023   62043 addons.go:69] Setting default-storageclass=true in profile "no-preload-317739"
	I0704 00:16:53.816052   62043 addons.go:69] Setting metrics-server=true in profile "no-preload-317739"
	I0704 00:16:53.816063   62043 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-317739"
	I0704 00:16:53.816091   62043 addons.go:234] Setting addon metrics-server=true in "no-preload-317739"
	W0704 00:16:53.816104   62043 addons.go:243] addon metrics-server should already be in state true
	I0704 00:16:53.816139   62043 host.go:66] Checking if "no-preload-317739" exists ...
	I0704 00:16:53.816491   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.816512   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.816491   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.816561   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.816590   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.816605   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.817558   62043 out.go:177] * Verifying Kubernetes components...
	I0704 00:16:53.818908   62043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:16:53.836028   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I0704 00:16:53.836591   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.837131   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.837162   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.837199   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37289
	I0704 00:16:53.837270   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39901
	I0704 00:16:53.837613   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.837621   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.837980   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.838004   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.838066   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.838265   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.838302   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.838330   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.838533   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.838555   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.838612   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.838911   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.839349   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.839374   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.842221   62043 addons.go:234] Setting addon default-storageclass=true in "no-preload-317739"
	W0704 00:16:53.842240   62043 addons.go:243] addon default-storageclass should already be in state true
	I0704 00:16:53.842267   62043 host.go:66] Checking if "no-preload-317739" exists ...
	I0704 00:16:53.842587   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.842606   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.854293   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36813
	I0704 00:16:53.855044   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.855658   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.855675   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.856226   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.856425   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.858286   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42143
	I0704 00:16:53.858484   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:16:53.858667   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.859270   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.859293   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.859815   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.860358   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.860380   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.860383   62043 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0704 00:16:53.861890   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0704 00:16:53.861914   62043 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0704 00:16:53.861937   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:16:53.864121   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36239
	I0704 00:16:53.864570   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.865343   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.865366   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.865859   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.866064   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.866282   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.866379   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:16:53.866407   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.866572   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:16:53.866780   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:16:53.866996   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:16:53.867166   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:16:53.868067   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:16:53.869898   62043 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:16:53.871321   62043 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:16:53.871339   62043 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:16:53.871355   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:16:53.874930   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.875361   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:16:53.875392   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.875623   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:16:53.875841   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:16:53.876024   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:16:53.876184   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:16:53.880965   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45769
	I0704 00:16:53.881655   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.882115   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.882130   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.882471   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.882659   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.884596   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:16:53.884855   62043 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:16:53.884866   62043 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:16:53.884879   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:16:53.887764   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.888336   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:16:53.888371   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.888411   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:16:53.888619   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:16:53.888749   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:16:53.888849   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:16:54.097387   62043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:16:54.122578   62043 node_ready.go:35] waiting up to 6m0s for node "no-preload-317739" to be "Ready" ...
	I0704 00:16:54.136010   62043 node_ready.go:49] node "no-preload-317739" has status "Ready":"True"
	I0704 00:16:54.136036   62043 node_ready.go:38] duration metric: took 13.422954ms for node "no-preload-317739" to be "Ready" ...
	I0704 00:16:54.136048   62043 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:16:54.141532   62043 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:54.200381   62043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:16:54.234350   62043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:16:54.284641   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0704 00:16:54.284664   62043 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0704 00:16:54.346056   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0704 00:16:54.346081   62043 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0704 00:16:54.424564   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:16:54.424593   62043 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0704 00:16:54.496088   62043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
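Note: each addon is installed the same way in the log — the manifest is copied to /etc/kubernetes/addons over SSH, then applied on the node with the bundled kubectl and the in-VM kubeconfig. Reproducing the metrics-server apply by hand from inside the VM would look like this (paths and flags copied from the ssh_runner command above):

    # apply the metrics-server manifests exactly as the logged ssh_runner command does
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.30.2/kubectl apply \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml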
	I0704 00:16:54.977271   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977304   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977308   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977327   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977603   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:54.977640   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977640   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977647   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:54.977654   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:54.977657   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977663   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977665   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977710   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:54.977756   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977935   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977947   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:54.977959   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:54.977991   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977999   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.037104   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:55.037130   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:55.037591   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:55.037626   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:55.037639   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.331464   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:55.331492   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:55.331859   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:55.331895   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.331903   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:55.331911   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:55.331926   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:55.332178   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:55.332245   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:55.332262   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.332280   62043 addons.go:475] Verifying addon metrics-server=true in "no-preload-317739"
	I0704 00:16:55.334057   62043 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0704 00:16:55.335756   62043 addons.go:510] duration metric: took 1.519891021s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0704 00:16:56.152756   62043 pod_ready.go:102] pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace has status "Ready":"False"
	I0704 00:16:56.650840   62043 pod_ready.go:92] pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.650866   62043 pod_ready.go:81] duration metric: took 2.509305019s for pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.650876   62043 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qnrtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.656253   62043 pod_ready.go:92] pod "coredns-7db6d8ff4d-qnrtm" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.656276   62043 pod_ready.go:81] duration metric: took 5.391742ms for pod "coredns-7db6d8ff4d-qnrtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.656285   62043 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.661076   62043 pod_ready.go:92] pod "etcd-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.661097   62043 pod_ready.go:81] duration metric: took 4.806155ms for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.661105   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.666895   62043 pod_ready.go:92] pod "kube-apiserver-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.666923   62043 pod_ready.go:81] duration metric: took 5.809974ms for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.666936   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.671252   62043 pod_ready.go:92] pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.671277   62043 pod_ready.go:81] duration metric: took 4.332286ms for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.671289   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xxfrd" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.046037   62043 pod_ready.go:92] pod "kube-proxy-xxfrd" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:57.046062   62043 pod_ready.go:81] duration metric: took 374.766496ms for pod "kube-proxy-xxfrd" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.046072   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.446038   62043 pod_ready.go:92] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:57.446063   62043 pod_ready.go:81] duration metric: took 399.983632ms for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.446071   62043 pod_ready.go:38] duration metric: took 3.310013568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
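Note: the readiness phase above waits first for the node object to report Ready, then for each system-critical pod (CoreDNS, etcd, apiserver, controller-manager, kube-proxy, scheduler) to reach the Ready condition. A rough equivalent with plain kubectl (a sketch that assumes the minikube-generated kubectl context no-preload-317739; the label selectors are the ones listed in the pod_ready wait above):

    # node readiness, equivalent to node_ready.go's wait
    kubectl --context no-preload-317739 wait --for=condition=Ready node/no-preload-317739 --timeout=6m

    # system-critical pods, equivalent to pod_ready.go's per-pod waits
    kubectl --context no-preload-317739 -n kube-system wait --for=condition=Ready pod \
      -l k8s-app=kube-dns --timeout=6m
    kubectl --context no-preload-317739 -n kube-system wait --for=condition=Ready pod \
      -l component=kube-apiserver --timeout=6m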
	I0704 00:16:57.446085   62043 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:16:57.446131   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:16:57.461033   62043 api_server.go:72] duration metric: took 3.645241569s to wait for apiserver process to appear ...
	I0704 00:16:57.461057   62043 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:16:57.461075   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:16:57.465509   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 200:
	ok
	I0704 00:16:57.466733   62043 api_server.go:141] control plane version: v1.30.2
	I0704 00:16:57.466755   62043 api_server.go:131] duration metric: took 5.690997ms to wait for apiserver health ...
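Note: the healthz probe above is a direct HTTPS GET against the apiserver on the node IP; a 200 response with body "ok" is treated as healthy, and the reported control plane version comes from the version endpoint. A hedged manual equivalent (shown with -k to skip TLS verification for brevity; the test itself uses the profile's CA and client certificates):

    # apiserver health, as checked by api_server.go
    curl -k https://192.168.61.109:8443/healthz
    # expected output: ok

    # control plane version, matching the "control plane version: v1.30.2" line
    curl -k https://192.168.61.109:8443/version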
	I0704 00:16:57.466764   62043 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:16:57.651488   62043 system_pods.go:59] 9 kube-system pods found
	I0704 00:16:57.651514   62043 system_pods.go:61] "coredns-7db6d8ff4d-cxq59" [a7d3a64b-8d7d-455c-990c-0e496f8cf461] Running
	I0704 00:16:57.651519   62043 system_pods.go:61] "coredns-7db6d8ff4d-qnrtm" [3ab51a52-571c-4533-86d2-7293368ac2ee] Running
	I0704 00:16:57.651522   62043 system_pods.go:61] "etcd-no-preload-317739" [d1cdc7d3-b6b9-4fbf-a9a4-94309df12aec] Running
	I0704 00:16:57.651525   62043 system_pods.go:61] "kube-apiserver-no-preload-317739" [6caeae1f-258b-4d8f-8922-73eb94eb92cb] Running
	I0704 00:16:57.651528   62043 system_pods.go:61] "kube-controller-manager-no-preload-317739" [28b1a875-8c1d-41a3-8b0b-6e1b7a584a03] Running
	I0704 00:16:57.651531   62043 system_pods.go:61] "kube-proxy-xxfrd" [29b1b3ed-9c18-4fae-bf43-5da22cf90f6b] Running
	I0704 00:16:57.651533   62043 system_pods.go:61] "kube-scheduler-no-preload-317739" [bdb0442c-9e42-4e09-93cc-0e8dc067eaff] Running
	I0704 00:16:57.651541   62043 system_pods.go:61] "metrics-server-569cc877fc-t28ff" [942f97bf-57cf-46fe-9a10-4a4171357239] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:16:57.651549   62043 system_pods.go:61] "storage-provisioner" [d2ab9324-5df0-4232-aef4-be29bfc4c082] Running
	I0704 00:16:57.651559   62043 system_pods.go:74] duration metric: took 184.788668ms to wait for pod list to return data ...
	I0704 00:16:57.651573   62043 default_sa.go:34] waiting for default service account to be created ...
	I0704 00:16:57.845632   62043 default_sa.go:45] found service account: "default"
	I0704 00:16:57.845665   62043 default_sa.go:55] duration metric: took 194.081328ms for default service account to be created ...
	I0704 00:16:57.845678   62043 system_pods.go:116] waiting for k8s-apps to be running ...
	I0704 00:16:58.050844   62043 system_pods.go:86] 9 kube-system pods found
	I0704 00:16:58.050873   62043 system_pods.go:89] "coredns-7db6d8ff4d-cxq59" [a7d3a64b-8d7d-455c-990c-0e496f8cf461] Running
	I0704 00:16:58.050878   62043 system_pods.go:89] "coredns-7db6d8ff4d-qnrtm" [3ab51a52-571c-4533-86d2-7293368ac2ee] Running
	I0704 00:16:58.050882   62043 system_pods.go:89] "etcd-no-preload-317739" [d1cdc7d3-b6b9-4fbf-a9a4-94309df12aec] Running
	I0704 00:16:58.050887   62043 system_pods.go:89] "kube-apiserver-no-preload-317739" [6caeae1f-258b-4d8f-8922-73eb94eb92cb] Running
	I0704 00:16:58.050891   62043 system_pods.go:89] "kube-controller-manager-no-preload-317739" [28b1a875-8c1d-41a3-8b0b-6e1b7a584a03] Running
	I0704 00:16:58.050896   62043 system_pods.go:89] "kube-proxy-xxfrd" [29b1b3ed-9c18-4fae-bf43-5da22cf90f6b] Running
	I0704 00:16:58.050900   62043 system_pods.go:89] "kube-scheduler-no-preload-317739" [bdb0442c-9e42-4e09-93cc-0e8dc067eaff] Running
	I0704 00:16:58.050906   62043 system_pods.go:89] "metrics-server-569cc877fc-t28ff" [942f97bf-57cf-46fe-9a10-4a4171357239] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:16:58.050911   62043 system_pods.go:89] "storage-provisioner" [d2ab9324-5df0-4232-aef4-be29bfc4c082] Running
	I0704 00:16:58.050918   62043 system_pods.go:126] duration metric: took 205.235998ms to wait for k8s-apps to be running ...
	I0704 00:16:58.050925   62043 system_svc.go:44] waiting for kubelet service to be running ....
	I0704 00:16:58.050969   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:16:58.066005   62043 system_svc.go:56] duration metric: took 15.072089ms WaitForService to wait for kubelet
	I0704 00:16:58.066036   62043 kubeadm.go:576] duration metric: took 4.250246725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:16:58.066060   62043 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:16:58.245974   62043 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:16:58.245998   62043 node_conditions.go:123] node cpu capacity is 2
	I0704 00:16:58.246009   62043 node_conditions.go:105] duration metric: took 179.943846ms to run NodePressure ...
	I0704 00:16:58.246020   62043 start.go:240] waiting for startup goroutines ...
	I0704 00:16:58.246026   62043 start.go:245] waiting for cluster config update ...
	I0704 00:16:58.246036   62043 start.go:254] writing updated cluster config ...
	I0704 00:16:58.246307   62043 ssh_runner.go:195] Run: rm -f paused
	I0704 00:16:58.298998   62043 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0704 00:16:58.301199   62043 out.go:177] * Done! kubectl is now configured to use "no-preload-317739" cluster and "default" namespace by default
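Note: after the "Done!" message the profile's kubeconfig context is usable directly from the test host. A quick smoke test (a sketch, assuming the minikube convention that the context name matches the profile name no-preload-317739):

    kubectl --context no-preload-317739 get nodes
    kubectl --context no-preload-317739 -n kube-system get pods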
	I0704 00:17:30.062515   62670 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0704 00:17:30.062908   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:30.063105   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:17:35.063408   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:35.063668   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:17:45.064118   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:45.064391   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:05.065047   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:18:05.065263   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:45.064458   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:18:45.064676   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:45.064703   62670 kubeadm.go:309] 
	I0704 00:18:45.064756   62670 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0704 00:18:45.064825   62670 kubeadm.go:309] 		timed out waiting for the condition
	I0704 00:18:45.064842   62670 kubeadm.go:309] 
	I0704 00:18:45.064918   62670 kubeadm.go:309] 	This error is likely caused by:
	I0704 00:18:45.064954   62670 kubeadm.go:309] 		- The kubelet is not running
	I0704 00:18:45.065086   62670 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0704 00:18:45.065110   62670 kubeadm.go:309] 
	I0704 00:18:45.065271   62670 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0704 00:18:45.065326   62670 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0704 00:18:45.065392   62670 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0704 00:18:45.065401   62670 kubeadm.go:309] 
	I0704 00:18:45.065550   62670 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0704 00:18:45.065631   62670 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0704 00:18:45.065638   62670 kubeadm.go:309] 
	I0704 00:18:45.065734   62670 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0704 00:18:45.065807   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0704 00:18:45.065871   62670 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0704 00:18:45.065939   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0704 00:18:45.065947   62670 kubeadm.go:309] 
	I0704 00:18:45.066520   62670 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:18:45.066601   62670 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0704 00:18:45.066689   62670 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0704 00:18:45.066780   62670 kubeadm.go:393] duration metric: took 7m58.506286251s to StartCluster
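Note: the error text suggests re-running the failing phase with higher verbosity. A sketch of that retry on the node (the long --ignore-preflight-errors list from the logged command is omitted here for brevity and would likely be needed again, since /etc/kubernetes/manifests and the etcd data dir are already populated):

    # re-run kubeadm init with more verbosity, as the error message suggests
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --v=5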
	I0704 00:18:45.066839   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:18:45.066927   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:18:45.120297   62670 cri.go:89] found id: ""
	I0704 00:18:45.120326   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.120334   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:18:45.120339   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:18:45.120402   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:18:45.158038   62670 cri.go:89] found id: ""
	I0704 00:18:45.158064   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.158074   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:18:45.158082   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:18:45.158138   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:18:45.195937   62670 cri.go:89] found id: ""
	I0704 00:18:45.195967   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.195978   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:18:45.195985   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:18:45.196043   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:18:45.236822   62670 cri.go:89] found id: ""
	I0704 00:18:45.236842   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.236850   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:18:45.236856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:18:45.236901   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:18:45.277811   62670 cri.go:89] found id: ""
	I0704 00:18:45.277840   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.277848   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:18:45.277854   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:18:45.277915   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:18:45.318942   62670 cri.go:89] found id: ""
	I0704 00:18:45.318972   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.318984   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:18:45.318994   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:18:45.319058   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:18:45.360745   62670 cri.go:89] found id: ""
	I0704 00:18:45.360772   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.360780   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:18:45.360785   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:18:45.360843   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:18:45.405336   62670 cri.go:89] found id: ""
	I0704 00:18:45.405359   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.405370   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
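Note: all of the CRI listings above come down to the same crictl invocation with a different --name filter; an empty result for every control-plane component is consistent with the kubelet never launching any static pods. Run manually on the node (the filter is shown for kube-apiserver; the other lookups only change --name):

    # list containers for a single component, as cri.go does per name filter
    sudo crictl ps -a --quiet --name=kube-apiserver

    # or list everything Kubernetes-related at once, per the hint in the kubeadm error text
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause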
	I0704 00:18:45.405381   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:18:45.405400   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:18:45.514196   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:18:45.514237   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:18:45.560207   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:18:45.560235   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:18:45.615066   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:18:45.615113   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:18:45.630701   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:18:45.630731   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:18:45.725249   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
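Note: the log gathering around the failure pulls four sources — the CRI-O and kubelet journals, recent dmesg warnings, and kubectl describe nodes; the last one fails here because no apiserver is listening on localhost:8443. The same data can be collected manually with the commands quoted in the log:

    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig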
	W0704 00:18:45.725281   62670 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0704 00:18:45.725315   62670 out.go:239] * 
	W0704 00:18:45.725360   62670 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0704 00:18:45.725383   62670 out.go:239] * 
	W0704 00:18:45.726603   62670 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0704 00:18:45.729981   62670 out.go:177] 
	W0704 00:18:45.731124   62670 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0704 00:18:45.731169   62670 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0704 00:18:45.731186   62670 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0704 00:18:45.732514   62670 out.go:177] 
	
	
	==> CRI-O <==
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.296871653Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052871296836782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=908488de-b835-438b-848b-44c4d13eaee3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.297454467Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=740b6368-6bd0-4305-b1ac-087d0b0cba5f name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.297522402Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=740b6368-6bd0-4305-b1ac-087d0b0cba5f name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.297578110Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=740b6368-6bd0-4305-b1ac-087d0b0cba5f name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.334543696Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6528910c-108c-4425-b5f1-7978770fa295 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.334657646Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6528910c-108c-4425-b5f1-7978770fa295 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.336077795Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35c2b675-0acb-418c-9a0f-00b046d8f8cd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.336543169Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052871336517885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35c2b675-0acb-418c-9a0f-00b046d8f8cd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.337096186Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=267d80f6-f9d0-4470-928c-1fad2cc0e4d6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.337148515Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=267d80f6-f9d0-4470-928c-1fad2cc0e4d6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.337180613Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=267d80f6-f9d0-4470-928c-1fad2cc0e4d6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.372425452Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dbf4d5ae-27fe-43a6-b112-da1792af174c name=/runtime.v1.RuntimeService/Version
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.372505354Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dbf4d5ae-27fe-43a6-b112-da1792af174c name=/runtime.v1.RuntimeService/Version
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.373602788Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97df93ad-4463-4f3c-a8e2-2f336a289e93 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.373987090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052871373960911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97df93ad-4463-4f3c-a8e2-2f336a289e93 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.374532254Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2fdf689-fe2e-49d6-b0e8-7ea0f8879712 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.374624423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2fdf689-fe2e-49d6-b0e8-7ea0f8879712 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.374660901Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a2fdf689-fe2e-49d6-b0e8-7ea0f8879712 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.408550167Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51742a44-b62b-4424-b0da-9f8bc437316e name=/runtime.v1.RuntimeService/Version
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.408622899Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51742a44-b62b-4424-b0da-9f8bc437316e name=/runtime.v1.RuntimeService/Version
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.410138150Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2db56c6d-0f10-4e06-8ecc-924b8ab0dc6d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.410662840Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052871410635794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2db56c6d-0f10-4e06-8ecc-924b8ab0dc6d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.411500582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=899c64bb-693a-482e-88ec-b7315b827fd8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.411564422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=899c64bb-693a-482e-88ec-b7315b827fd8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:27:51 old-k8s-version-979033 crio[644]: time="2024-07-04 00:27:51.411600255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=899c64bb-693a-482e-88ec-b7315b827fd8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul 4 00:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054432] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041342] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.731817] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.437901] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.394657] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.740177] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.073688] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074920] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.184099] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.154476] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.272154] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.964143] systemd-fstab-generator[836]: Ignoring "noauto" option for root device
	[  +0.063078] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.822817] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[Jul 4 00:11] kauditd_printk_skb: 46 callbacks suppressed
	[Jul 4 00:14] systemd-fstab-generator[4947]: Ignoring "noauto" option for root device
	[Jul 4 00:16] systemd-fstab-generator[5229]: Ignoring "noauto" option for root device
	[  +0.072411] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 00:27:51 up 17 min,  0 users,  load average: 0.01, 0.02, 0.00
	Linux old-k8s-version-979033 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 04 00:27:46 old-k8s-version-979033 kubelet[6408]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Jul 04 00:27:46 old-k8s-version-979033 kubelet[6408]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc0005b3f20, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000b2ea80, 0x24, 0x0, ...)
	Jul 04 00:27:46 old-k8s-version-979033 kubelet[6408]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Jul 04 00:27:46 old-k8s-version-979033 kubelet[6408]: net.(*Dialer).DialContext(0xc000c5a3c0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b2ea80, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 04 00:27:46 old-k8s-version-979033 kubelet[6408]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Jul 04 00:27:46 old-k8s-version-979033 kubelet[6408]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000c688c0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b2ea80, 0x24, 0x60, 0x7efc285e97c0, 0x118, ...)
	Jul 04 00:27:46 old-k8s-version-979033 kubelet[6408]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jul 04 00:27:46 old-k8s-version-979033 kubelet[6408]: net/http.(*Transport).dial(0xc000b18640, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b2ea80, 0x24, 0x0, 0xc0009b3960, 0x0, ...)
	Jul 04 00:27:46 old-k8s-version-979033 kubelet[6408]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jul 04 00:27:46 old-k8s-version-979033 kubelet[6408]: net/http.(*Transport).dialConn(0xc000b18640, 0x4f7fe00, 0xc000052030, 0x0, 0xc00037e300, 0x5, 0xc000b2ea80, 0x24, 0x0, 0xc0007efc20, ...)
	Jul 04 00:27:46 old-k8s-version-979033 kubelet[6408]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 04 00:27:46 old-k8s-version-979033 kubelet[6408]: net/http.(*Transport).dialConnFor(0xc000b18640, 0xc000659c30)
	Jul 04 00:27:46 old-k8s-version-979033 kubelet[6408]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 04 00:27:46 old-k8s-version-979033 kubelet[6408]: created by net/http.(*Transport).queueForDial
	Jul 04 00:27:46 old-k8s-version-979033 kubelet[6408]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 04 00:27:46 old-k8s-version-979033 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 04 00:27:46 old-k8s-version-979033 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 04 00:27:47 old-k8s-version-979033 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jul 04 00:27:47 old-k8s-version-979033 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 04 00:27:47 old-k8s-version-979033 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 04 00:27:47 old-k8s-version-979033 kubelet[6417]: I0704 00:27:47.507510    6417 server.go:416] Version: v1.20.0
	Jul 04 00:27:47 old-k8s-version-979033 kubelet[6417]: I0704 00:27:47.507873    6417 server.go:837] Client rotation is on, will bootstrap in background
	Jul 04 00:27:47 old-k8s-version-979033 kubelet[6417]: I0704 00:27:47.510057    6417 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 04 00:27:47 old-k8s-version-979033 kubelet[6417]: W0704 00:27:47.511206    6417 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 04 00:27:47 old-k8s-version-979033 kubelet[6417]: I0704 00:27:47.511393    6417 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-979033 -n old-k8s-version-979033
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-979033 -n old-k8s-version-979033: exit status 2 (227.513055ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-979033" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.58s)
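Note on the failure above: the kubeadm init output shows the kubelet never answering on localhost:10248, the kubelet journal reports "Cannot detect current cgroup on cgroup v2", and minikube's own suggestion is to pass the systemd cgroup driver. A minimal sketch of retrying the profile with that suggestion applied (profile name and flags copied from the Audit table below; the --extra-config flag comes from minikube's suggestion and is not a verified fix for this run):

    minikube start -p old-k8s-version-979033 --memory=2200 \
      --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd

Checking 'journalctl -xeu kubelet' on the node, as the log itself recommends, would confirm whether the cgroup driver mismatch is the actual cause before rerunning.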

                                                
                                    

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (451.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-687975 -n embed-certs-687975
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-04 00:31:32.844986795 +0000 UTC m=+6285.778224559
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-687975 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-687975 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.282µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-687975 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
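The assertion above waits for a pod matching k8s-app=kubernetes-dashboard and then inspects the dashboard-metrics-scraper deployment for the overridden image registry.k8s.io/echoserver:1.4. A minimal sketch of the equivalent manual check, reusing the context, namespace, selector, and deployment name from the log (illustrative only; not part of the recorded test run):

    kubectl --context embed-certs-687975 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context embed-certs-687975 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'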
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-687975 -n embed-certs-687975
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-687975 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-687975 logs -n 25: (1.463616534s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:04 UTC |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-317739             | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-687975            | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:03 UTC | 04 Jul 24 00:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-995404  | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC | 04 Jul 24 00:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC |                     |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-979033        | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-317739                  | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC | 04 Jul 24 00:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-687975                 | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC | 04 Jul 24 00:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-979033                              | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC | 04 Jul 24 00:06 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-979033             | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC | 04 Jul 24 00:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-979033                              | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-995404       | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:07 UTC | 04 Jul 24 00:15 UTC |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-979033                              | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:29 UTC | 04 Jul 24 00:29 UTC |
	| start   | -p newest-cni-791847 --memory=2200 --alsologtostderr   | newest-cni-791847            | jenkins | v1.33.1 | 04 Jul 24 00:29 UTC | 04 Jul 24 00:30 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:30 UTC | 04 Jul 24 00:30 UTC |
	| start   | -p auto-676605 --memory=3072                           | auto-676605                  | jenkins | v1.33.1 | 04 Jul 24 00:30 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-791847             | newest-cni-791847            | jenkins | v1.33.1 | 04 Jul 24 00:30 UTC | 04 Jul 24 00:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-791847                                   | newest-cni-791847            | jenkins | v1.33.1 | 04 Jul 24 00:30 UTC | 04 Jul 24 00:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-791847                  | newest-cni-791847            | jenkins | v1.33.1 | 04 Jul 24 00:30 UTC | 04 Jul 24 00:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-791847 --memory=2200 --alsologtostderr   | newest-cni-791847            | jenkins | v1.33.1 | 04 Jul 24 00:30 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/04 00:30:59
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0704 00:30:59.550666   70275 out.go:291] Setting OutFile to fd 1 ...
	I0704 00:30:59.550949   70275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:30:59.550959   70275 out.go:304] Setting ErrFile to fd 2...
	I0704 00:30:59.550963   70275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:30:59.551142   70275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0704 00:30:59.551745   70275 out.go:298] Setting JSON to false
	I0704 00:30:59.552726   70275 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8000,"bootTime":1720045060,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0704 00:30:59.552789   70275 start.go:139] virtualization: kvm guest
	I0704 00:30:59.555150   70275 out.go:177] * [newest-cni-791847] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0704 00:30:59.556708   70275 notify.go:220] Checking for updates...
	I0704 00:30:59.556759   70275 out.go:177]   - MINIKUBE_LOCATION=18998
	I0704 00:30:59.558376   70275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0704 00:30:59.559702   70275 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:30:59.561136   70275 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0704 00:30:59.562579   70275 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0704 00:30:59.563925   70275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0704 00:30:59.565923   70275 config.go:182] Loaded profile config "newest-cni-791847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:30:59.566586   70275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:30:59.566690   70275 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:30:59.581720   70275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43601
	I0704 00:30:59.582191   70275 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:30:59.582835   70275 main.go:141] libmachine: Using API Version  1
	I0704 00:30:59.582851   70275 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:30:59.583185   70275 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:30:59.583371   70275 main.go:141] libmachine: (newest-cni-791847) Calling .DriverName
	I0704 00:30:59.583627   70275 driver.go:392] Setting default libvirt URI to qemu:///system
	I0704 00:30:59.584053   70275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:30:59.584092   70275 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:30:59.599989   70275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I0704 00:30:59.600482   70275 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:30:59.601004   70275 main.go:141] libmachine: Using API Version  1
	I0704 00:30:59.601031   70275 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:30:59.601378   70275 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:30:59.601560   70275 main.go:141] libmachine: (newest-cni-791847) Calling .DriverName
	I0704 00:30:59.641354   70275 out.go:177] * Using the kvm2 driver based on existing profile
	I0704 00:30:59.643073   70275 start.go:297] selected driver: kvm2
	I0704 00:30:59.643095   70275 start.go:901] validating driver "kvm2" against &{Name:newest-cni-791847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:newest-cni-791847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.71 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:30:59.643279   70275 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0704 00:30:59.644353   70275 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:30:59.644433   70275 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0704 00:30:59.660391   70275 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0704 00:30:59.660885   70275 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0704 00:30:59.660960   70275 cni.go:84] Creating CNI manager for ""
	I0704 00:30:59.660977   70275 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:30:59.661031   70275 start.go:340] cluster config:
	{Name:newest-cni-791847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-791847 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.71 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:
Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:30:59.661189   70275 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:30:59.663441   70275 out.go:177] * Starting "newest-cni-791847" primary control-plane node in "newest-cni-791847" cluster
	I0704 00:30:59.664878   70275 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:30:59.664945   70275 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0704 00:30:59.664954   70275 cache.go:56] Caching tarball of preloaded images
	I0704 00:30:59.665063   70275 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0704 00:30:59.665076   70275 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0704 00:30:59.665214   70275 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/config.json ...
	I0704 00:30:59.665458   70275 start.go:360] acquireMachinesLock for newest-cni-791847: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0704 00:30:59.665524   70275 start.go:364] duration metric: took 37.519µs to acquireMachinesLock for "newest-cni-791847"
	I0704 00:30:59.665542   70275 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:30:59.665559   70275 fix.go:54] fixHost starting: 
	I0704 00:30:59.665946   70275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:30:59.665990   70275 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:30:59.682131   70275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37527
	I0704 00:30:59.682585   70275 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:30:59.683138   70275 main.go:141] libmachine: Using API Version  1
	I0704 00:30:59.683180   70275 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:30:59.683587   70275 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:30:59.683787   70275 main.go:141] libmachine: (newest-cni-791847) Calling .DriverName
	I0704 00:30:59.683941   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetState
	I0704 00:30:59.685676   70275 fix.go:112] recreateIfNeeded on newest-cni-791847: state=Stopped err=<nil>
	I0704 00:30:59.685715   70275 main.go:141] libmachine: (newest-cni-791847) Calling .DriverName
	W0704 00:30:59.685858   70275 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:30:59.687974   70275 out.go:177] * Restarting existing kvm2 VM for "newest-cni-791847" ...
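The fix path above inspects the saved machine's state and, finding it Stopped, restarts the existing kvm2 VM instead of recreating it. A minimal sketch of that decision, assuming a hypothetical Driver interface (GetState/Start here are placeholders, not minikube's actual libmachine API):

package main

import "fmt"

// Driver is a hypothetical stand-in for a machine driver; only the two
// calls this sketch needs are modelled.
type Driver interface {
	GetState() (string, error) // e.g. "Running", "Stopped"
	Start() error
}

// fixHost restarts an existing machine if it is not already running,
// mirroring the "unexpected machine state, will restart" log above.
func fixHost(name string, d Driver) error {
	state, err := d.GetState()
	if err != nil {
		return fmt.Errorf("getting state of %q: %w", name, err)
	}
	if state == "Running" {
		return nil // nothing to fix
	}
	fmt.Printf("Restarting existing VM for %q (state=%s)\n", name, state)
	if err := d.Start(); err != nil {
		return fmt.Errorf("restart of %q failed: %w", name, err)
	}
	return nil
}

// stoppedDriver is a trivial fake used only to make the sketch runnable.
type stoppedDriver struct{ started bool }

func (s *stoppedDriver) GetState() (string, error) { return "Stopped", nil }
func (s *stoppedDriver) Start() error              { s.started = true; return nil }

func main() {
	if err := fixHost("newest-cni-791847", &stoppedDriver{}); err != nil {
		fmt.Println("error:", err)
	}
}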
	I0704 00:30:57.773612   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:57.774359   69888 main.go:141] libmachine: (auto-676605) Found IP for machine: 192.168.61.17
	I0704 00:30:57.774384   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has current primary IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:57.774390   69888 main.go:141] libmachine: (auto-676605) Reserving static IP address...
	I0704 00:30:57.774818   69888 main.go:141] libmachine: (auto-676605) DBG | unable to find host DHCP lease matching {name: "auto-676605", mac: "52:54:00:d6:ba:b5", ip: "192.168.61.17"} in network mk-auto-676605
	I0704 00:30:57.871535   69888 main.go:141] libmachine: (auto-676605) DBG | Getting to WaitForSSH function...
	I0704 00:30:57.871561   69888 main.go:141] libmachine: (auto-676605) Reserved static IP address: 192.168.61.17
	I0704 00:30:57.871573   69888 main.go:141] libmachine: (auto-676605) Waiting for SSH to be available...
	I0704 00:30:57.874774   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:57.875283   69888 main.go:141] libmachine: (auto-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ba:b5", ip: ""} in network mk-auto-676605: {Iface:virbr3 ExpiryTime:2024-07-04 01:30:52 +0000 UTC Type:0 Mac:52:54:00:d6:ba:b5 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d6:ba:b5}
	I0704 00:30:57.875308   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:57.875482   69888 main.go:141] libmachine: (auto-676605) DBG | Using SSH client type: external
	I0704 00:30:57.875501   69888 main.go:141] libmachine: (auto-676605) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/auto-676605/id_rsa (-rw-------)
	I0704 00:30:57.875535   69888 main.go:141] libmachine: (auto-676605) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/auto-676605/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:30:57.875549   69888 main.go:141] libmachine: (auto-676605) DBG | About to run SSH command:
	I0704 00:30:57.875571   69888 main.go:141] libmachine: (auto-676605) DBG | exit 0
	I0704 00:30:58.004589   69888 main.go:141] libmachine: (auto-676605) DBG | SSH cmd err, output: <nil>: 
	I0704 00:30:58.004894   69888 main.go:141] libmachine: (auto-676605) KVM machine creation complete!
	I0704 00:30:58.005252   69888 main.go:141] libmachine: (auto-676605) Calling .GetConfigRaw
	I0704 00:30:58.005908   69888 main.go:141] libmachine: (auto-676605) Calling .DriverName
	I0704 00:30:58.006154   69888 main.go:141] libmachine: (auto-676605) Calling .DriverName
	I0704 00:30:58.006318   69888 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0704 00:30:58.006329   69888 main.go:141] libmachine: (auto-676605) Calling .GetState
	I0704 00:30:58.007933   69888 main.go:141] libmachine: Detecting operating system of created instance...
	I0704 00:30:58.007950   69888 main.go:141] libmachine: Waiting for SSH to be available...
	I0704 00:30:58.007957   69888 main.go:141] libmachine: Getting to WaitForSSH function...
	I0704 00:30:58.007982   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHHostname
	I0704 00:30:58.010522   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:58.010923   69888 main.go:141] libmachine: (auto-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ba:b5", ip: ""} in network mk-auto-676605: {Iface:virbr3 ExpiryTime:2024-07-04 01:30:52 +0000 UTC Type:0 Mac:52:54:00:d6:ba:b5 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:auto-676605 Clientid:01:52:54:00:d6:ba:b5}
	I0704 00:30:58.010966   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:58.011136   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHPort
	I0704 00:30:58.011349   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHKeyPath
	I0704 00:30:58.011505   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHKeyPath
	I0704 00:30:58.011655   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHUsername
	I0704 00:30:58.011838   69888 main.go:141] libmachine: Using SSH client type: native
	I0704 00:30:58.012058   69888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.17 22 <nil> <nil>}
	I0704 00:30:58.012071   69888 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0704 00:30:58.123610   69888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
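Above, the native Go SSH client runs "exit 0" against the new guest to confirm the SSH daemon is up and the key is accepted. A rough sketch of the same reachability probe using golang.org/x/crypto/ssh; the host, user and key path are taken from this log, error handling is abbreviated, and the host-key check is disabled, which is only acceptable for throwaway test VMs:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// sshProbe dials the guest and runs a trivial command, returning nil once
// the SSH daemon is reachable and the private key is accepted.
func sshProbe(addr, user, keyPath string) error {
	keyPEM, err := os.ReadFile(keyPath)
	if err != nil {
		return fmt.Errorf("reading key: %w", err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		return fmt.Errorf("parsing key: %w", err)
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return fmt.Errorf("dial: %w", err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return fmt.Errorf("session: %w", err)
	}
	defer sess.Close()
	return sess.Run("exit 0") // succeeds iff the remote command exits 0
}

func main() {
	err := sshProbe("192.168.61.17:22", "docker",
		"/home/jenkins/minikube-integration/18998-9396/.minikube/machines/auto-676605/id_rsa")
	fmt.Println("probe result:", err)
}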
	I0704 00:30:58.123638   69888 main.go:141] libmachine: Detecting the provisioner...
	I0704 00:30:58.123647   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHHostname
	I0704 00:30:58.127061   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:58.127512   69888 main.go:141] libmachine: (auto-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ba:b5", ip: ""} in network mk-auto-676605: {Iface:virbr3 ExpiryTime:2024-07-04 01:30:52 +0000 UTC Type:0 Mac:52:54:00:d6:ba:b5 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:auto-676605 Clientid:01:52:54:00:d6:ba:b5}
	I0704 00:30:58.127540   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:58.127714   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHPort
	I0704 00:30:58.127962   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHKeyPath
	I0704 00:30:58.128134   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHKeyPath
	I0704 00:30:58.128404   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHUsername
	I0704 00:30:58.128564   69888 main.go:141] libmachine: Using SSH client type: native
	I0704 00:30:58.128818   69888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.17 22 <nil> <nil>}
	I0704 00:30:58.128831   69888 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0704 00:30:58.244954   69888 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0704 00:30:58.245057   69888 main.go:141] libmachine: found compatible host: buildroot
	I0704 00:30:58.245070   69888 main.go:141] libmachine: Provisioning with buildroot...
	I0704 00:30:58.245076   69888 main.go:141] libmachine: (auto-676605) Calling .GetMachineName
	I0704 00:30:58.245373   69888 buildroot.go:166] provisioning hostname "auto-676605"
	I0704 00:30:58.245400   69888 main.go:141] libmachine: (auto-676605) Calling .GetMachineName
	I0704 00:30:58.245619   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHHostname
	I0704 00:30:58.248521   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:58.248904   69888 main.go:141] libmachine: (auto-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ba:b5", ip: ""} in network mk-auto-676605: {Iface:virbr3 ExpiryTime:2024-07-04 01:30:52 +0000 UTC Type:0 Mac:52:54:00:d6:ba:b5 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:auto-676605 Clientid:01:52:54:00:d6:ba:b5}
	I0704 00:30:58.248932   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:58.249103   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHPort
	I0704 00:30:58.249304   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHKeyPath
	I0704 00:30:58.249491   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHKeyPath
	I0704 00:30:58.249678   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHUsername
	I0704 00:30:58.249859   69888 main.go:141] libmachine: Using SSH client type: native
	I0704 00:30:58.250074   69888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.17 22 <nil> <nil>}
	I0704 00:30:58.250100   69888 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-676605 && echo "auto-676605" | sudo tee /etc/hostname
	I0704 00:30:58.381109   69888 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-676605
	
	I0704 00:30:58.381132   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHHostname
	I0704 00:30:58.384176   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:58.384493   69888 main.go:141] libmachine: (auto-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ba:b5", ip: ""} in network mk-auto-676605: {Iface:virbr3 ExpiryTime:2024-07-04 01:30:52 +0000 UTC Type:0 Mac:52:54:00:d6:ba:b5 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:auto-676605 Clientid:01:52:54:00:d6:ba:b5}
	I0704 00:30:58.384522   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:58.384718   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHPort
	I0704 00:30:58.384935   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHKeyPath
	I0704 00:30:58.385123   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHKeyPath
	I0704 00:30:58.385275   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHUsername
	I0704 00:30:58.385458   69888 main.go:141] libmachine: Using SSH client type: native
	I0704 00:30:58.385636   69888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.17 22 <nil> <nil>}
	I0704 00:30:58.385657   69888 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-676605' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-676605/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-676605' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:30:58.503960   69888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:30:58.503995   69888 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:30:58.504014   69888 buildroot.go:174] setting up certificates
	I0704 00:30:58.504023   69888 provision.go:84] configureAuth start
	I0704 00:30:58.504030   69888 main.go:141] libmachine: (auto-676605) Calling .GetMachineName
	I0704 00:30:58.504306   69888 main.go:141] libmachine: (auto-676605) Calling .GetIP
	I0704 00:30:58.507414   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:58.507927   69888 main.go:141] libmachine: (auto-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ba:b5", ip: ""} in network mk-auto-676605: {Iface:virbr3 ExpiryTime:2024-07-04 01:30:52 +0000 UTC Type:0 Mac:52:54:00:d6:ba:b5 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:auto-676605 Clientid:01:52:54:00:d6:ba:b5}
	I0704 00:30:58.507959   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:58.508304   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHHostname
	I0704 00:30:58.510836   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:58.511243   69888 main.go:141] libmachine: (auto-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ba:b5", ip: ""} in network mk-auto-676605: {Iface:virbr3 ExpiryTime:2024-07-04 01:30:52 +0000 UTC Type:0 Mac:52:54:00:d6:ba:b5 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:auto-676605 Clientid:01:52:54:00:d6:ba:b5}
	I0704 00:30:58.511265   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:58.511442   69888 provision.go:143] copyHostCerts
	I0704 00:30:58.511509   69888 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:30:58.511520   69888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:30:58.511584   69888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:30:58.511684   69888 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:30:58.511691   69888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:30:58.511715   69888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:30:58.511778   69888 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:30:58.511786   69888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:30:58.511805   69888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:30:58.512008   69888 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.auto-676605 san=[127.0.0.1 192.168.61.17 auto-676605 localhost minikube]
	I0704 00:30:58.558755   69888 provision.go:177] copyRemoteCerts
	I0704 00:30:58.558823   69888 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:30:58.558851   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHHostname
	I0704 00:30:58.561990   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:58.562364   69888 main.go:141] libmachine: (auto-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ba:b5", ip: ""} in network mk-auto-676605: {Iface:virbr3 ExpiryTime:2024-07-04 01:30:52 +0000 UTC Type:0 Mac:52:54:00:d6:ba:b5 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:auto-676605 Clientid:01:52:54:00:d6:ba:b5}
	I0704 00:30:58.562418   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:58.562702   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHPort
	I0704 00:30:58.562866   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHKeyPath
	I0704 00:30:58.563052   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHUsername
	I0704 00:30:58.563207   69888 sshutil.go:53] new ssh client: &{IP:192.168.61.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/auto-676605/id_rsa Username:docker}
	I0704 00:30:58.648292   69888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:30:58.676225   69888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0704 00:30:58.703967   69888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0704 00:30:58.731341   69888 provision.go:87] duration metric: took 227.304588ms to configureAuth
	I0704 00:30:58.731379   69888 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:30:58.731556   69888 config.go:182] Loaded profile config "auto-676605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:30:58.731625   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHHostname
	I0704 00:30:58.734145   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:58.734539   69888 main.go:141] libmachine: (auto-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ba:b5", ip: ""} in network mk-auto-676605: {Iface:virbr3 ExpiryTime:2024-07-04 01:30:52 +0000 UTC Type:0 Mac:52:54:00:d6:ba:b5 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:auto-676605 Clientid:01:52:54:00:d6:ba:b5}
	I0704 00:30:58.734565   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:58.734780   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHPort
	I0704 00:30:58.735005   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHKeyPath
	I0704 00:30:58.735179   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHKeyPath
	I0704 00:30:58.735316   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHUsername
	I0704 00:30:58.735513   69888 main.go:141] libmachine: Using SSH client type: native
	I0704 00:30:58.735684   69888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.17 22 <nil> <nil>}
	I0704 00:30:58.735698   69888 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:30:59.025442   69888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:30:59.025471   69888 main.go:141] libmachine: Checking connection to Docker...
	I0704 00:30:59.025481   69888 main.go:141] libmachine: (auto-676605) Calling .GetURL
	I0704 00:30:59.026891   69888 main.go:141] libmachine: (auto-676605) DBG | Using libvirt version 6000000
	I0704 00:30:59.029218   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:59.029641   69888 main.go:141] libmachine: (auto-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ba:b5", ip: ""} in network mk-auto-676605: {Iface:virbr3 ExpiryTime:2024-07-04 01:30:52 +0000 UTC Type:0 Mac:52:54:00:d6:ba:b5 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:auto-676605 Clientid:01:52:54:00:d6:ba:b5}
	I0704 00:30:59.029685   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:59.029847   69888 main.go:141] libmachine: Docker is up and running!
	I0704 00:30:59.029864   69888 main.go:141] libmachine: Reticulating splines...
	I0704 00:30:59.029872   69888 client.go:171] duration metric: took 21.248869985s to LocalClient.Create
	I0704 00:30:59.029902   69888 start.go:167] duration metric: took 21.248938873s to libmachine.API.Create "auto-676605"
	I0704 00:30:59.029913   69888 start.go:293] postStartSetup for "auto-676605" (driver="kvm2")
	I0704 00:30:59.029938   69888 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:30:59.029966   69888 main.go:141] libmachine: (auto-676605) Calling .DriverName
	I0704 00:30:59.030237   69888 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:30:59.030264   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHHostname
	I0704 00:30:59.032985   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:59.033459   69888 main.go:141] libmachine: (auto-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ba:b5", ip: ""} in network mk-auto-676605: {Iface:virbr3 ExpiryTime:2024-07-04 01:30:52 +0000 UTC Type:0 Mac:52:54:00:d6:ba:b5 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:auto-676605 Clientid:01:52:54:00:d6:ba:b5}
	I0704 00:30:59.033498   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:59.033627   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHPort
	I0704 00:30:59.033842   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHKeyPath
	I0704 00:30:59.034025   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHUsername
	I0704 00:30:59.034175   69888 sshutil.go:53] new ssh client: &{IP:192.168.61.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/auto-676605/id_rsa Username:docker}
	I0704 00:30:59.118937   69888 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:30:59.123386   69888 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:30:59.123406   69888 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:30:59.123462   69888 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:30:59.123531   69888 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:30:59.123609   69888 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:30:59.133934   69888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:30:59.161988   69888 start.go:296] duration metric: took 132.060289ms for postStartSetup
	I0704 00:30:59.162049   69888 main.go:141] libmachine: (auto-676605) Calling .GetConfigRaw
	I0704 00:30:59.162783   69888 main.go:141] libmachine: (auto-676605) Calling .GetIP
	I0704 00:30:59.165968   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:59.166407   69888 main.go:141] libmachine: (auto-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ba:b5", ip: ""} in network mk-auto-676605: {Iface:virbr3 ExpiryTime:2024-07-04 01:30:52 +0000 UTC Type:0 Mac:52:54:00:d6:ba:b5 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:auto-676605 Clientid:01:52:54:00:d6:ba:b5}
	I0704 00:30:59.166447   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:59.166760   69888 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/config.json ...
	I0704 00:30:59.166964   69888 start.go:128] duration metric: took 21.405655801s to createHost
	I0704 00:30:59.166987   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHHostname
	I0704 00:30:59.169844   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:59.170300   69888 main.go:141] libmachine: (auto-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ba:b5", ip: ""} in network mk-auto-676605: {Iface:virbr3 ExpiryTime:2024-07-04 01:30:52 +0000 UTC Type:0 Mac:52:54:00:d6:ba:b5 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:auto-676605 Clientid:01:52:54:00:d6:ba:b5}
	I0704 00:30:59.170329   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:59.170466   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHPort
	I0704 00:30:59.170681   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHKeyPath
	I0704 00:30:59.170844   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHKeyPath
	I0704 00:30:59.171114   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHUsername
	I0704 00:30:59.171318   69888 main.go:141] libmachine: Using SSH client type: native
	I0704 00:30:59.171531   69888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.17 22 <nil> <nil>}
	I0704 00:30:59.171547   69888 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:30:59.280977   69888 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720053059.235662365
	
	I0704 00:30:59.281002   69888 fix.go:216] guest clock: 1720053059.235662365
	I0704 00:30:59.281018   69888 fix.go:229] Guest: 2024-07-04 00:30:59.235662365 +0000 UTC Remote: 2024-07-04 00:30:59.166976297 +0000 UTC m=+21.522068788 (delta=68.686068ms)
	I0704 00:30:59.281045   69888 fix.go:200] guest clock delta is within tolerance: 68.686068ms
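The fix step above compares the guest clock against the host and only resyncs when the delta exceeds a tolerance; here the ~68ms delta passes. A small illustrative version of that comparison, using the timestamps from this log (the one-second tolerance is an assumption for the sketch, not the value minikube uses):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the
// host clock that no resync is needed.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tol
}

func main() {
	// Values from the log: guest 1720053059.235662365, host ~68.686ms earlier.
	guest := time.Unix(1720053059, 235662365)
	host := guest.Add(-68686068 * time.Nanosecond)
	fmt.Println("within tolerance:", withinTolerance(guest, host, time.Second))
}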
	I0704 00:30:59.281050   69888 start.go:83] releasing machines lock for "auto-676605", held for 21.519837258s
	I0704 00:30:59.281067   69888 main.go:141] libmachine: (auto-676605) Calling .DriverName
	I0704 00:30:59.281382   69888 main.go:141] libmachine: (auto-676605) Calling .GetIP
	I0704 00:30:59.284341   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:59.284716   69888 main.go:141] libmachine: (auto-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ba:b5", ip: ""} in network mk-auto-676605: {Iface:virbr3 ExpiryTime:2024-07-04 01:30:52 +0000 UTC Type:0 Mac:52:54:00:d6:ba:b5 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:auto-676605 Clientid:01:52:54:00:d6:ba:b5}
	I0704 00:30:59.284745   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:59.284934   69888 main.go:141] libmachine: (auto-676605) Calling .DriverName
	I0704 00:30:59.285473   69888 main.go:141] libmachine: (auto-676605) Calling .DriverName
	I0704 00:30:59.285664   69888 main.go:141] libmachine: (auto-676605) Calling .DriverName
	I0704 00:30:59.285739   69888 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:30:59.285780   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHHostname
	I0704 00:30:59.285909   69888 ssh_runner.go:195] Run: cat /version.json
	I0704 00:30:59.285931   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHHostname
	I0704 00:30:59.288411   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:59.288707   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:59.288761   69888 main.go:141] libmachine: (auto-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ba:b5", ip: ""} in network mk-auto-676605: {Iface:virbr3 ExpiryTime:2024-07-04 01:30:52 +0000 UTC Type:0 Mac:52:54:00:d6:ba:b5 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:auto-676605 Clientid:01:52:54:00:d6:ba:b5}
	I0704 00:30:59.288790   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:59.289007   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHPort
	I0704 00:30:59.289287   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHKeyPath
	I0704 00:30:59.289295   69888 main.go:141] libmachine: (auto-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ba:b5", ip: ""} in network mk-auto-676605: {Iface:virbr3 ExpiryTime:2024-07-04 01:30:52 +0000 UTC Type:0 Mac:52:54:00:d6:ba:b5 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:auto-676605 Clientid:01:52:54:00:d6:ba:b5}
	I0704 00:30:59.289319   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:30:59.289482   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHUsername
	I0704 00:30:59.289513   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHPort
	I0704 00:30:59.289695   69888 sshutil.go:53] new ssh client: &{IP:192.168.61.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/auto-676605/id_rsa Username:docker}
	I0704 00:30:59.289709   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHKeyPath
	I0704 00:30:59.289876   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHUsername
	I0704 00:30:59.290031   69888 sshutil.go:53] new ssh client: &{IP:192.168.61.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/auto-676605/id_rsa Username:docker}
	I0704 00:30:59.401567   69888 ssh_runner.go:195] Run: systemctl --version
	I0704 00:30:59.411065   69888 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:30:59.583459   69888 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:30:59.590658   69888 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:30:59.590736   69888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:30:59.609289   69888 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:30:59.609311   69888 start.go:494] detecting cgroup driver to use...
	I0704 00:30:59.609390   69888 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:30:59.627224   69888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:30:59.644630   69888 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:30:59.644677   69888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:30:59.661926   69888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:30:59.678499   69888 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:30:59.820535   69888 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:30:59.982133   69888 docker.go:233] disabling docker service ...
	I0704 00:30:59.982207   69888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:30:59.998614   69888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:31:00.013925   69888 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:31:00.140030   69888 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:31:00.266192   69888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:31:00.282045   69888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:31:00.303713   69888 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:31:00.303767   69888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:31:00.316287   69888 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:31:00.316354   69888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:31:00.333193   69888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:31:00.349091   69888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:31:00.363556   69888 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:31:00.376952   69888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:31:00.390605   69888 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:31:00.410980   69888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:31:00.423197   69888 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:31:00.435064   69888 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:31:00.435140   69888 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:31:00.451720   69888 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
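Here the bridge-netfilter sysctl cannot be read, so the code falls back to loading the br_netfilter module and then enables IPv4 forwarding. A hedged sketch of that probe-then-fallback pattern with os/exec; it shells out to sysctl, modprobe and sh, so it needs a Linux host with sudo, as the test VM here has:

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter checks whether the bridge-nf-call-iptables sysctl is
// visible and, if not, tries to load the br_netfilter module so it appears.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // sysctl already present, nothing to do
	}
	// Probe failed (e.g. module not loaded); try to load it explicitly.
	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %w", err)
	}
	return nil
}

// enableIPForward mirrors the `echo 1 > /proc/sys/net/ipv4/ip_forward` step.
func enableIPForward() error {
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup:", err)
	}
	if err := enableIPForward(); err != nil {
		fmt.Println("ip_forward:", err)
	}
}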
	I0704 00:31:00.464116   69888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:31:00.586964   69888 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:31:00.753547   69888 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:31:00.753652   69888 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:31:00.758869   69888 start.go:562] Will wait 60s for crictl version
	I0704 00:31:00.758921   69888 ssh_runner.go:195] Run: which crictl
	I0704 00:31:00.763432   69888 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:31:00.803300   69888 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
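The crictl version output above is a set of "Key:  value" lines. A small sketch of turning that output into a map, fed the literal text from this log:

package main

import (
	"fmt"
	"strings"
)

// parseCrictlVersion splits "Key:  value" lines into a map.
func parseCrictlVersion(out string) map[string]string {
	fields := map[string]string{}
	for _, line := range strings.Split(out, "\n") {
		key, val, ok := strings.Cut(line, ":")
		if !ok {
			continue // skip lines without a key/value separator
		}
		fields[strings.TrimSpace(key)] = strings.TrimSpace(val)
	}
	return fields
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1"
	v := parseCrictlVersion(out)
	fmt.Println(v["RuntimeName"], v["RuntimeVersion"]) // cri-o 1.29.1
}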
	I0704 00:31:00.803361   69888 ssh_runner.go:195] Run: crio --version
	I0704 00:31:00.844194   69888 ssh_runner.go:195] Run: crio --version
	I0704 00:31:00.888468   69888 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:31:00.890178   69888 main.go:141] libmachine: (auto-676605) Calling .GetIP
	I0704 00:31:00.893295   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:31:00.893738   69888 main.go:141] libmachine: (auto-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ba:b5", ip: ""} in network mk-auto-676605: {Iface:virbr3 ExpiryTime:2024-07-04 01:30:52 +0000 UTC Type:0 Mac:52:54:00:d6:ba:b5 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:auto-676605 Clientid:01:52:54:00:d6:ba:b5}
	I0704 00:31:00.893764   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:31:00.894064   69888 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0704 00:31:00.898896   69888 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
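The shell pipeline above rewrites /etc/hosts so exactly one host.minikube.internal entry points at the gateway IP: strip any existing line for that hostname, append a fresh one, then copy the temp file back into place. The same idempotent rewrite expressed over a string in Go (pure string manipulation, so it runs anywhere; writing the result back to /etc/hosts is deliberately left out):

package main

import (
	"fmt"
	"strings"
)

// ensureHostEntry removes any line whose hostname column matches host and
// appends a single fresh "ip<TAB>host" entry, mirroring the grep -v / echo
// pipeline in the log above.
func ensureHostEntry(hostsFile, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hostsFile, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	before := "127.0.0.1\tlocalhost\n192.168.61.254\thost.minikube.internal\n"
	after := ensureHostEntry(before, "192.168.61.1", "host.minikube.internal")
	fmt.Print(after)
}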
	I0704 00:31:00.912997   69888 kubeadm.go:877] updating cluster {Name:auto-676605 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2
ClusterName:auto-676605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.17 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:31:00.913110   69888 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:31:00.913167   69888 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:31:00.966028   69888 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:31:00.966102   69888 ssh_runner.go:195] Run: which lz4
	I0704 00:31:00.971193   69888 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0704 00:31:00.976224   69888 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:31:00.976267   69888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0704 00:31:02.606184   69888 crio.go:462] duration metric: took 1.635087853s to copy over tarball
	I0704 00:31:02.606276   69888 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
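Because the guest has no /preloaded.tar.lz4, the preloaded-image tarball is copied over and unpacked under /var with lz4, as the two steps above show. A rough sketch of that check-then-extract step; the scp transfer itself is omitted and the paths are the ones from this log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed image tarball under destDir,
// matching the `tar --xattrs -I lz4 -C /var -xf /preloaded.tar.lz4` step.
func extractPreload(tarball, destDir string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball not present: %w", err)
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println("extract:", err)
	}
}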
	I0704 00:30:59.689352   70275 main.go:141] libmachine: (newest-cni-791847) Calling .Start
	I0704 00:30:59.689571   70275 main.go:141] libmachine: (newest-cni-791847) Ensuring networks are active...
	I0704 00:30:59.690409   70275 main.go:141] libmachine: (newest-cni-791847) Ensuring network default is active
	I0704 00:30:59.690803   70275 main.go:141] libmachine: (newest-cni-791847) Ensuring network mk-newest-cni-791847 is active
	I0704 00:30:59.691267   70275 main.go:141] libmachine: (newest-cni-791847) Getting domain xml...
	I0704 00:30:59.692141   70275 main.go:141] libmachine: (newest-cni-791847) Creating domain...
	I0704 00:31:01.011148   70275 main.go:141] libmachine: (newest-cni-791847) Waiting to get IP...
	I0704 00:31:01.012053   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:01.012625   70275 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:31:01.012720   70275 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:31:01.012601   70310 retry.go:31] will retry after 311.800218ms: waiting for machine to come up
	I0704 00:31:01.326334   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:01.326896   70275 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:31:01.326926   70275 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:31:01.326832   70310 retry.go:31] will retry after 271.161723ms: waiting for machine to come up
	I0704 00:31:01.599223   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:01.600555   70275 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:31:01.600606   70275 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:31:01.600516   70310 retry.go:31] will retry after 298.805071ms: waiting for machine to come up
	I0704 00:31:01.901353   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:01.901900   70275 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:31:01.901943   70275 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:31:01.901865   70310 retry.go:31] will retry after 408.553692ms: waiting for machine to come up
	I0704 00:31:02.312759   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:02.313388   70275 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:31:02.313414   70275 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:31:02.313294   70310 retry.go:31] will retry after 547.88174ms: waiting for machine to come up
	I0704 00:31:02.863034   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:02.863561   70275 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:31:02.863591   70275 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:31:02.863499   70310 retry.go:31] will retry after 802.198431ms: waiting for machine to come up
	I0704 00:31:03.666919   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:03.667395   70275 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:31:03.667433   70275 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:31:03.667334   70310 retry.go:31] will retry after 1.137554301s: waiting for machine to come up
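Each retry.go line above is one pass of a wait loop: probe for the domain's DHCP lease and, if no IP is visible yet, sleep for a growing interval and try again. A generic sketch of that loop; the backoff factor and cap are assumptions, and lookupIP stands in for the libvirt lease query:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// growing the delay between attempts like the retry lines in the log.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := lookup()
		if err == nil && ip != "" {
			return ip, nil
		}
		fmt.Printf("attempt %d: no IP yet, retrying after %v\n", attempt, delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay = delay * 3 / 2 // grow the wait, capped at 5s
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	// Fake lookup that "finds" an address on the fourth call.
	calls := 0
	lookup := func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.72.71", nil
	}
	ip, err := waitForIP(lookup, 30*time.Second)
	fmt.Println(ip, err)
}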
	I0704 00:31:05.312245   69888 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.705930009s)
	I0704 00:31:05.312273   69888 crio.go:469] duration metric: took 2.70605303s to extract the tarball
	I0704 00:31:05.312306   69888 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:31:05.353216   69888 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:31:05.398225   69888 crio.go:514] all images are preloaded for cri-o runtime.
	I0704 00:31:05.398250   69888 cache_images.go:84] Images are preloaded, skipping loading
	I0704 00:31:05.398258   69888 kubeadm.go:928] updating node { 192.168.61.17 8443 v1.30.2 crio true true} ...
	I0704 00:31:05.398351   69888 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-676605 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:auto-676605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:31:05.398449   69888 ssh_runner.go:195] Run: crio config
	I0704 00:31:05.451176   69888 cni.go:84] Creating CNI manager for ""
	I0704 00:31:05.451206   69888 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:31:05.451227   69888 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:31:05.451253   69888 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.17 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-676605 NodeName:auto-676605 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:31:05.451438   69888 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-676605"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:31:05.451527   69888 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:31:05.462429   69888 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:31:05.462529   69888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:31:05.473249   69888 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0704 00:31:05.492946   69888 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:31:05.513878   69888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
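The kubeadm/kubelet/kube-proxy manifest dumped above is rendered from the cluster config (node IP, Kubernetes version, pod and service CIDRs) and then written to /var/tmp/minikube/kubeadm.yaml.new, as the scp line above shows. A toy illustration of that kind of rendering with text/template; the template below is deliberately cut down to a few fields and is not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// clusterParams holds the handful of values the toy template needs.
type clusterParams struct {
	KubernetesVersion string
	NodeIP            string
	PodSubnet         string
	ServiceSubnet     string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:8443
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.NodeIP}}"]
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		KubernetesVersion: "v1.30.2",
		NodeIP:            "192.168.61.17",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}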
	I0704 00:31:05.532943   69888 ssh_runner.go:195] Run: grep 192.168.61.17	control-plane.minikube.internal$ /etc/hosts
	I0704 00:31:05.537273   69888 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.17	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:31:05.551164   69888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:31:05.674560   69888 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:31:05.700548   69888 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605 for IP: 192.168.61.17
	I0704 00:31:05.700575   69888 certs.go:194] generating shared ca certs ...
	I0704 00:31:05.700598   69888 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:31:05.700761   69888 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:31:05.700814   69888 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:31:05.700827   69888 certs.go:256] generating profile certs ...
	I0704 00:31:05.700894   69888 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/client.key
	I0704 00:31:05.700911   69888 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/client.crt with IP's: []
	I0704 00:31:05.790468   69888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/client.crt ...
	I0704 00:31:05.790502   69888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/client.crt: {Name:mk983dedb4210155a5a1646e901d5259d77e1dfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:31:05.790712   69888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/client.key ...
	I0704 00:31:05.790732   69888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/client.key: {Name:mk64acdfb4d09fdf99e9fd01a227f6bf229e2234 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:31:05.790853   69888 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/apiserver.key.c5bf15b8
	I0704 00:31:05.790876   69888 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/apiserver.crt.c5bf15b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.17]
	I0704 00:31:05.949998   69888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/apiserver.crt.c5bf15b8 ...
	I0704 00:31:05.950028   69888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/apiserver.crt.c5bf15b8: {Name:mk9b2894049576485f202a140025ed96aab88841 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:31:05.950188   69888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/apiserver.key.c5bf15b8 ...
	I0704 00:31:05.950200   69888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/apiserver.key.c5bf15b8: {Name:mka6fddea7af1b78826a97a988730740f7fa5f3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:31:05.950296   69888 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/apiserver.crt.c5bf15b8 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/apiserver.crt
	I0704 00:31:05.950386   69888 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/apiserver.key.c5bf15b8 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/apiserver.key
	I0704 00:31:05.950442   69888 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/proxy-client.key
	I0704 00:31:05.950456   69888 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/proxy-client.crt with IP's: []
	I0704 00:31:06.049542   69888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/proxy-client.crt ...
	I0704 00:31:06.049573   69888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/proxy-client.crt: {Name:mk215efbbb74820711e6cbf08be4dfae7a6d35ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:31:06.049756   69888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/proxy-client.key ...
	I0704 00:31:06.049773   69888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/proxy-client.key: {Name:mk38a364260218cf517e4edc51086937bfdb2627 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:31:06.049968   69888 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:31:06.050017   69888 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:31:06.050030   69888 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:31:06.050058   69888 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:31:06.050079   69888 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:31:06.050099   69888 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:31:06.050135   69888 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:31:06.050699   69888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:31:06.085661   69888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:31:06.119061   69888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:31:06.151709   69888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:31:06.181123   69888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0704 00:31:06.209508   69888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0704 00:31:06.241199   69888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:31:06.276034   69888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/auto-676605/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0704 00:31:06.306920   69888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:31:06.338135   69888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:31:06.368625   69888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:31:06.402432   69888 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:31:06.444793   69888 ssh_runner.go:195] Run: openssl version
	I0704 00:31:06.453883   69888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:31:06.467291   69888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:31:06.472355   69888 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:31:06.472428   69888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:31:06.481109   69888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:31:06.493458   69888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:31:06.510009   69888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:31:06.515434   69888 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:31:06.515497   69888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:31:06.522147   69888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:31:06.535062   69888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:31:06.551169   69888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:31:06.557556   69888 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:31:06.557641   69888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:31:06.564897   69888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
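The three openssl/ln pairs above follow OpenSSL's hash-based CA lookup: the short hex value printed by "openssl x509 -hash" becomes the symlink name under /etc/ssl/certs (b5213941.0 for minikubeCA.pem in this run). A minimal sketch of the same two steps done by hand, using only paths already shown in the log:

	# print the subject-name hash OpenSSL uses when searching the trust store
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# link the certificate under that hash so tools relying on the system store can find it
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0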
	I0704 00:31:06.577489   69888 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:31:06.583116   69888 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0704 00:31:06.583182   69888 kubeadm.go:391] StartCluster: {Name:auto-676605 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clu
sterName:auto-676605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.17 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:31:06.583275   69888 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:31:06.583351   69888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:31:06.627734   69888 cri.go:89] found id: ""
	I0704 00:31:06.627797   69888 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0704 00:31:06.639559   69888 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:31:06.651474   69888 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:31:06.661778   69888 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:31:06.661799   69888 kubeadm.go:156] found existing configuration files:
	
	I0704 00:31:06.661839   69888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:31:06.671386   69888 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:31:06.671448   69888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:31:06.681985   69888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:31:06.692176   69888 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:31:06.692243   69888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:31:06.702914   69888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:31:06.713363   69888 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:31:06.713422   69888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:31:06.725335   69888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:31:06.738449   69888 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:31:06.738521   69888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:31:06.749728   69888 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:31:06.817436   69888 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0704 00:31:06.817600   69888 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:31:06.963333   69888 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:31:06.963481   69888 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:31:06.963609   69888 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:31:07.209415   69888 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:31:07.341101   69888 out.go:204]   - Generating certificates and keys ...
	I0704 00:31:07.341240   69888 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:31:07.341340   69888 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:31:07.341473   69888 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0704 00:31:07.509363   69888 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0704 00:31:04.806521   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:04.807058   70275 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:31:04.807088   70275 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:31:04.807010   70310 retry.go:31] will retry after 1.179627923s: waiting for machine to come up
	I0704 00:31:05.988736   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:05.989169   70275 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:31:05.989199   70275 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:31:05.989139   70310 retry.go:31] will retry after 1.793615921s: waiting for machine to come up
	I0704 00:31:07.785176   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:07.785891   70275 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:31:07.785925   70275 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:31:07.785835   70310 retry.go:31] will retry after 1.74182837s: waiting for machine to come up
	I0704 00:31:09.529431   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:09.529892   70275 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:31:09.529919   70275 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:31:09.529851   70310 retry.go:31] will retry after 2.877600799s: waiting for machine to come up
	I0704 00:31:07.838269   69888 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0704 00:31:07.953384   69888 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0704 00:31:08.077263   69888 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0704 00:31:08.077418   69888 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [auto-676605 localhost] and IPs [192.168.61.17 127.0.0.1 ::1]
	I0704 00:31:08.328938   69888 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0704 00:31:08.329308   69888 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [auto-676605 localhost] and IPs [192.168.61.17 127.0.0.1 ::1]
	I0704 00:31:08.412311   69888 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0704 00:31:08.577862   69888 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0704 00:31:09.218654   69888 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0704 00:31:09.218871   69888 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:31:09.366168   69888 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:31:09.709077   69888 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0704 00:31:09.834179   69888 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:31:10.190632   69888 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:31:10.464551   69888 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:31:10.465404   69888 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:31:10.467973   69888 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:31:10.532731   69888 out.go:204]   - Booting up control plane ...
	I0704 00:31:10.532897   69888 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:31:10.533033   69888 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:31:10.533143   69888 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:31:10.533308   69888 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:31:10.533466   69888 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:31:10.533539   69888 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:31:10.653009   69888 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0704 00:31:10.653137   69888 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0704 00:31:11.654594   69888 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001478346s
	I0704 00:31:11.654699   69888 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0704 00:31:12.409295   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:12.409953   70275 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:31:12.409982   70275 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:31:12.409912   70310 retry.go:31] will retry after 3.62722171s: waiting for machine to come up
	I0704 00:31:16.657121   69888 kubeadm.go:309] [api-check] The API server is healthy after 5.002172664s
	I0704 00:31:16.670221   69888 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0704 00:31:16.688738   69888 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0704 00:31:16.725775   69888 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0704 00:31:16.726013   69888 kubeadm.go:309] [mark-control-plane] Marking the node auto-676605 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0704 00:31:16.748960   69888 kubeadm.go:309] [bootstrap-token] Using token: g192ym.4ga9lygusnsoo11n
	I0704 00:31:16.750612   69888 out.go:204]   - Configuring RBAC rules ...
	I0704 00:31:16.750757   69888 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0704 00:31:16.772106   69888 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0704 00:31:16.788416   69888 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0704 00:31:16.794293   69888 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0704 00:31:16.799686   69888 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0704 00:31:16.807527   69888 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0704 00:31:17.068418   69888 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0704 00:31:17.521408   69888 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0704 00:31:18.068270   69888 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0704 00:31:18.068294   69888 kubeadm.go:309] 
	I0704 00:31:18.068373   69888 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0704 00:31:18.068382   69888 kubeadm.go:309] 
	I0704 00:31:18.068467   69888 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0704 00:31:18.068476   69888 kubeadm.go:309] 
	I0704 00:31:18.068544   69888 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0704 00:31:18.068632   69888 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0704 00:31:18.068707   69888 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0704 00:31:18.068715   69888 kubeadm.go:309] 
	I0704 00:31:18.068793   69888 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0704 00:31:18.068802   69888 kubeadm.go:309] 
	I0704 00:31:18.068870   69888 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0704 00:31:18.068879   69888 kubeadm.go:309] 
	I0704 00:31:18.068955   69888 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0704 00:31:18.069035   69888 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0704 00:31:18.069093   69888 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0704 00:31:18.069099   69888 kubeadm.go:309] 
	I0704 00:31:18.069180   69888 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0704 00:31:18.069268   69888 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0704 00:31:18.069283   69888 kubeadm.go:309] 
	I0704 00:31:18.069366   69888 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token g192ym.4ga9lygusnsoo11n \
	I0704 00:31:18.069508   69888 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 \
	I0704 00:31:18.069533   69888 kubeadm.go:309] 	--control-plane 
	I0704 00:31:18.069555   69888 kubeadm.go:309] 
	I0704 00:31:18.069668   69888 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0704 00:31:18.069677   69888 kubeadm.go:309] 
	I0704 00:31:18.069782   69888 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token g192ym.4ga9lygusnsoo11n \
	I0704 00:31:18.069933   69888 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 
	I0704 00:31:18.070314   69888 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:31:18.070334   69888 cni.go:84] Creating CNI manager for ""
	I0704 00:31:18.070343   69888 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:31:18.072109   69888 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:31:16.038633   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:16.039158   70275 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:31:16.039191   70275 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:31:16.039101   70310 retry.go:31] will retry after 3.27215705s: waiting for machine to come up
	I0704 00:31:19.313011   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:19.313546   70275 main.go:141] libmachine: (newest-cni-791847) Found IP for machine: 192.168.72.71
	I0704 00:31:19.313572   70275 main.go:141] libmachine: (newest-cni-791847) Reserving static IP address...
	I0704 00:31:19.313587   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has current primary IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:19.314057   70275 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "newest-cni-791847", mac: "52:54:00:85:d7:95", ip: "192.168.72.71"} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:31:11 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:31:19.314102   70275 main.go:141] libmachine: (newest-cni-791847) DBG | skip adding static IP to network mk-newest-cni-791847 - found existing host DHCP lease matching {name: "newest-cni-791847", mac: "52:54:00:85:d7:95", ip: "192.168.72.71"}
	I0704 00:31:19.314116   70275 main.go:141] libmachine: (newest-cni-791847) Reserved static IP address: 192.168.72.71
	I0704 00:31:19.314131   70275 main.go:141] libmachine: (newest-cni-791847) Waiting for SSH to be available...
	I0704 00:31:19.314148   70275 main.go:141] libmachine: (newest-cni-791847) DBG | Getting to WaitForSSH function...
	I0704 00:31:19.316995   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:19.317464   70275 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:31:11 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:31:19.317508   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:19.317635   70275 main.go:141] libmachine: (newest-cni-791847) DBG | Using SSH client type: external
	I0704 00:31:19.317662   70275 main.go:141] libmachine: (newest-cni-791847) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847/id_rsa (-rw-------)
	I0704 00:31:19.317689   70275 main.go:141] libmachine: (newest-cni-791847) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:31:19.317703   70275 main.go:141] libmachine: (newest-cni-791847) DBG | About to run SSH command:
	I0704 00:31:19.317712   70275 main.go:141] libmachine: (newest-cni-791847) DBG | exit 0
	I0704 00:31:19.452121   70275 main.go:141] libmachine: (newest-cni-791847) DBG | SSH cmd err, output: <nil>: 
	I0704 00:31:19.452477   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetConfigRaw
	I0704 00:31:19.453170   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetIP
	I0704 00:31:19.456034   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:19.456439   70275 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:31:11 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:31:19.456466   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:19.456742   70275 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/config.json ...
	I0704 00:31:19.457016   70275 machine.go:94] provisionDockerMachine start ...
	I0704 00:31:19.457040   70275 main.go:141] libmachine: (newest-cni-791847) Calling .DriverName
	I0704 00:31:19.457335   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:31:19.460002   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:19.460344   70275 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:31:11 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:31:19.460398   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:19.460515   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHPort
	I0704 00:31:19.460665   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:31:19.460884   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:31:19.461019   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHUsername
	I0704 00:31:19.461164   70275 main.go:141] libmachine: Using SSH client type: native
	I0704 00:31:19.461342   70275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.71 22 <nil> <nil>}
	I0704 00:31:19.461353   70275 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:31:18.073364   69888 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:31:18.085749   69888 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0704 00:31:18.112564   69888 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:31:18.112609   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:18.112653   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-676605 minikube.k8s.io/updated_at=2024_07_04T00_31_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e minikube.k8s.io/name=auto-676605 minikube.k8s.io/primary=true
	I0704 00:31:18.285401   69888 ops.go:34] apiserver oom_adj: -16
	I0704 00:31:18.285557   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:18.785611   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:19.285893   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:19.786123   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:20.286463   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:20.786172   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:21.285980   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:21.786442   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:22.286269   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:19.580418   70275 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:31:19.580451   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetMachineName
	I0704 00:31:19.580698   70275 buildroot.go:166] provisioning hostname "newest-cni-791847"
	I0704 00:31:19.580730   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetMachineName
	I0704 00:31:19.580951   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:31:19.583611   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:19.584067   70275 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:31:11 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:31:19.584096   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:19.584219   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHPort
	I0704 00:31:19.584403   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:31:19.584568   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:31:19.584704   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHUsername
	I0704 00:31:19.584876   70275 main.go:141] libmachine: Using SSH client type: native
	I0704 00:31:19.585092   70275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.71 22 <nil> <nil>}
	I0704 00:31:19.585112   70275 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-791847 && echo "newest-cni-791847" | sudo tee /etc/hostname
	I0704 00:31:19.715674   70275 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-791847
	
	I0704 00:31:19.715698   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:31:19.718773   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:19.719128   70275 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:31:11 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:31:19.719175   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:19.719357   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHPort
	I0704 00:31:19.719569   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:31:19.719801   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:31:19.719990   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHUsername
	I0704 00:31:19.720206   70275 main.go:141] libmachine: Using SSH client type: native
	I0704 00:31:19.720376   70275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.71 22 <nil> <nil>}
	I0704 00:31:19.720391   70275 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-791847' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-791847/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-791847' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:31:19.847096   70275 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:31:19.847186   70275 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:31:19.847267   70275 buildroot.go:174] setting up certificates
	I0704 00:31:19.847281   70275 provision.go:84] configureAuth start
	I0704 00:31:19.847298   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetMachineName
	I0704 00:31:19.847602   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetIP
	I0704 00:31:19.851205   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:19.851646   70275 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:31:11 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:31:19.851676   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:19.851908   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:31:19.855000   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:19.855461   70275 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:31:11 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:31:19.855489   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:19.855705   70275 provision.go:143] copyHostCerts
	I0704 00:31:19.855768   70275 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:31:19.855780   70275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:31:19.855849   70275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:31:19.856019   70275 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:31:19.856033   70275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:31:19.856063   70275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:31:19.856156   70275 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:31:19.856167   70275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:31:19.856193   70275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:31:19.856270   70275 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.newest-cni-791847 san=[127.0.0.1 192.168.72.71 localhost minikube newest-cni-791847]
	I0704 00:31:20.350502   70275 provision.go:177] copyRemoteCerts
	I0704 00:31:20.350594   70275 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:31:20.350630   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:31:20.353637   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:20.354065   70275 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:31:11 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:31:20.354100   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:20.354347   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHPort
	I0704 00:31:20.354576   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:31:20.354735   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHUsername
	I0704 00:31:20.354867   70275 sshutil.go:53] new ssh client: &{IP:192.168.72.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847/id_rsa Username:docker}
	I0704 00:31:20.448204   70275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:31:20.477084   70275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0704 00:31:20.506633   70275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0704 00:31:20.535627   70275 provision.go:87] duration metric: took 688.33248ms to configureAuth
	I0704 00:31:20.535654   70275 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:31:20.535871   70275 config.go:182] Loaded profile config "newest-cni-791847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:31:20.536001   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:31:20.538928   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:20.539378   70275 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:31:11 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:31:20.539406   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:20.539591   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHPort
	I0704 00:31:20.539825   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:31:20.540002   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:31:20.540130   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHUsername
	I0704 00:31:20.540266   70275 main.go:141] libmachine: Using SSH client type: native
	I0704 00:31:20.540410   70275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.71 22 <nil> <nil>}
	I0704 00:31:20.540425   70275 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:31:20.827995   70275 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:31:20.828022   70275 machine.go:97] duration metric: took 1.370990148s to provisionDockerMachine
	I0704 00:31:20.828035   70275 start.go:293] postStartSetup for "newest-cni-791847" (driver="kvm2")
	I0704 00:31:20.828046   70275 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:31:20.828065   70275 main.go:141] libmachine: (newest-cni-791847) Calling .DriverName
	I0704 00:31:20.828374   70275 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:31:20.828396   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:31:20.831534   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:20.832025   70275 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:31:11 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:31:20.832056   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:20.832332   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHPort
	I0704 00:31:20.832548   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:31:20.832731   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHUsername
	I0704 00:31:20.832878   70275 sshutil.go:53] new ssh client: &{IP:192.168.72.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847/id_rsa Username:docker}
	I0704 00:31:20.924415   70275 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:31:20.929366   70275 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:31:20.929397   70275 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:31:20.929476   70275 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:31:20.929573   70275 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:31:20.929664   70275 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:31:20.942083   70275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:31:20.971048   70275 start.go:296] duration metric: took 142.997686ms for postStartSetup
	I0704 00:31:20.971090   70275 fix.go:56] duration metric: took 21.305539176s for fixHost
	I0704 00:31:20.971116   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:31:20.973581   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:20.973924   70275 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:31:11 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:31:20.973960   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:20.974126   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHPort
	I0704 00:31:20.974359   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:31:20.974548   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:31:20.974702   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHUsername
	I0704 00:31:20.974856   70275 main.go:141] libmachine: Using SSH client type: native
	I0704 00:31:20.975010   70275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.71 22 <nil> <nil>}
	I0704 00:31:20.975019   70275 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0704 00:31:21.093428   70275 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720053081.054803608
	
	I0704 00:31:21.093454   70275 fix.go:216] guest clock: 1720053081.054803608
	I0704 00:31:21.093462   70275 fix.go:229] Guest: 2024-07-04 00:31:21.054803608 +0000 UTC Remote: 2024-07-04 00:31:20.971095802 +0000 UTC m=+21.456973303 (delta=83.707806ms)
	I0704 00:31:21.093487   70275 fix.go:200] guest clock delta is within tolerance: 83.707806ms
	I0704 00:31:21.093493   70275 start.go:83] releasing machines lock for "newest-cni-791847", held for 21.427958397s
	I0704 00:31:21.093519   70275 main.go:141] libmachine: (newest-cni-791847) Calling .DriverName
	I0704 00:31:21.093785   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetIP
	I0704 00:31:21.096631   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:21.097064   70275 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:31:11 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:31:21.097092   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:21.097248   70275 main.go:141] libmachine: (newest-cni-791847) Calling .DriverName
	I0704 00:31:21.097812   70275 main.go:141] libmachine: (newest-cni-791847) Calling .DriverName
	I0704 00:31:21.098001   70275 main.go:141] libmachine: (newest-cni-791847) Calling .DriverName
	I0704 00:31:21.098085   70275 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:31:21.098123   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:31:21.098216   70275 ssh_runner.go:195] Run: cat /version.json
	I0704 00:31:21.098245   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:31:21.101057   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:21.101363   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:21.101411   70275 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:31:11 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:31:21.101435   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:21.101611   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHPort
	I0704 00:31:21.101817   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:31:21.101811   70275 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:31:11 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:31:21.101856   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:21.101974   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHPort
	I0704 00:31:21.102074   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHUsername
	I0704 00:31:21.102187   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:31:21.102229   70275 sshutil.go:53] new ssh client: &{IP:192.168.72.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847/id_rsa Username:docker}
	I0704 00:31:21.102382   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHUsername
	I0704 00:31:21.102534   70275 sshutil.go:53] new ssh client: &{IP:192.168.72.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847/id_rsa Username:docker}
	I0704 00:31:21.185528   70275 ssh_runner.go:195] Run: systemctl --version
	I0704 00:31:21.209518   70275 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:31:21.355591   70275 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:31:21.363727   70275 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:31:21.363815   70275 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:31:21.383091   70275 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:31:21.383119   70275 start.go:494] detecting cgroup driver to use...
	I0704 00:31:21.383209   70275 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:31:21.400822   70275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:31:21.416323   70275 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:31:21.416395   70275 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:31:21.431627   70275 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:31:21.447408   70275 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:31:21.575766   70275 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:31:21.767058   70275 docker.go:233] disabling docker service ...
	I0704 00:31:21.767127   70275 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:31:21.782486   70275 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:31:21.797790   70275 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:31:21.941246   70275 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:31:22.072867   70275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:31:22.088418   70275 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:31:22.110364   70275 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:31:22.110487   70275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:31:22.122869   70275 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:31:22.122926   70275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:31:22.135933   70275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:31:22.149187   70275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:31:22.161173   70275 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:31:22.172976   70275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:31:22.183991   70275 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:31:22.203338   70275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
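For reference, here is a minimal way to confirm the CRI-O settings that the sed commands above are meant to produce. This is an illustrative sketch only; the profile name and file path are taken from this log, and the expected values are inferred from the commands above rather than captured output:

    # Hypothetical verification of the edited CRI-O drop-in over minikube ssh
    minikube -p newest-cni-791847 ssh -- "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
    # Expected, based on the sed edits above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",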
	I0704 00:31:22.215057   70275 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:31:22.226278   70275 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:31:22.226396   70275 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:31:22.242492   70275 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
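The sysctl probe above failed until br_netfilter was loaded; a quick sketch of how one could double-check the module and IPv4 forwarding afterwards (not part of the recorded run):

    # Hypothetical check: confirm br_netfilter is loaded and forwarding is enabled
    minikube -p newest-cni-791847 ssh -- "lsmod | grep br_netfilter; sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward"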
	I0704 00:31:22.252962   70275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:31:22.404567   70275 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:31:22.575957   70275 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:31:22.576034   70275 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:31:22.581466   70275 start.go:562] Will wait 60s for crictl version
	I0704 00:31:22.581526   70275 ssh_runner.go:195] Run: which crictl
	I0704 00:31:22.585984   70275 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:31:22.636123   70275 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:31:22.636213   70275 ssh_runner.go:195] Run: crio --version
	I0704 00:31:22.670349   70275 ssh_runner.go:195] Run: crio --version
	I0704 00:31:22.703806   70275 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:31:22.704948   70275 main.go:141] libmachine: (newest-cni-791847) Calling .GetIP
	I0704 00:31:22.708064   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:22.708385   70275 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:31:11 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:31:22.708435   70275 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:31:22.708627   70275 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0704 00:31:22.713392   70275 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:31:22.728321   70275 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0704 00:31:22.729528   70275 kubeadm.go:877] updating cluster {Name:newest-cni-791847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:newest-cni-791847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.71 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6
m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:31:22.729649   70275 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:31:22.729705   70275 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:31:22.771263   70275 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:31:22.771346   70275 ssh_runner.go:195] Run: which lz4
	I0704 00:31:22.775867   70275 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0704 00:31:22.781051   70275 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:31:22.781094   70275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0704 00:31:24.407510   70275 crio.go:462] duration metric: took 1.631683869s to copy over tarball
	I0704 00:31:24.407571   70275 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:31:22.786298   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:23.285872   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:23.786082   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:24.286192   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:24.785805   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:25.286079   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:25.785885   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:26.285656   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:26.786365   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:27.285945   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:26.866766   70275 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.459168485s)
	I0704 00:31:26.866795   70275 crio.go:469] duration metric: took 2.459263044s to extract the tarball
	I0704 00:31:26.866804   70275 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:31:26.907431   70275 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:31:26.951970   70275 crio.go:514] all images are preloaded for cri-o runtime.
	I0704 00:31:26.952000   70275 cache_images.go:84] Images are preloaded, skipping loading
	I0704 00:31:26.952009   70275 kubeadm.go:928] updating node { 192.168.72.71 8443 v1.30.2 crio true true} ...
	I0704 00:31:26.952140   70275 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-791847 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:newest-cni-791847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:31:26.952233   70275 ssh_runner.go:195] Run: crio config
	I0704 00:31:27.002004   70275 cni.go:84] Creating CNI manager for ""
	I0704 00:31:27.002033   70275 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:31:27.002042   70275 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0704 00:31:27.002066   70275 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.71 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-791847 NodeName:newest-cni-791847 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[
] NodeIP:192.168.72.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:31:27.002234   70275 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-791847"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
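The rendered config above folds in this profile's extra options (the kubeadm pod-network-cidr of 10.42.0.0/16 and the ServerSideApply=true feature gate). As a rough illustration only, since the exact command used by this test is not recorded in the log, flags along these lines are how such a config is normally requested from minikube:

    # Illustrative invocation; CIDR and feature gate copied from the config dump above, everything else assumed
    minikube start -p newest-cni-791847 \
      --driver=kvm2 \
      --container-runtime=crio \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --feature-gates=ServerSideApply=true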
	
	I0704 00:31:27.002304   70275 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:31:27.014007   70275 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:31:27.014085   70275 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:31:27.025084   70275 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0704 00:31:27.043308   70275 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:31:27.063106   70275 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
	I0704 00:31:27.082724   70275 ssh_runner.go:195] Run: grep 192.168.72.71	control-plane.minikube.internal$ /etc/hosts
	I0704 00:31:27.087006   70275 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:31:27.102729   70275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:31:27.236567   70275 ssh_runner.go:195] Run: sudo systemctl start kubelet
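If kubelet failed to start at this point, the usual next step would be to look at the unit state and recent journal entries; a generic sketch using the profile name from this log, not a step from the recorded run:

    # Hypothetical debugging step
    minikube -p newest-cni-791847 ssh -- "sudo systemctl status kubelet --no-pager; sudo journalctl -u kubelet --no-pager | tail -n 50"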
	I0704 00:31:27.255499   70275 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847 for IP: 192.168.72.71
	I0704 00:31:27.255527   70275 certs.go:194] generating shared ca certs ...
	I0704 00:31:27.255544   70275 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:31:27.255712   70275 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:31:27.255769   70275 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:31:27.255785   70275 certs.go:256] generating profile certs ...
	I0704 00:31:27.255912   70275 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/client.key
	I0704 00:31:27.255983   70275 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/apiserver.key.eb601083
	I0704 00:31:27.256037   70275 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/proxy-client.key
	I0704 00:31:27.256180   70275 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:31:27.256225   70275 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:31:27.256238   70275 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:31:27.256269   70275 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:31:27.256298   70275 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:31:27.256329   70275 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:31:27.256388   70275 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:31:27.257278   70275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:31:27.305650   70275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:31:27.340414   70275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:31:27.377253   70275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:31:27.420384   70275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0704 00:31:27.454047   70275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:31:27.484335   70275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:31:27.516454   70275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0704 00:31:27.554629   70275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:31:27.586351   70275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:31:27.615613   70275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:31:27.643032   70275 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:31:27.662666   70275 ssh_runner.go:195] Run: openssl version
	I0704 00:31:27.669321   70275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:31:27.684464   70275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:31:27.690099   70275 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:31:27.690168   70275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:31:27.697606   70275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:31:27.712277   70275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:31:27.726553   70275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:31:27.731816   70275 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:31:27.731912   70275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:31:27.738766   70275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:31:27.752961   70275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:31:27.766617   70275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:31:27.771993   70275 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:31:27.772053   70275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:31:27.778615   70275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:31:27.793374   70275 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:31:27.799643   70275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:31:27.806859   70275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:31:27.814277   70275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:31:27.822116   70275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:31:27.829233   70275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:31:27.836781   70275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0704 00:31:27.843940   70275 kubeadm.go:391] StartCluster: {Name:newest-cni-791847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:newest-cni-791847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.71 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s
ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:31:27.844066   70275 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:31:27.844181   70275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:31:27.890885   70275 cri.go:89] found id: ""
	I0704 00:31:27.890966   70275 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:31:27.904765   70275 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:31:27.904790   70275 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:31:27.904796   70275 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:31:27.904842   70275 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:31:27.917665   70275 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:31:27.980312   70275 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-791847" does not appear in /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:31:27.981162   70275 kubeconfig.go:62] /home/jenkins/minikube-integration/18998-9396/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-791847" cluster setting kubeconfig missing "newest-cni-791847" context setting]
	I0704 00:31:27.982459   70275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:31:28.075296   70275 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:31:28.091013   70275 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.71
	I0704 00:31:28.091061   70275 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:31:28.091076   70275 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:31:28.091135   70275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:31:28.149785   70275 cri.go:89] found id: ""
	I0704 00:31:28.149852   70275 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:31:28.180291   70275 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:31:28.193693   70275 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:31:28.193727   70275 kubeadm.go:156] found existing configuration files:
	
	I0704 00:31:28.193786   70275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:31:28.206369   70275 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:31:28.206466   70275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:31:28.218077   70275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:31:28.229894   70275 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:31:28.229960   70275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:31:28.241481   70275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:31:28.252386   70275 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:31:28.252453   70275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:31:28.264111   70275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:31:28.275144   70275 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:31:28.275221   70275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:31:28.287043   70275 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:31:28.299096   70275 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:31:28.444638   70275 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:31:29.358663   70275 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:31:27.786523   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:28.329021   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:28.973805   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:29.286617   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:29.785876   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:30.286236   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:30.786161   69888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:31:31.059422   69888 kubeadm.go:1107] duration metric: took 12.946861808s to wait for elevateKubeSystemPrivileges
	W0704 00:31:31.059465   69888 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0704 00:31:31.059475   69888 kubeadm.go:393] duration metric: took 24.476297609s to StartCluster
	I0704 00:31:31.059495   69888 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:31:31.059569   69888 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:31:31.062089   69888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:31:31.062392   69888 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.61.17 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:31:31.062524   69888 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0704 00:31:31.062550   69888 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:31:31.062651   69888 addons.go:69] Setting default-storageclass=true in profile "auto-676605"
	I0704 00:31:31.062693   69888 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-676605"
	I0704 00:31:31.062751   69888 config.go:182] Loaded profile config "auto-676605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:31:31.062651   69888 addons.go:69] Setting storage-provisioner=true in profile "auto-676605"
	I0704 00:31:31.062912   69888 addons.go:234] Setting addon storage-provisioner=true in "auto-676605"
	I0704 00:31:31.062944   69888 host.go:66] Checking if "auto-676605" exists ...
	I0704 00:31:31.063125   69888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:31:31.063144   69888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:31:31.063346   69888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:31:31.063368   69888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:31:31.064344   69888 out.go:177] * Verifying Kubernetes components...
	I0704 00:31:31.065766   69888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:31:31.083106   69888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
	I0704 00:31:31.083151   69888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35703
	I0704 00:31:31.083656   69888 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:31:31.083686   69888 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:31:31.084205   69888 main.go:141] libmachine: Using API Version  1
	I0704 00:31:31.084230   69888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:31:31.084323   69888 main.go:141] libmachine: Using API Version  1
	I0704 00:31:31.084341   69888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:31:31.084646   69888 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:31:31.084758   69888 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:31:31.084957   69888 main.go:141] libmachine: (auto-676605) Calling .GetState
	I0704 00:31:31.085410   69888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:31:31.085452   69888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:31:31.089960   69888 addons.go:234] Setting addon default-storageclass=true in "auto-676605"
	I0704 00:31:31.090006   69888 host.go:66] Checking if "auto-676605" exists ...
	I0704 00:31:31.090389   69888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:31:31.090428   69888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:31:31.107461   69888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41781
	I0704 00:31:31.108131   69888 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:31:31.110238   69888 main.go:141] libmachine: Using API Version  1
	I0704 00:31:31.110263   69888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:31:31.110408   69888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33795
	I0704 00:31:31.110960   69888 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:31:31.110974   69888 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:31:31.111207   69888 main.go:141] libmachine: (auto-676605) Calling .GetState
	I0704 00:31:31.111611   69888 main.go:141] libmachine: Using API Version  1
	I0704 00:31:31.111636   69888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:31:31.112123   69888 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:31:31.112757   69888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:31:31.112784   69888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:31:31.113391   69888 main.go:141] libmachine: (auto-676605) Calling .DriverName
	I0704 00:31:31.115800   69888 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:31:31.117739   69888 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:31:31.117764   69888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:31:31.117788   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHHostname
	I0704 00:31:31.121441   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:31:31.122013   69888 main.go:141] libmachine: (auto-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ba:b5", ip: ""} in network mk-auto-676605: {Iface:virbr3 ExpiryTime:2024-07-04 01:30:52 +0000 UTC Type:0 Mac:52:54:00:d6:ba:b5 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:auto-676605 Clientid:01:52:54:00:d6:ba:b5}
	I0704 00:31:31.122050   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:31:31.122430   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHPort
	I0704 00:31:31.122668   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHKeyPath
	I0704 00:31:31.122855   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHUsername
	I0704 00:31:31.123039   69888 sshutil.go:53] new ssh client: &{IP:192.168.61.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/auto-676605/id_rsa Username:docker}
	I0704 00:31:31.136233   69888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37821
	I0704 00:31:31.136799   69888 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:31:31.137519   69888 main.go:141] libmachine: Using API Version  1
	I0704 00:31:31.137547   69888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:31:31.137936   69888 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:31:31.138142   69888 main.go:141] libmachine: (auto-676605) Calling .GetState
	I0704 00:31:31.140171   69888 main.go:141] libmachine: (auto-676605) Calling .DriverName
	I0704 00:31:31.140581   69888 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:31:31.140595   69888 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:31:31.140616   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHHostname
	I0704 00:31:31.143979   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:31:31.144482   69888 main.go:141] libmachine: (auto-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ba:b5", ip: ""} in network mk-auto-676605: {Iface:virbr3 ExpiryTime:2024-07-04 01:30:52 +0000 UTC Type:0 Mac:52:54:00:d6:ba:b5 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:auto-676605 Clientid:01:52:54:00:d6:ba:b5}
	I0704 00:31:31.144506   69888 main.go:141] libmachine: (auto-676605) DBG | domain auto-676605 has defined IP address 192.168.61.17 and MAC address 52:54:00:d6:ba:b5 in network mk-auto-676605
	I0704 00:31:31.144851   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHPort
	I0704 00:31:31.145054   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHKeyPath
	I0704 00:31:31.145240   69888 main.go:141] libmachine: (auto-676605) Calling .GetSSHUsername
	I0704 00:31:31.145488   69888 sshutil.go:53] new ssh client: &{IP:192.168.61.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/auto-676605/id_rsa Username:docker}
	I0704 00:31:31.377686   69888 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:31:31.377976   69888 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
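The sed pipeline above injects a hosts block and a log directive into the CoreDNS Corefile before replacing the ConfigMap. A sketch of how to view the result (context name taken from this log; not output from the run):

    # Hypothetical check: dump the patched Corefile from the live ConfigMap
    kubectl --context auto-676605 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # The injected block, per the sed expression above, should resemble:
    #   hosts {
    #      192.168.61.1 host.minikube.internal
    #      fallthrough
    #   }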
	I0704 00:31:31.410384   69888 node_ready.go:35] waiting up to 15m0s for node "auto-676605" to be "Ready" ...
	I0704 00:31:31.419965   69888 node_ready.go:49] node "auto-676605" has status "Ready":"True"
	I0704 00:31:31.419992   69888 node_ready.go:38] duration metric: took 9.577681ms for node "auto-676605" to be "Ready" ...
	I0704 00:31:31.420004   69888 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:31:31.428842   69888 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-dp4r9" in "kube-system" namespace to be "Ready" ...
	I0704 00:31:31.490069   69888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:31:31.511712   69888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:31:31.863871   69888 start.go:946] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0704 00:31:32.372682   69888 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-676605" context rescaled to 1 replicas
	I0704 00:31:32.524431   69888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.01267684s)
	I0704 00:31:32.524493   69888 main.go:141] libmachine: Making call to close driver server
	I0704 00:31:32.524519   69888 main.go:141] libmachine: (auto-676605) Calling .Close
	I0704 00:31:32.524779   69888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.034669211s)
	I0704 00:31:32.524830   69888 main.go:141] libmachine: Making call to close driver server
	I0704 00:31:32.524841   69888 main.go:141] libmachine: (auto-676605) Calling .Close
	I0704 00:31:32.524979   69888 main.go:141] libmachine: (auto-676605) DBG | Closing plugin on server side
	I0704 00:31:32.524902   69888 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:31:32.525025   69888 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:31:32.525093   69888 main.go:141] libmachine: (auto-676605) DBG | Closing plugin on server side
	I0704 00:31:32.525036   69888 main.go:141] libmachine: Making call to close driver server
	I0704 00:31:32.525140   69888 main.go:141] libmachine: (auto-676605) Calling .Close
	I0704 00:31:32.525349   69888 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:31:32.525362   69888 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:31:32.526454   69888 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:31:32.526472   69888 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:31:32.526481   69888 main.go:141] libmachine: Making call to close driver server
	I0704 00:31:32.526504   69888 main.go:141] libmachine: (auto-676605) Calling .Close
	I0704 00:31:32.526814   69888 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:31:32.526859   69888 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:31:32.585628   69888 main.go:141] libmachine: Making call to close driver server
	I0704 00:31:32.585658   69888 main.go:141] libmachine: (auto-676605) Calling .Close
	I0704 00:31:32.586017   69888 main.go:141] libmachine: (auto-676605) DBG | Closing plugin on server side
	I0704 00:31:32.586054   69888 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:31:32.586067   69888 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:31:32.589966   69888 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0704 00:31:32.592938   69888 addons.go:510] duration metric: took 1.530390153s for enable addons: enabled=[storage-provisioner default-storageclass]
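A quick way to confirm the two addons reported above, assuming the same profile and kube-context (a sketch only; the pod label comes from the storage-provisioner manifest shown later in the CRI-O log):

    # Hypothetical follow-up checks
    minikube -p auto-676605 addons list | grep -E 'storage-provisioner|default-storageclass'
    kubectl --context auto-676605 -n kube-system get pods -l integration-test=storage-provisioner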
	
	
	==> CRI-O <==
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.795802739Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b275cdf-2c62-4413-a2c3-334bc2dde01f name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.796112499Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1dd8821ccd1c25d0ec9a7c91778e5c418984bee5b26628862eb45ec7b36f93ec,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-jpmsg,Uid:e2561edc-d580-461c-acae-218e6b7a2f67,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051839807307024,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-jpmsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2561edc-d580-461c-acae-218e6b7a2f67,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-04T00:10:31.751994777Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e7874c8a943a56be2e2374a1bbac2572afacbb4045e481348c4633e8b99a7f63,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-2bn7d,Uid:e6d756a8-df4e-414b-b44c-32fb728c6
feb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051839616177587,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-2bn7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d756a8-df4e-414b-b44c-32fb728c6feb,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-04T00:10:31.751980618Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:06e56aaafbfc94d4741d164602e4acb4b5961cc626748db0be099fab64defd3e,Metadata:&PodSandboxMetadata{Name:busybox,Uid:c43f2a46-7ed3-4b53-9f7e-dcfcce7e17e8,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051839615094555,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c43f2a46-7ed3-4b53-9f7e-dcfcce7e17e8,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-04T00:10:31.
751990240Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b368293f3bee8b7fb63c1e3197aefe68e1256dc15972686a15d1323e45192c99,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1ac6edec-3e4e-42bd-8848-1388594611e1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051832066980254,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac6edec-3e4e-42bd-8848-1388594611e1,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-
minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-04T00:10:31.751989110Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:739523f0f3056b9e830be25f5605bf5218ec42cd2963f90c0be45944ada73a66,Metadata:&PodSandboxMetadata{Name:kube-proxy-9phtm,Uid:6b5a4c0e-632d-4c1c-bfa7-f53448618efb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051832065486600,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9phtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5a4c0e-632d-4c1c-bfa7-f53448618efb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.i
o/config.seen: 2024-07-04T00:10:31.751992861Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:684b1cb3c06c71b58cd54ec27ef617a80358de57835e3e5e0339a9ca11c2027b,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-687975,Uid:ed9b1ee5323bfe1840da003810cc9d9c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051828240716338,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed9b1ee5323bfe1840da003810cc9d9c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.213:2379,kubernetes.io/config.hash: ed9b1ee5323bfe1840da003810cc9d9c,kubernetes.io/config.seen: 2024-07-04T00:10:27.844587453Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3b5b194229c2c7fe1e495c6f82e1f6272ea3de03cdae2c86e18118d9b39e39c1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-687
975,Uid:5de83de9c9b65f4bd1f185efdb900cd8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051828234573294,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5de83de9c9b65f4bd1f185efdb900cd8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.213:8443,kubernetes.io/config.hash: 5de83de9c9b65f4bd1f185efdb900cd8,kubernetes.io/config.seen: 2024-07-04T00:10:27.734258570Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a9c52f9deb0ad607e612f6e5742a572b8842b510864fae5f6e54e66298537791,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-687975,Uid:c4dc13dfdc5f0b7be029f80782c2101d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051828223700882,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4dc13dfdc5f0b7be029f80782c2101d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c4dc13dfdc5f0b7be029f80782c2101d,kubernetes.io/config.seen: 2024-07-04T00:10:27.734264253Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1473c2739c681ceb03807ba56c9a6b5be4cdce1658019601cb3598841e6b67a3,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-687975,Uid:a6a5003154c341c16886b3b155673039,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051828215586074,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6a5003154c341c16886b3b155673039,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a6a5003154c341c16886b3b155
673039,kubernetes.io/config.seen: 2024-07-04T00:10:27.734263095Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5b275cdf-2c62-4413-a2c3-334bc2dde01f name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.798105882Z" level=debug msg="Request: &PullImageRequest{Image:&ImageSpec{Image:fake.domain/registry.k8s.io/echoserver:1.4,Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-04T00:10:31.751994777Z,kubernetes.io/config.source: api,},UserSpecifiedImage:,RuntimeHandler:,},Auth:nil,SandboxConfig:&PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-jpmsg,Uid:e2561edc-d580-461c-acae-218e6b7a2f67,Namespace:kube-system,Attempt:0,},Hostname:metrics-server-569cc877fc-jpmsg,LogDirectory:/var/log/pods/kube-system_metrics-server-569cc877fc-jpmsg_e2561edc-d580-461c-acae-218e6b7a2f67,DnsConfig:&DNSConfig{Servers:[10.96.0.10],Searches:[kube-system.svc.cluster.local svc.cluster.local cluster.local],Options:[ndots:5],},PortMappings:[]*PortMapping{&PortMapping{Protocol:TCP,ContainerPort:4443,HostPort:0,HostIp:,},},Labels:map[string]string{io.kubernetes.pod.name: metrics-server-569cc877fc-jpmsg,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: e2561edc-d580-461c-acae-218e6b7a2f67,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-04T00:10:31.751994777Z,kubernetes.io/config.source: api,},Linux:&LinuxPodSandboxConfig{CgroupParent:/kubepods/burstable/pode2561edc-d580-461c-acae-218e6b7a2f67,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},SelinuxOptions:nil,RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[],Privileged:false,SeccompProfilePath:,RunAsGroup:nil,Seccomp:&SecurityProfile{ProfileType:RuntimeDefault,LocalhostRef:,},Apparmor:nil,},Sysctls:map[string]string{},Overhead:&LinuxContainerResources{CpuPeriod:0,CpuQuota:0,CpuShares:0,MemoryLimitInBytes:0,OomScoreAdj:0,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{},Unified:map[string]string{},MemorySwapLimitInBytes:0,},Resources:&LinuxContainerResources{CpuPeriod:100000,CpuQuota
:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:0,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{},Unified:map[string]string{memory.oom.group: 1,},MemorySwapLimitInBytes:0,},},Windows:nil,},}" file="otel-collector/interceptors.go:62" id=e73f1baa-0244-4fc1-b9e6-fa7bcb018e0a name=/runtime.v1.ImageService/PullImage
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.798215653Z" level=info msg="Pulling image: fake.domain/registry.k8s.io/echoserver:1.4" file="server/image_pull.go:41" id=e73f1baa-0244-4fc1-b9e6-fa7bcb018e0a name=/runtime.v1.ImageService/PullImage
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.798302102Z" level=debug msg="Using pull policy path for image fake.domain/registry.k8s.io/echoserver:1.4: " file="server/image_pull.go:150" id=e73f1baa-0244-4fc1-b9e6-fa7bcb018e0a name=/runtime.v1.ImageService/PullImage
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.798343806Z" level=debug msg="Skipping non-existing decryption_keys_path: /etc/crio/keys/" file="server/utils.go:89"
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.798853539Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e39d7b66-562a-4158-adc7-27dd4987a1bb name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.798927458Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e39d7b66-562a-4158-adc7-27dd4987a1bb name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.799297294Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f,PodSandboxId:b368293f3bee8b7fb63c1e3197aefe68e1256dc15972686a15d1323e45192c99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720051863005696629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac6edec-3e4e-42bd-8848-1388594611e1,},Annotations:map[string]string{io.kubernetes.container.hash: 22c45ef9,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56bf46a96bbb2f1a3de7cf20da1dae9b715251d6673109d1a9f0f11ae81cc5f6,PodSandboxId:06e56aaafbfc94d4741d164602e4acb4b5961cc626748db0be099fab64defd3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720051843131205792,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c43f2a46-7ed3-4b53-9f7e-dcfcce7e17e8,},Annotations:map[string]string{io.kubernetes.container.hash: 8cfec9ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3,PodSandboxId:e7874c8a943a56be2e2374a1bbac2572afacbb4045e481348c4633e8b99a7f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051840236005665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2bn7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d756a8-df4e-414b-b44c-32fb728c6feb,},Annotations:map[string]string{io.kubernetes.container.hash: 70139600,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78,PodSandboxId:739523f0f3056b9e830be25f5605bf5218ec42cd2963f90c0be45944ada73a66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051832197738948,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9phtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5a4c0e-632d-4c1c-b
fa7-f53448618efb,},Annotations:map[string]string{io.kubernetes.container.hash: e2f0725b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864,PodSandboxId:684b1cb3c06c71b58cd54ec27ef617a80358de57835e3e5e0339a9ca11c2027b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051828531927058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed9b1ee5323bfe1840da003810cc9d9c,},Annotations:map[string]
string{io.kubernetes.container.hash: 6edda731,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d,PodSandboxId:3b5b194229c2c7fe1e495c6f82e1f6272ea3de03cdae2c86e18118d9b39e39c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051828464031375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5de83de9c9b65f4bd1f185efdb900cd8,},Annotations:map[string]string{io.ku
bernetes.container.hash: d4747eae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0,PodSandboxId:a9c52f9deb0ad607e612f6e5742a572b8842b510864fae5f6e54e66298537791,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051828440157316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4dc13dfdc5f0b7be029f80782c2101d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d,PodSandboxId:1473c2739c681ceb03807ba56c9a6b5be4cdce1658019601cb3598841e6b67a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051828450158559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-687975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6a5003154c341c16886b3b155673039,},Annotations:map[string]string{io.
kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e39d7b66-562a-4158-adc7-27dd4987a1bb name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.805248998Z" level=debug msg="Using registries.d directory /etc/containers/registries.d" file="docker/registries_d.go:80"
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.805573349Z" level=info msg="Trying to access \"fake.domain/registry.k8s.io/echoserver:1.4\"" file="docker/docker_image_src.go:87"
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.805835453Z" level=debug msg="No credentials matching fake.domain/registry.k8s.io/echoserver found in /run/containers/0/auth.json" file="config/config.go:846"
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.805954219Z" level=debug msg="No credentials matching fake.domain/registry.k8s.io/echoserver found in /root/.config/containers/auth.json" file="config/config.go:846"
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.806050223Z" level=debug msg="No credentials matching fake.domain/registry.k8s.io/echoserver found in /root/.docker/config.json" file="config/config.go:846"
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.806147507Z" level=debug msg="No credentials matching fake.domain/registry.k8s.io/echoserver found in /root/.dockercfg" file="config/config.go:846"
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.806226251Z" level=debug msg="No credentials for fake.domain/registry.k8s.io/echoserver found" file="config/config.go:272"
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.806341017Z" level=debug msg=" No signature storage configuration found for fake.domain/registry.k8s.io/echoserver:1.4, using built-in default file:///var/lib/containers/sigstore" file="docker/registries_d.go:176"
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.806528132Z" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/fake.domain" file="tlsclientconfig/tlsclientconfig.go:20"
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.806755267Z" level=debug msg="GET https://fake.domain/v2/" file="docker/docker_client.go:631"
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.811317230Z" level=debug msg="Ping https://fake.domain/v2/ err Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host (&url.Error{Op:\"Get\", URL:\"https://fake.domain/v2/\", Err:(*net.OpError)(0xc000ca6730)})" file="docker/docker_client.go:897"
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.811712444Z" level=debug msg="GET https://fake.domain/v1/_ping" file="docker/docker_client.go:631"
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.815836439Z" level=debug msg="Ping https://fake.domain/v1/_ping err Get \"https://fake.domain/v1/_ping\": dial tcp: lookup fake.domain: no such host (&url.Error{Op:\"Get\", URL:\"https://fake.domain/v1/_ping\", Err:(*net.OpError)(0xc000ca6910)})" file="docker/docker_client.go:927"
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.816364749Z" level=debug msg="Accessing \"fake.domain/registry.k8s.io/echoserver:1.4\" failed: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" file="docker/docker_image_src.go:95"
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.816487597Z" level=debug msg="Error preparing image fake.domain/registry.k8s.io/echoserver:1.4: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" file="server/image_pull.go:213" id=e73f1baa-0244-4fc1-b9e6-fa7bcb018e0a name=/runtime.v1.ImageService/PullImage
	Jul 04 00:31:33 embed-certs-687975 crio[732]: time="2024-07-04 00:31:33.816784122Z" level=debug msg="Response error: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" file="otel-collector/interceptors.go:71" id=e73f1baa-0244-4fc1-b9e6-fa7bcb018e0a name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5718f2328eaa9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       3                   b368293f3bee8       storage-provisioner
	56bf46a96bbb2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   06e56aaafbfc9       busybox
	ccbd6757ef6ec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   e7874c8a943a5       coredns-7db6d8ff4d-2bn7d
	0a20f1a805446       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       2                   b368293f3bee8       storage-provisioner
	0758cc11c578a       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      21 minutes ago      Running             kube-proxy                1                   739523f0f3056       kube-proxy-9phtm
	e2490c1548394       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      21 minutes ago      Running             etcd                      1                   684b1cb3c06c7       etcd-embed-certs-687975
	2c26905e98271       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      21 minutes ago      Running             kube-apiserver            1                   3b5b194229c2c       kube-apiserver-embed-certs-687975
	49302273be8ed       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      21 minutes ago      Running             kube-controller-manager   1                   1473c2739c681       kube-controller-manager-embed-certs-687975
	bac9db9686284       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      21 minutes ago      Running             kube-scheduler            1                   a9c52f9deb0ad       kube-scheduler-embed-certs-687975
	
	
	==> coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46306 - 23492 "HINFO IN 5648278252653877547.872263897006372740. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014111921s
	
	
	==> describe nodes <==
	Name:               embed-certs-687975
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-687975
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=embed-certs-687975
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_04T00_02_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Jul 2024 00:02:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-687975
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Jul 2024 00:31:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Jul 2024 00:31:26 +0000   Thu, 04 Jul 2024 00:02:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Jul 2024 00:31:26 +0000   Thu, 04 Jul 2024 00:02:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Jul 2024 00:31:26 +0000   Thu, 04 Jul 2024 00:02:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Jul 2024 00:31:26 +0000   Thu, 04 Jul 2024 00:10:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.213
	  Hostname:    embed-certs-687975
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9014716dc5654ce3b2e482f446692f40
	  System UUID:                9014716d-c565-4ce3-b2e4-82f446692f40
	  Boot ID:                    fb68e1e1-c3e6-484a-ae3e-33a5a2249f14
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-2bn7d                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-687975                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-687975             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-687975    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-9phtm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-687975             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-jpmsg               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-687975 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-687975 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-687975 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                29m                kubelet          Node embed-certs-687975 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-687975 event: Registered Node embed-certs-687975 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-687975 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-687975 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-687975 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-687975 event: Registered Node embed-certs-687975 in Controller
	
	
	==> dmesg <==
	[Jul 4 00:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051582] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040529] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.607478] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.474040] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.563580] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.200328] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.060691] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067953] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.207358] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.128405] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.304639] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +4.723742] systemd-fstab-generator[813]: Ignoring "noauto" option for root device
	[  +0.069454] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.398550] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +4.606019] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.001423] systemd-fstab-generator[1546]: Ignoring "noauto" option for root device
	[  +4.519621] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.635377] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] <==
	{"level":"info","ts":"2024-07-04T00:11:32.730704Z","caller":"traceutil/trace.go:171","msg":"trace[1116024264] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:631; }","duration":"124.381121ms","start":"2024-07-04T00:11:32.606309Z","end":"2024-07-04T00:11:32.73069Z","steps":["trace[1116024264] 'agreement among raft nodes before linearized reading'  (duration: 124.140039ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-04T00:11:32.730957Z","caller":"traceutil/trace.go:171","msg":"trace[975488001] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"176.214844ms","start":"2024-07-04T00:11:32.55473Z","end":"2024-07-04T00:11:32.730945Z","steps":["trace[975488001] 'process raft request'  (duration: 175.487421ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-04T00:20:30.320138Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":834}
	{"level":"info","ts":"2024-07-04T00:20:30.333661Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":834,"took":"12.570268ms","hash":1268322157,"current-db-size-bytes":2736128,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2736128,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-07-04T00:20:30.333757Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1268322157,"revision":834,"compact-revision":-1}
	{"level":"info","ts":"2024-07-04T00:25:30.327654Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1076}
	{"level":"info","ts":"2024-07-04T00:25:30.331815Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1076,"took":"3.865439ms","hash":260465837,"current-db-size-bytes":2736128,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1699840,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-07-04T00:25:30.331864Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":260465837,"revision":1076,"compact-revision":834}
	{"level":"warn","ts":"2024-07-04T00:30:23.2652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"245.308296ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-04T00:30:23.265467Z","caller":"traceutil/trace.go:171","msg":"trace[1325372819] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1555; }","duration":"245.61882ms","start":"2024-07-04T00:30:23.019801Z","end":"2024-07-04T00:30:23.26542Z","steps":["trace[1325372819] 'range keys from in-memory index tree'  (duration: 245.293277ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-04T00:30:23.265905Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.368967ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.213\" ","response":"range_response_count:1 size:134"}
	{"level":"info","ts":"2024-07-04T00:30:23.266438Z","caller":"traceutil/trace.go:171","msg":"trace[1376864584] range","detail":"{range_begin:/registry/masterleases/192.168.39.213; range_end:; response_count:1; response_revision:1555; }","duration":"222.836277ms","start":"2024-07-04T00:30:23.043479Z","end":"2024-07-04T00:30:23.266315Z","steps":["trace[1376864584] 'range keys from in-memory index tree'  (duration: 222.235257ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-04T00:30:23.669874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.216198ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9250271017933794012 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.213\" mod_revision:1548 > success:<request_put:<key:\"/registry/masterleases/192.168.39.213\" value_size:67 lease:26898981079018202 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.213\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-04T00:30:23.670298Z","caller":"traceutil/trace.go:171","msg":"trace[944676766] transaction","detail":"{read_only:false; response_revision:1556; number_of_response:1; }","duration":"274.134829ms","start":"2024-07-04T00:30:23.396144Z","end":"2024-07-04T00:30:23.670279Z","steps":["trace[944676766] 'process raft request'  (duration: 130.413812ms)","trace[944676766] 'compare'  (duration: 143.122229ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-04T00:30:30.340921Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1320}
	{"level":"info","ts":"2024-07-04T00:30:30.345883Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1320,"took":"4.648009ms","hash":1902996217,"current-db-size-bytes":2736128,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1646592,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-04T00:30:30.345956Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1902996217,"revision":1320,"compact-revision":1076}
	{"level":"info","ts":"2024-07-04T00:31:06.654691Z","caller":"traceutil/trace.go:171","msg":"trace[230281728] transaction","detail":"{read_only:false; response_revision:1592; number_of_response:1; }","duration":"138.929786ms","start":"2024-07-04T00:31:06.515742Z","end":"2024-07-04T00:31:06.654672Z","steps":["trace[230281728] 'process raft request'  (duration: 138.73725ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-04T00:31:07.360113Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.971155ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-04T00:31:07.360191Z","caller":"traceutil/trace.go:171","msg":"trace[184235870] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1594; }","duration":"104.09769ms","start":"2024-07-04T00:31:07.256082Z","end":"2024-07-04T00:31:07.36018Z","steps":["trace[184235870] 'range keys from in-memory index tree'  (duration: 103.852786ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-04T00:31:28.889983Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.593515ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9250271017933794331 > lease_revoke:<id:005f907b1408ebd2>","response":"size:28"}
	{"level":"info","ts":"2024-07-04T00:31:28.890085Z","caller":"traceutil/trace.go:171","msg":"trace[858379213] linearizableReadLoop","detail":"{readStateIndex:1916; appliedIndex:1915; }","duration":"285.958587ms","start":"2024-07-04T00:31:28.604112Z","end":"2024-07-04T00:31:28.890071Z","steps":["trace[858379213] 'read index received'  (duration: 33.218025ms)","trace[858379213] 'applied index is now lower than readState.Index'  (duration: 252.739293ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-04T00:31:28.890138Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"286.016124ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-04T00:31:28.890154Z","caller":"traceutil/trace.go:171","msg":"trace[269909770] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1611; }","duration":"286.065204ms","start":"2024-07-04T00:31:28.604082Z","end":"2024-07-04T00:31:28.890147Z","steps":["trace[269909770] 'agreement among raft nodes before linearized reading'  (duration: 286.021704ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-04T00:31:29.582028Z","caller":"traceutil/trace.go:171","msg":"trace[1620051028] transaction","detail":"{read_only:false; response_revision:1612; number_of_response:1; }","duration":"268.839617ms","start":"2024-07-04T00:31:29.313166Z","end":"2024-07-04T00:31:29.582006Z","steps":["trace[1620051028] 'process raft request'  (duration: 268.695066ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:31:34 up 21 min,  0 users,  load average: 0.27, 0.24, 0.16
	Linux embed-certs-687975 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] <==
	W0704 00:28:32.738938       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:28:32.738985       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0704 00:28:32.738994       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0704 00:30:23.671957       1 trace.go:236] Trace[433209719]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.213,type:*v1.Endpoints,resource:apiServerIPInfo (04-Jul-2024 00:30:23.042) (total time: 628ms):
	Trace[433209719]: ---"initial value restored" 224ms (00:30:23.267)
	Trace[433209719]: ---"Transaction prepared" 128ms (00:30:23.395)
	Trace[433209719]: ---"Txn call completed" 276ms (00:30:23.671)
	Trace[433209719]: [628.911821ms] [628.911821ms] END
	W0704 00:30:31.741698       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:30:31.741928       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0704 00:30:32.743031       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:30:32.743169       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0704 00:30:32.743209       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:30:32.743293       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:30:32.743380       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0704 00:30:32.744561       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:31:32.743711       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:31:32.743760       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0704 00:31:32.743772       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:31:32.744916       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:31:32.744995       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0704 00:31:32.745004       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] <==
	I0704 00:25:45.100996       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:26:14.484259       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:26:15.109096       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:26:44.489671       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:26:44.812674       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="406.544µs"
	I0704 00:26:45.117589       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0704 00:26:57.814946       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="86.434µs"
	E0704 00:27:14.496112       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:27:15.125562       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:27:44.501710       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:27:45.133320       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:28:14.507758       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:28:15.142730       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:28:44.513849       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:28:45.151480       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:29:14.519523       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:29:15.159228       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:29:44.525681       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:29:45.167460       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:30:14.532839       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:30:15.175534       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:30:44.537924       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:30:45.186039       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:31:14.542399       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:31:15.194493       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] <==
	I0704 00:10:32.381899       1 server_linux.go:69] "Using iptables proxy"
	I0704 00:10:32.394715       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.213"]
	I0704 00:10:32.435832       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0704 00:10:32.435944       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0704 00:10:32.435975       1 server_linux.go:165] "Using iptables Proxier"
	I0704 00:10:32.441122       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0704 00:10:32.441470       1 server.go:872] "Version info" version="v1.30.2"
	I0704 00:10:32.441915       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0704 00:10:32.443581       1 config.go:101] "Starting endpoint slice config controller"
	I0704 00:10:32.444740       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0704 00:10:32.444721       1 config.go:192] "Starting service config controller"
	I0704 00:10:32.445009       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0704 00:10:32.446250       1 config.go:319] "Starting node config controller"
	I0704 00:10:32.447250       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0704 00:10:32.544938       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0704 00:10:32.545194       1 shared_informer.go:320] Caches are synced for service config
	I0704 00:10:32.547519       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] <==
	I0704 00:10:29.257935       1 serving.go:380] Generated self-signed cert in-memory
	W0704 00:10:31.709203       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0704 00:10:31.709351       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0704 00:10:31.709452       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0704 00:10:31.709477       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0704 00:10:31.781007       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0704 00:10:31.781220       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0704 00:10:31.791818       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0704 00:10:31.791873       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0704 00:10:31.792495       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0704 00:10:31.792594       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0704 00:10:31.893865       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
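The requestheader_controller warning above is transient in this run: the scheduler comes up before it can read the kube-system/extension-apiserver-authentication configmap, logs the suggested rolebinding fix, and then proceeds once the client-ca informer syncs. If the warning did persist, the log's own suggestion could be applied via client-go as well as via kubectl; a hedged sketch, where the binding name and the user subject are illustrative choices, not taken from this report:

package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Grants the scheduler user read access to the extension-apiserver-authentication
	// configmap, mirroring the kubectl hint printed in the warning above.
	rb := &rbacv1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "scheduler-auth-reader", Namespace: "kube-system"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "Role",
			Name:     "extension-apiserver-authentication-reader",
		},
		Subjects: []rbacv1.Subject{{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "User",
			Name:     "system:kube-scheduler",
		}},
	}
	if _, err := cs.RbacV1().RoleBindings("kube-system").Create(context.TODO(), rb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}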
	
	
	==> kubelet <==
	Jul 04 00:29:27 embed-certs-687975 kubelet[943]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 04 00:29:27 embed-certs-687975 kubelet[943]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 04 00:29:41 embed-certs-687975 kubelet[943]: E0704 00:29:41.797203     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:29:52 embed-certs-687975 kubelet[943]: E0704 00:29:52.795873     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:30:04 embed-certs-687975 kubelet[943]: E0704 00:30:04.796386     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:30:16 embed-certs-687975 kubelet[943]: E0704 00:30:16.797302     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:30:27 embed-certs-687975 kubelet[943]: E0704 00:30:27.820887     943 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 04 00:30:27 embed-certs-687975 kubelet[943]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 04 00:30:27 embed-certs-687975 kubelet[943]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 04 00:30:27 embed-certs-687975 kubelet[943]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 04 00:30:27 embed-certs-687975 kubelet[943]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 04 00:30:29 embed-certs-687975 kubelet[943]: E0704 00:30:29.796127     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:30:44 embed-certs-687975 kubelet[943]: E0704 00:30:44.796176     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:30:55 embed-certs-687975 kubelet[943]: E0704 00:30:55.796449     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:31:09 embed-certs-687975 kubelet[943]: E0704 00:31:09.796966     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:31:21 embed-certs-687975 kubelet[943]: E0704 00:31:21.798019     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
	Jul 04 00:31:27 embed-certs-687975 kubelet[943]: E0704 00:31:27.819076     943 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 04 00:31:27 embed-certs-687975 kubelet[943]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 04 00:31:27 embed-certs-687975 kubelet[943]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 04 00:31:27 embed-certs-687975 kubelet[943]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 04 00:31:27 embed-certs-687975 kubelet[943]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 04 00:31:33 embed-certs-687975 kubelet[943]: E0704 00:31:33.817174     943 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 04 00:31:33 embed-certs-687975 kubelet[943]: E0704 00:31:33.817244     943 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 04 00:31:33 embed-certs-687975 kubelet[943]: E0704 00:31:33.817456     943 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nhj7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-jpmsg_kube-system(e2561edc-d580-461c-acae-218e6b7a2f67): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 04 00:31:33 embed-certs-687975 kubelet[943]: E0704 00:31:33.817505     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-jpmsg" podUID="e2561edc-d580-461c-acae-218e6b7a2f67"
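The metrics-server back-off loop above is the expected shape of this test profile rather than a node fault: the deployment's image points at an unresolvable registry (fake.domain), so every pull attempt fails at DNS and the kubelet keeps re-queuing the container. Those reasons surface in the pod's containerStatuses, which is what the post-mortem helpers read back. A minimal client-go sketch of that read, assuming the default kubeconfig and the addon's usual k8s-app=metrics-server label:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "k8s-app=metrics-server",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, cstat := range p.Status.ContainerStatuses {
			if w := cstat.State.Waiting; w != nil {
				// Prints e.g. "metrics-server: ImagePullBackOff (Back-off pulling image ...)"
				fmt.Printf("%s: %s (%s)\n", cstat.Name, w.Reason, w.Message)
			}
		}
	}
}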
	
	
	==> storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] <==
	I0704 00:10:32.372675       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0704 00:11:02.381272       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] <==
	I0704 00:11:03.122539       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0704 00:11:03.136448       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0704 00:11:03.136845       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0704 00:11:20.543458       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0704 00:11:20.543693       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-687975_2deddaea-fbee-4f84-ab29-345da0a3acd0!
	I0704 00:11:20.543752       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2336027a-6017-42cd-8bce-095b0142a30c", APIVersion:"v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-687975_2deddaea-fbee-4f84-ab29-345da0a3acd0 became leader
	I0704 00:11:20.644523       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-687975_2deddaea-fbee-4f84-ab29-345da0a3acd0!
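Of the two storage-provisioner instances above, the first exits because the apiserver service IP 10.96.0.1:443 was unreachable within its 30s budget; the restarted instance then walks client-go's leader-election flow (attempt the lease, acquire it, start the provisioner controller). A compact sketch of that flow, assuming the default kubeconfig; the provisioner shown above uses an older Endpoints-based lock, while the Lease lock below is the current idiom:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// This is where the provisioner controller would start.
				fmt.Println("became leader:", id)
			},
			OnStoppedLeading: func() {
				fmt.Println("lost leadership:", id)
			},
		},
	})
}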
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-687975 -n embed-certs-687975
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-687975 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-jpmsg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-687975 describe pod metrics-server-569cc877fc-jpmsg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-687975 describe pod metrics-server-569cc877fc-jpmsg: exit status 1 (64.855792ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-jpmsg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-687975 describe pod metrics-server-569cc877fc-jpmsg: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (451.44s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (479.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-995404 -n default-k8s-diff-port-995404
E0704 00:32:39.384205   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/client.crt: no such file or directory
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-04 00:32:39.450279297 +0000 UTC m=+6352.383517057
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-995404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-995404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.811µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-995404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
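The follow-up describe call failing in 1.811µs is consistent with an already-expired shared context: the 9m wait for the dashboard pods used up the deadline, so any command issued through that context afterwards returns immediately without really running. A stdlib-only sketch of the effect (not the harness's actual code; the kubectl invocation below is just a stand-in):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Stand-in for the test's wait budget, shortened so it expires quickly.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
	defer cancel()

	// The long "waiting for pods" phase consumes the whole budget.
	time.Sleep(20 * time.Millisecond)

	// Equivalent of the later "kubectl describe ..." step: it fails almost
	// immediately because the deadline expired before the command could run.
	out, err := exec.CommandContext(ctx, "kubectl", "version", "--client").CombinedOutput()
	fmt.Printf("err=%v out=%q\n", err, out)
}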
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-995404 -n default-k8s-diff-port-995404
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-995404 logs -n 25
E0704 00:32:40.664767   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-995404 logs -n 25: (1.508323259s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| start   | -p calico-676605 --memory=3072                       | calico-676605 | jenkins | v1.33.1 | 04 Jul 24 00:31 UTC |                     |
	|         | --alsologtostderr --wait=true                        |               |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |               |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |               |         |         |                     |                     |
	|         | --container-runtime=crio                             |               |         |         |                     |                     |
	| ssh     | -p auto-676605 pgrep -a                              | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC | 04 Jul 24 00:32 UTC |
	|         | kubelet                                              |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo cat                              | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC | 04 Jul 24 00:32 UTC |
	|         | /etc/nsswitch.conf                                   |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo cat                              | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC | 04 Jul 24 00:32 UTC |
	|         | /etc/hosts                                           |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo cat                              | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC | 04 Jul 24 00:32 UTC |
	|         | /etc/resolv.conf                                     |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo crictl                           | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC | 04 Jul 24 00:32 UTC |
	|         | pods                                                 |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo crictl ps                        | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC | 04 Jul 24 00:32 UTC |
	|         | --all                                                |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo find                             | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC | 04 Jul 24 00:32 UTC |
	|         | /etc/cni -type f -exec sh -c                         |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo ip a s                           | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC | 04 Jul 24 00:32 UTC |
	| ssh     | -p auto-676605 sudo ip r s                           | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC | 04 Jul 24 00:32 UTC |
	| ssh     | -p auto-676605 sudo                                  | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC | 04 Jul 24 00:32 UTC |
	|         | iptables-save                                        |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo iptables                         | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC | 04 Jul 24 00:32 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo systemctl                        | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC | 04 Jul 24 00:32 UTC |
	|         | status kubelet --all --full                          |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo systemctl                        | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC | 04 Jul 24 00:32 UTC |
	|         | cat kubelet --no-pager                               |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo journalctl                       | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC | 04 Jul 24 00:32 UTC |
	|         | -xeu kubelet --all --full                            |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo cat                              | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC | 04 Jul 24 00:32 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo cat                              | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC | 04 Jul 24 00:32 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo systemctl                        | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC |                     |
	|         | status docker --all --full                           |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo systemctl                        | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC | 04 Jul 24 00:32 UTC |
	|         | cat docker --no-pager                                |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo cat                              | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC | 04 Jul 24 00:32 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo docker                           | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo systemctl                        | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC |                     |
	|         | status cri-docker --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo systemctl                        | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC | 04 Jul 24 00:32 UTC |
	|         | cat cri-docker --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo cat                              | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p auto-676605 sudo cat                              | auto-676605   | jenkins | v1.33.1 | 04 Jul 24 00:32 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/04 00:31:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0704 00:31:42.633173   71333 out.go:291] Setting OutFile to fd 1 ...
	I0704 00:31:42.633405   71333 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:31:42.633413   71333 out.go:304] Setting ErrFile to fd 2...
	I0704 00:31:42.633417   71333 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:31:42.633626   71333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0704 00:31:42.634202   71333 out.go:298] Setting JSON to false
	I0704 00:31:42.635089   71333 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8043,"bootTime":1720045060,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0704 00:31:42.635151   71333 start.go:139] virtualization: kvm guest
	I0704 00:31:42.637506   71333 out.go:177] * [calico-676605] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0704 00:31:42.639198   71333 out.go:177]   - MINIKUBE_LOCATION=18998
	I0704 00:31:42.639191   71333 notify.go:220] Checking for updates...
	I0704 00:31:42.641985   71333 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0704 00:31:42.643347   71333 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:31:42.644731   71333 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0704 00:31:42.646227   71333 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0704 00:31:42.647605   71333 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0704 00:31:37.936822   69888 pod_ready.go:102] pod "coredns-7db6d8ff4d-dp4r9" in "kube-system" namespace has status "Ready":"False"
	I0704 00:31:39.937787   69888 pod_ready.go:102] pod "coredns-7db6d8ff4d-dp4r9" in "kube-system" namespace has status "Ready":"False"
	I0704 00:31:42.437683   69888 pod_ready.go:102] pod "coredns-7db6d8ff4d-dp4r9" in "kube-system" namespace has status "Ready":"False"
	I0704 00:31:42.649874   71333 config.go:182] Loaded profile config "auto-676605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:31:42.650027   71333 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:31:42.650155   71333 config.go:182] Loaded profile config "kindnet-676605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:31:42.650289   71333 driver.go:392] Setting default libvirt URI to qemu:///system
	I0704 00:31:42.691551   71333 out.go:177] * Using the kvm2 driver based on user configuration
	I0704 00:31:42.692958   71333 start.go:297] selected driver: kvm2
	I0704 00:31:42.692975   71333 start.go:901] validating driver "kvm2" against <nil>
	I0704 00:31:42.692987   71333 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0704 00:31:42.693720   71333 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:31:42.693830   71333 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0704 00:31:42.710370   71333 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0704 00:31:42.710423   71333 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0704 00:31:42.710629   71333 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:31:42.710660   71333 cni.go:84] Creating CNI manager for "calico"
	I0704 00:31:42.710666   71333 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0704 00:31:42.710708   71333 start.go:340] cluster config:
	{Name:calico-676605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-676605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
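The cluster config dumped above is also what gets persisted as the profile's config.json (the "Saving config to .../profiles/calico-676605/config.json" line further down), so it can be read back outside of minikube. A small sketch under the assumption of a default MINIKUBE_HOME under the user's home directory; the struct keeps only a few of the real ClusterConfig fields:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// Partial view of minikube's ClusterConfig; unknown JSON fields are ignored.
type clusterConfig struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	KubernetesConfig struct {
		KubernetesVersion string
		ContainerRuntime  string
		NetworkPlugin     string
		CNI               string
	}
}

func main() {
	home, _ := os.UserHomeDir()
	path := filepath.Join(home, ".minikube", "profiles", "calico-676605", "config.json")

	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	var cc clusterConfig
	if err := json.Unmarshal(data, &cc); err != nil {
		panic(err)
	}
	fmt.Printf("%s: driver=%s runtime=%s k8s=%s cni=%s\n",
		cc.Name, cc.Driver, cc.KubernetesConfig.ContainerRuntime,
		cc.KubernetesConfig.KubernetesVersion, cc.KubernetesConfig.CNI)
}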
	I0704 00:31:42.710790   71333 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:31:42.713303   71333 out.go:177] * Starting "calico-676605" primary control-plane node in "calico-676605" cluster
	I0704 00:31:42.194487   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:31:42.195073   70815 main.go:141] libmachine: (kindnet-676605) DBG | unable to find current IP address of domain kindnet-676605 in network mk-kindnet-676605
	I0704 00:31:42.195103   70815 main.go:141] libmachine: (kindnet-676605) DBG | I0704 00:31:42.195055   70839 retry.go:31] will retry after 913.367707ms: waiting for machine to come up
	I0704 00:31:43.110140   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:31:43.110712   70815 main.go:141] libmachine: (kindnet-676605) DBG | unable to find current IP address of domain kindnet-676605 in network mk-kindnet-676605
	I0704 00:31:43.110742   70815 main.go:141] libmachine: (kindnet-676605) DBG | I0704 00:31:43.110658   70839 retry.go:31] will retry after 1.795609073s: waiting for machine to come up
	I0704 00:31:44.907632   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:31:44.908169   70815 main.go:141] libmachine: (kindnet-676605) DBG | unable to find current IP address of domain kindnet-676605 in network mk-kindnet-676605
	I0704 00:31:44.908199   70815 main.go:141] libmachine: (kindnet-676605) DBG | I0704 00:31:44.908116   70839 retry.go:31] will retry after 1.991840833s: waiting for machine to come up
	I0704 00:31:42.714673   71333 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:31:42.714739   71333 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0704 00:31:42.714746   71333 cache.go:56] Caching tarball of preloaded images
	I0704 00:31:42.714856   71333 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0704 00:31:42.714868   71333 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0704 00:31:42.714956   71333 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/config.json ...
	I0704 00:31:42.714973   71333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/config.json: {Name:mk3712337a49a75b42a6b67f3448cbf0bd3b611a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:31:42.715128   71333 start.go:360] acquireMachinesLock for calico-676605: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0704 00:31:43.436915   69888 pod_ready.go:97] pod "coredns-7db6d8ff4d-dp4r9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-04 00:31:43 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-04 00:31:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-04 00:31:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-04 00:31:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-04 00:31:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.17 HostIPs:[{IP:192.168.61.
17}] PodIP: PodIPs:[] StartTime:2024-07-04 00:31:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-04 00:31:32 +0000 UTC,FinishedAt:2024-07-04 00:31:43 +0000 UTC,ContainerID:cri-o://340f0bc43d7209fbcda34c544f3a886f86ce2b8ee01045aa98eacf4e0f4b9ce7,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://340f0bc43d7209fbcda34c544f3a886f86ce2b8ee01045aa98eacf4e0f4b9ce7 Started:0xc0023bbb70 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0704 00:31:43.436965   69888 pod_ready.go:81] duration metric: took 12.008085734s for pod "coredns-7db6d8ff4d-dp4r9" in "kube-system" namespace to be "Ready" ...
	E0704 00:31:43.436981   69888 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-dp4r9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-04 00:31:43 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-04 00:31:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-04 00:31:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-04 00:31:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-04 00:31:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.6
1.17 HostIPs:[{IP:192.168.61.17}] PodIP: PodIPs:[] StartTime:2024-07-04 00:31:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-04 00:31:32 +0000 UTC,FinishedAt:2024-07-04 00:31:43 +0000 UTC,ContainerID:cri-o://340f0bc43d7209fbcda34c544f3a886f86ce2b8ee01045aa98eacf4e0f4b9ce7,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://340f0bc43d7209fbcda34c544f3a886f86ce2b8ee01045aa98eacf4e0f4b9ce7 Started:0xc0023bbb70 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0704 00:31:43.436992   69888 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-jj68c" in "kube-system" namespace to be "Ready" ...
	I0704 00:31:45.444592   69888 pod_ready.go:102] pod "coredns-7db6d8ff4d-jj68c" in "kube-system" namespace has status "Ready":"False"
	I0704 00:31:46.901168   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:31:46.901748   70815 main.go:141] libmachine: (kindnet-676605) DBG | unable to find current IP address of domain kindnet-676605 in network mk-kindnet-676605
	I0704 00:31:46.901784   70815 main.go:141] libmachine: (kindnet-676605) DBG | I0704 00:31:46.901705   70839 retry.go:31] will retry after 2.125893665s: waiting for machine to come up
	I0704 00:31:49.028844   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:31:49.029211   70815 main.go:141] libmachine: (kindnet-676605) DBG | unable to find current IP address of domain kindnet-676605 in network mk-kindnet-676605
	I0704 00:31:49.029241   70815 main.go:141] libmachine: (kindnet-676605) DBG | I0704 00:31:49.029159   70839 retry.go:31] will retry after 2.252318394s: waiting for machine to come up
	I0704 00:31:47.944538   69888 pod_ready.go:102] pod "coredns-7db6d8ff4d-jj68c" in "kube-system" namespace has status "Ready":"False"
	I0704 00:31:50.445101   69888 pod_ready.go:102] pod "coredns-7db6d8ff4d-jj68c" in "kube-system" namespace has status "Ready":"False"
	I0704 00:31:51.282881   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:31:51.283402   70815 main.go:141] libmachine: (kindnet-676605) DBG | unable to find current IP address of domain kindnet-676605 in network mk-kindnet-676605
	I0704 00:31:51.283442   70815 main.go:141] libmachine: (kindnet-676605) DBG | I0704 00:31:51.283359   70839 retry.go:31] will retry after 3.454698053s: waiting for machine to come up
	I0704 00:31:54.741393   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:31:54.741866   70815 main.go:141] libmachine: (kindnet-676605) DBG | unable to find current IP address of domain kindnet-676605 in network mk-kindnet-676605
	I0704 00:31:54.741892   70815 main.go:141] libmachine: (kindnet-676605) DBG | I0704 00:31:54.741819   70839 retry.go:31] will retry after 5.436837167s: waiting for machine to come up
	I0704 00:31:52.943614   69888 pod_ready.go:102] pod "coredns-7db6d8ff4d-jj68c" in "kube-system" namespace has status "Ready":"False"
	I0704 00:31:54.943810   69888 pod_ready.go:102] pod "coredns-7db6d8ff4d-jj68c" in "kube-system" namespace has status "Ready":"False"
	I0704 00:31:56.943979   69888 pod_ready.go:102] pod "coredns-7db6d8ff4d-jj68c" in "kube-system" namespace has status "Ready":"False"
	I0704 00:32:00.181620   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:00.182196   70815 main.go:141] libmachine: (kindnet-676605) Found IP for machine: 192.168.39.227
	I0704 00:32:00.182219   70815 main.go:141] libmachine: (kindnet-676605) Reserving static IP address...
	I0704 00:32:00.182232   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has current primary IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:00.182637   70815 main.go:141] libmachine: (kindnet-676605) DBG | unable to find host DHCP lease matching {name: "kindnet-676605", mac: "52:54:00:b0:c7:f6", ip: "192.168.39.227"} in network mk-kindnet-676605
	I0704 00:32:00.266541   70815 main.go:141] libmachine: (kindnet-676605) DBG | Getting to WaitForSSH function...
	I0704 00:32:00.266595   70815 main.go:141] libmachine: (kindnet-676605) Reserved static IP address: 192.168.39.227
	I0704 00:32:00.266610   70815 main.go:141] libmachine: (kindnet-676605) Waiting for SSH to be available...
	I0704 00:32:00.269282   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:00.269665   70815 main.go:141] libmachine: (kindnet-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c7:f6", ip: ""} in network mk-kindnet-676605: {Iface:virbr1 ExpiryTime:2024-07-04 01:31:51 +0000 UTC Type:0 Mac:52:54:00:b0:c7:f6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b0:c7:f6}
	I0704 00:32:00.269692   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:00.269855   70815 main.go:141] libmachine: (kindnet-676605) DBG | Using SSH client type: external
	I0704 00:32:00.269898   70815 main.go:141] libmachine: (kindnet-676605) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/kindnet-676605/id_rsa (-rw-------)
	I0704 00:32:00.269931   70815 main.go:141] libmachine: (kindnet-676605) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/kindnet-676605/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:32:00.269947   70815 main.go:141] libmachine: (kindnet-676605) DBG | About to run SSH command:
	I0704 00:32:00.269961   70815 main.go:141] libmachine: (kindnet-676605) DBG | exit 0
	I0704 00:32:00.396319   70815 main.go:141] libmachine: (kindnet-676605) DBG | SSH cmd err, output: <nil>: 
	I0704 00:32:00.396619   70815 main.go:141] libmachine: (kindnet-676605) KVM machine creation complete!
	I0704 00:32:00.396972   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetConfigRaw
	I0704 00:32:00.397553   70815 main.go:141] libmachine: (kindnet-676605) Calling .DriverName
	I0704 00:32:00.397755   70815 main.go:141] libmachine: (kindnet-676605) Calling .DriverName
	I0704 00:32:00.397969   70815 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0704 00:32:00.397984   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetState
	I0704 00:32:00.399224   70815 main.go:141] libmachine: Detecting operating system of created instance...
	I0704 00:32:00.399238   70815 main.go:141] libmachine: Waiting for SSH to be available...
	I0704 00:32:00.399246   70815 main.go:141] libmachine: Getting to WaitForSSH function...
	I0704 00:32:00.399251   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHHostname
	I0704 00:32:00.401541   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:00.401912   70815 main.go:141] libmachine: (kindnet-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c7:f6", ip: ""} in network mk-kindnet-676605: {Iface:virbr1 ExpiryTime:2024-07-04 01:31:51 +0000 UTC Type:0 Mac:52:54:00:b0:c7:f6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:kindnet-676605 Clientid:01:52:54:00:b0:c7:f6}
	I0704 00:32:00.401940   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:00.402104   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHPort
	I0704 00:32:00.402313   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHKeyPath
	I0704 00:32:00.402469   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHKeyPath
	I0704 00:32:00.402599   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHUsername
	I0704 00:32:00.402755   70815 main.go:141] libmachine: Using SSH client type: native
	I0704 00:32:00.402950   70815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0704 00:32:00.402962   70815 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0704 00:32:00.511437   70815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:32:00.511459   70815 main.go:141] libmachine: Detecting the provisioner...
	I0704 00:32:00.511469   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHHostname
	I0704 00:32:00.514389   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:00.514807   70815 main.go:141] libmachine: (kindnet-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c7:f6", ip: ""} in network mk-kindnet-676605: {Iface:virbr1 ExpiryTime:2024-07-04 01:31:51 +0000 UTC Type:0 Mac:52:54:00:b0:c7:f6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:kindnet-676605 Clientid:01:52:54:00:b0:c7:f6}
	I0704 00:32:00.514841   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:00.514949   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHPort
	I0704 00:32:00.515144   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHKeyPath
	I0704 00:32:00.515280   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHKeyPath
	I0704 00:32:00.515441   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHUsername
	I0704 00:32:00.515588   70815 main.go:141] libmachine: Using SSH client type: native
	I0704 00:32:00.515762   70815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0704 00:32:00.515773   70815 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0704 00:32:00.624699   70815 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0704 00:32:00.624787   70815 main.go:141] libmachine: found compatible host: buildroot
	I0704 00:32:00.624797   70815 main.go:141] libmachine: Provisioning with buildroot...
	I0704 00:32:00.624804   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetMachineName
	I0704 00:32:00.625127   70815 buildroot.go:166] provisioning hostname "kindnet-676605"
	I0704 00:32:00.625155   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetMachineName
	I0704 00:32:00.625416   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHHostname
	I0704 00:32:00.628097   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:00.628499   70815 main.go:141] libmachine: (kindnet-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c7:f6", ip: ""} in network mk-kindnet-676605: {Iface:virbr1 ExpiryTime:2024-07-04 01:31:51 +0000 UTC Type:0 Mac:52:54:00:b0:c7:f6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:kindnet-676605 Clientid:01:52:54:00:b0:c7:f6}
	I0704 00:32:00.628527   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:00.628650   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHPort
	I0704 00:32:00.628857   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHKeyPath
	I0704 00:32:00.629034   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHKeyPath
	I0704 00:32:00.629207   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHUsername
	I0704 00:32:00.629342   70815 main.go:141] libmachine: Using SSH client type: native
	I0704 00:32:00.629546   70815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0704 00:32:00.629560   70815 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-676605 && echo "kindnet-676605" | sudo tee /etc/hostname
	I0704 00:32:00.757244   70815 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-676605
	
	I0704 00:32:00.757280   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHHostname
	I0704 00:32:00.760229   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:00.760636   70815 main.go:141] libmachine: (kindnet-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c7:f6", ip: ""} in network mk-kindnet-676605: {Iface:virbr1 ExpiryTime:2024-07-04 01:31:51 +0000 UTC Type:0 Mac:52:54:00:b0:c7:f6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:kindnet-676605 Clientid:01:52:54:00:b0:c7:f6}
	I0704 00:32:00.760679   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:00.760830   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHPort
	I0704 00:32:00.761002   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHKeyPath
	I0704 00:32:00.761117   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHKeyPath
	I0704 00:32:00.761306   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHUsername
	I0704 00:32:00.761424   70815 main.go:141] libmachine: Using SSH client type: native
	I0704 00:32:00.761617   70815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0704 00:32:00.761631   70815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-676605' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-676605/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-676605' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:32:00.877912   70815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:32:00.877946   70815 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:32:00.878036   70815 buildroot.go:174] setting up certificates
	I0704 00:32:00.878047   70815 provision.go:84] configureAuth start
	I0704 00:32:00.878061   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetMachineName
	I0704 00:32:00.878437   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetIP
	I0704 00:32:00.881258   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:00.881647   70815 main.go:141] libmachine: (kindnet-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c7:f6", ip: ""} in network mk-kindnet-676605: {Iface:virbr1 ExpiryTime:2024-07-04 01:31:51 +0000 UTC Type:0 Mac:52:54:00:b0:c7:f6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:kindnet-676605 Clientid:01:52:54:00:b0:c7:f6}
	I0704 00:32:00.881686   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:00.881902   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHHostname
	I0704 00:32:00.884447   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:00.884826   70815 main.go:141] libmachine: (kindnet-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c7:f6", ip: ""} in network mk-kindnet-676605: {Iface:virbr1 ExpiryTime:2024-07-04 01:31:51 +0000 UTC Type:0 Mac:52:54:00:b0:c7:f6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:kindnet-676605 Clientid:01:52:54:00:b0:c7:f6}
	I0704 00:32:00.884866   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:00.885072   70815 provision.go:143] copyHostCerts
	I0704 00:32:00.885132   70815 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:32:00.885145   70815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:32:00.885234   70815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:32:00.885347   70815 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:32:00.885357   70815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:32:00.885393   70815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:32:00.885468   70815 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:32:00.885477   70815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:32:00.885511   70815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:32:00.885576   70815 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.kindnet-676605 san=[127.0.0.1 192.168.39.227 kindnet-676605 localhost minikube]
	I0704 00:32:01.042804   70815 provision.go:177] copyRemoteCerts
	I0704 00:32:01.042865   70815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:32:01.042885   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHHostname
	I0704 00:32:01.045698   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:01.046003   70815 main.go:141] libmachine: (kindnet-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c7:f6", ip: ""} in network mk-kindnet-676605: {Iface:virbr1 ExpiryTime:2024-07-04 01:31:51 +0000 UTC Type:0 Mac:52:54:00:b0:c7:f6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:kindnet-676605 Clientid:01:52:54:00:b0:c7:f6}
	I0704 00:32:01.046036   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:01.046226   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHPort
	I0704 00:32:01.046453   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHKeyPath
	I0704 00:32:01.046666   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHUsername
	I0704 00:32:01.046839   70815 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/kindnet-676605/id_rsa Username:docker}
	I0704 00:32:01.785690   71333 start.go:364] duration metric: took 19.070537988s to acquireMachinesLock for "calico-676605"
	I0704 00:32:01.785771   71333 start.go:93] Provisioning new machine with config: &{Name:calico-676605 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-676605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:32:01.786081   71333 start.go:125] createHost starting for "" (driver="kvm2")
	I0704 00:32:01.788298   71333 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0704 00:32:01.788490   71333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:32:01.788550   71333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:32:01.805683   71333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46047
	I0704 00:32:01.808288   71333 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:32:01.809101   71333 main.go:141] libmachine: Using API Version  1
	I0704 00:32:01.809126   71333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:32:01.809580   71333 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:32:01.809773   71333 main.go:141] libmachine: (calico-676605) Calling .GetMachineName
	I0704 00:32:01.809941   71333 main.go:141] libmachine: (calico-676605) Calling .DriverName
	I0704 00:32:01.810099   71333 start.go:159] libmachine.API.Create for "calico-676605" (driver="kvm2")
	I0704 00:32:01.810140   71333 client.go:168] LocalClient.Create starting
	I0704 00:32:01.810170   71333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem
	I0704 00:32:01.810215   71333 main.go:141] libmachine: Decoding PEM data...
	I0704 00:32:01.810241   71333 main.go:141] libmachine: Parsing certificate...
	I0704 00:32:01.810302   71333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem
	I0704 00:32:01.810327   71333 main.go:141] libmachine: Decoding PEM data...
	I0704 00:32:01.810344   71333 main.go:141] libmachine: Parsing certificate...
	I0704 00:32:01.810372   71333 main.go:141] libmachine: Running pre-create checks...
	I0704 00:32:01.810384   71333 main.go:141] libmachine: (calico-676605) Calling .PreCreateCheck
	I0704 00:32:01.810805   71333 main.go:141] libmachine: (calico-676605) Calling .GetConfigRaw
	I0704 00:32:01.811240   71333 main.go:141] libmachine: Creating machine...
	I0704 00:32:01.811255   71333 main.go:141] libmachine: (calico-676605) Calling .Create
	I0704 00:32:01.811402   71333 main.go:141] libmachine: (calico-676605) Creating KVM machine...
	I0704 00:32:01.812893   71333 main.go:141] libmachine: (calico-676605) DBG | found existing default KVM network
	I0704 00:32:01.814471   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:01.814291   71455 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:19:f6:ce} reservation:<nil>}
	I0704 00:32:01.815271   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:01.815198   71455 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:e0:97:b1} reservation:<nil>}
	I0704 00:32:01.816217   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:01.816117   71455 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:1c:bd:78} reservation:<nil>}
	I0704 00:32:01.817638   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:01.817540   71455 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00038f0b0}
	I0704 00:32:01.817720   71333 main.go:141] libmachine: (calico-676605) DBG | created network xml: 
	I0704 00:32:01.817742   71333 main.go:141] libmachine: (calico-676605) DBG | <network>
	I0704 00:32:01.817750   71333 main.go:141] libmachine: (calico-676605) DBG |   <name>mk-calico-676605</name>
	I0704 00:32:01.817765   71333 main.go:141] libmachine: (calico-676605) DBG |   <dns enable='no'/>
	I0704 00:32:01.817774   71333 main.go:141] libmachine: (calico-676605) DBG |   
	I0704 00:32:01.817790   71333 main.go:141] libmachine: (calico-676605) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0704 00:32:01.817802   71333 main.go:141] libmachine: (calico-676605) DBG |     <dhcp>
	I0704 00:32:01.817812   71333 main.go:141] libmachine: (calico-676605) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0704 00:32:01.817824   71333 main.go:141] libmachine: (calico-676605) DBG |     </dhcp>
	I0704 00:32:01.817833   71333 main.go:141] libmachine: (calico-676605) DBG |   </ip>
	I0704 00:32:01.817845   71333 main.go:141] libmachine: (calico-676605) DBG |   
	I0704 00:32:01.817853   71333 main.go:141] libmachine: (calico-676605) DBG | </network>
	I0704 00:32:01.817875   71333 main.go:141] libmachine: (calico-676605) DBG | 
	I0704 00:32:01.824302   71333 main.go:141] libmachine: (calico-676605) DBG | trying to create private KVM network mk-calico-676605 192.168.72.0/24...
	I0704 00:32:01.909945   71333 main.go:141] libmachine: (calico-676605) DBG | private KVM network mk-calico-676605 192.168.72.0/24 created
	I0704 00:32:01.909988   71333 main.go:141] libmachine: (calico-676605) Setting up store path in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/calico-676605 ...
	I0704 00:32:01.910002   71333 main.go:141] libmachine: (calico-676605) Building disk image from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0704 00:32:01.910014   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:01.909922   71455 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0704 00:32:01.910181   71333 main.go:141] libmachine: (calico-676605) Downloading /home/jenkins/minikube-integration/18998-9396/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso...
	I0704 00:32:02.170917   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:02.170761   71455 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/calico-676605/id_rsa...
	I0704 00:32:02.377348   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:02.377189   71455 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/calico-676605/calico-676605.rawdisk...
	I0704 00:32:02.377385   71333 main.go:141] libmachine: (calico-676605) DBG | Writing magic tar header
	I0704 00:32:02.377402   71333 main.go:141] libmachine: (calico-676605) DBG | Writing SSH key tar header
	I0704 00:32:02.377416   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:02.377342   71455 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/calico-676605 ...
	I0704 00:32:02.377523   71333 main.go:141] libmachine: (calico-676605) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/calico-676605
	I0704 00:32:02.377558   71333 main.go:141] libmachine: (calico-676605) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines
	I0704 00:32:02.377571   71333 main.go:141] libmachine: (calico-676605) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/calico-676605 (perms=drwx------)
	I0704 00:32:02.377586   71333 main.go:141] libmachine: (calico-676605) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines (perms=drwxr-xr-x)
	I0704 00:32:02.377601   71333 main.go:141] libmachine: (calico-676605) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube (perms=drwxr-xr-x)
	I0704 00:32:02.377618   71333 main.go:141] libmachine: (calico-676605) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396 (perms=drwxrwxr-x)
	I0704 00:32:02.377632   71333 main.go:141] libmachine: (calico-676605) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0704 00:32:02.377646   71333 main.go:141] libmachine: (calico-676605) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0704 00:32:02.377665   71333 main.go:141] libmachine: (calico-676605) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396
	I0704 00:32:02.377677   71333 main.go:141] libmachine: (calico-676605) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0704 00:32:02.377692   71333 main.go:141] libmachine: (calico-676605) DBG | Checking permissions on dir: /home/jenkins
	I0704 00:32:02.377705   71333 main.go:141] libmachine: (calico-676605) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0704 00:32:02.377717   71333 main.go:141] libmachine: (calico-676605) Creating domain...
	I0704 00:32:02.377729   71333 main.go:141] libmachine: (calico-676605) DBG | Checking permissions on dir: /home
	I0704 00:32:02.377743   71333 main.go:141] libmachine: (calico-676605) DBG | Skipping /home - not owner
	I0704 00:32:02.378904   71333 main.go:141] libmachine: (calico-676605) define libvirt domain using xml: 
	I0704 00:32:02.378923   71333 main.go:141] libmachine: (calico-676605) <domain type='kvm'>
	I0704 00:32:02.378933   71333 main.go:141] libmachine: (calico-676605)   <name>calico-676605</name>
	I0704 00:32:02.378940   71333 main.go:141] libmachine: (calico-676605)   <memory unit='MiB'>3072</memory>
	I0704 00:32:02.378954   71333 main.go:141] libmachine: (calico-676605)   <vcpu>2</vcpu>
	I0704 00:32:02.378965   71333 main.go:141] libmachine: (calico-676605)   <features>
	I0704 00:32:02.378994   71333 main.go:141] libmachine: (calico-676605)     <acpi/>
	I0704 00:32:02.379004   71333 main.go:141] libmachine: (calico-676605)     <apic/>
	I0704 00:32:02.379012   71333 main.go:141] libmachine: (calico-676605)     <pae/>
	I0704 00:32:02.379021   71333 main.go:141] libmachine: (calico-676605)     
	I0704 00:32:02.379032   71333 main.go:141] libmachine: (calico-676605)   </features>
	I0704 00:32:02.379040   71333 main.go:141] libmachine: (calico-676605)   <cpu mode='host-passthrough'>
	I0704 00:32:02.379064   71333 main.go:141] libmachine: (calico-676605)   
	I0704 00:32:02.379088   71333 main.go:141] libmachine: (calico-676605)   </cpu>
	I0704 00:32:02.379097   71333 main.go:141] libmachine: (calico-676605)   <os>
	I0704 00:32:02.379109   71333 main.go:141] libmachine: (calico-676605)     <type>hvm</type>
	I0704 00:32:02.379120   71333 main.go:141] libmachine: (calico-676605)     <boot dev='cdrom'/>
	I0704 00:32:02.379131   71333 main.go:141] libmachine: (calico-676605)     <boot dev='hd'/>
	I0704 00:32:02.379143   71333 main.go:141] libmachine: (calico-676605)     <bootmenu enable='no'/>
	I0704 00:32:02.379152   71333 main.go:141] libmachine: (calico-676605)   </os>
	I0704 00:32:02.379161   71333 main.go:141] libmachine: (calico-676605)   <devices>
	I0704 00:32:02.379169   71333 main.go:141] libmachine: (calico-676605)     <disk type='file' device='cdrom'>
	I0704 00:32:02.379183   71333 main.go:141] libmachine: (calico-676605)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/calico-676605/boot2docker.iso'/>
	I0704 00:32:02.379204   71333 main.go:141] libmachine: (calico-676605)       <target dev='hdc' bus='scsi'/>
	I0704 00:32:02.379215   71333 main.go:141] libmachine: (calico-676605)       <readonly/>
	I0704 00:32:02.379223   71333 main.go:141] libmachine: (calico-676605)     </disk>
	I0704 00:32:02.379240   71333 main.go:141] libmachine: (calico-676605)     <disk type='file' device='disk'>
	I0704 00:32:02.379265   71333 main.go:141] libmachine: (calico-676605)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0704 00:32:02.379303   71333 main.go:141] libmachine: (calico-676605)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/calico-676605/calico-676605.rawdisk'/>
	I0704 00:32:02.379326   71333 main.go:141] libmachine: (calico-676605)       <target dev='hda' bus='virtio'/>
	I0704 00:32:02.379341   71333 main.go:141] libmachine: (calico-676605)     </disk>
	I0704 00:32:02.379353   71333 main.go:141] libmachine: (calico-676605)     <interface type='network'>
	I0704 00:32:02.379366   71333 main.go:141] libmachine: (calico-676605)       <source network='mk-calico-676605'/>
	I0704 00:32:02.379376   71333 main.go:141] libmachine: (calico-676605)       <model type='virtio'/>
	I0704 00:32:02.379384   71333 main.go:141] libmachine: (calico-676605)     </interface>
	I0704 00:32:02.379396   71333 main.go:141] libmachine: (calico-676605)     <interface type='network'>
	I0704 00:32:02.379423   71333 main.go:141] libmachine: (calico-676605)       <source network='default'/>
	I0704 00:32:02.379453   71333 main.go:141] libmachine: (calico-676605)       <model type='virtio'/>
	I0704 00:32:02.379467   71333 main.go:141] libmachine: (calico-676605)     </interface>
	I0704 00:32:02.379478   71333 main.go:141] libmachine: (calico-676605)     <serial type='pty'>
	I0704 00:32:02.379488   71333 main.go:141] libmachine: (calico-676605)       <target port='0'/>
	I0704 00:32:02.379494   71333 main.go:141] libmachine: (calico-676605)     </serial>
	I0704 00:32:02.379506   71333 main.go:141] libmachine: (calico-676605)     <console type='pty'>
	I0704 00:32:02.379514   71333 main.go:141] libmachine: (calico-676605)       <target type='serial' port='0'/>
	I0704 00:32:02.379538   71333 main.go:141] libmachine: (calico-676605)     </console>
	I0704 00:32:02.379552   71333 main.go:141] libmachine: (calico-676605)     <rng model='virtio'>
	I0704 00:32:02.379564   71333 main.go:141] libmachine: (calico-676605)       <backend model='random'>/dev/random</backend>
	I0704 00:32:02.379573   71333 main.go:141] libmachine: (calico-676605)     </rng>
	I0704 00:32:02.379580   71333 main.go:141] libmachine: (calico-676605)     
	I0704 00:32:02.379589   71333 main.go:141] libmachine: (calico-676605)     
	I0704 00:32:02.379597   71333 main.go:141] libmachine: (calico-676605)   </devices>
	I0704 00:32:02.379606   71333 main.go:141] libmachine: (calico-676605) </domain>
	I0704 00:32:02.379616   71333 main.go:141] libmachine: (calico-676605) 
	I0704 00:32:02.384265   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:b7:87:a5 in network default
	I0704 00:32:02.384897   71333 main.go:141] libmachine: (calico-676605) Ensuring networks are active...
	I0704 00:32:02.384921   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:02.385646   71333 main.go:141] libmachine: (calico-676605) Ensuring network default is active
	I0704 00:32:02.385957   71333 main.go:141] libmachine: (calico-676605) Ensuring network mk-calico-676605 is active
	I0704 00:32:02.386579   71333 main.go:141] libmachine: (calico-676605) Getting domain xml...
	I0704 00:32:02.387263   71333 main.go:141] libmachine: (calico-676605) Creating domain...
	I0704 00:31:59.444882   69888 pod_ready.go:102] pod "coredns-7db6d8ff4d-jj68c" in "kube-system" namespace has status "Ready":"False"
	I0704 00:32:01.946027   69888 pod_ready.go:102] pod "coredns-7db6d8ff4d-jj68c" in "kube-system" namespace has status "Ready":"False"
	I0704 00:32:01.137361   70815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:32:01.165603   70815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0704 00:32:01.192898   70815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0704 00:32:01.219767   70815 provision.go:87] duration metric: took 341.706958ms to configureAuth
	I0704 00:32:01.219800   70815 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:32:01.220013   70815 config.go:182] Loaded profile config "kindnet-676605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:32:01.220087   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHHostname
	I0704 00:32:01.222857   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:01.223234   70815 main.go:141] libmachine: (kindnet-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c7:f6", ip: ""} in network mk-kindnet-676605: {Iface:virbr1 ExpiryTime:2024-07-04 01:31:51 +0000 UTC Type:0 Mac:52:54:00:b0:c7:f6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:kindnet-676605 Clientid:01:52:54:00:b0:c7:f6}
	I0704 00:32:01.223258   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:01.223427   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHPort
	I0704 00:32:01.223661   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHKeyPath
	I0704 00:32:01.223866   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHKeyPath
	I0704 00:32:01.224050   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHUsername
	I0704 00:32:01.224244   70815 main.go:141] libmachine: Using SSH client type: native
	I0704 00:32:01.224413   70815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0704 00:32:01.224429   70815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:32:01.525786   70815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:32:01.525819   70815 main.go:141] libmachine: Checking connection to Docker...
	I0704 00:32:01.526052   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetURL
	I0704 00:32:01.528133   70815 main.go:141] libmachine: (kindnet-676605) DBG | Using libvirt version 6000000
	I0704 00:32:01.530461   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:01.530709   70815 main.go:141] libmachine: (kindnet-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c7:f6", ip: ""} in network mk-kindnet-676605: {Iface:virbr1 ExpiryTime:2024-07-04 01:31:51 +0000 UTC Type:0 Mac:52:54:00:b0:c7:f6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:kindnet-676605 Clientid:01:52:54:00:b0:c7:f6}
	I0704 00:32:01.530735   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:01.530998   70815 main.go:141] libmachine: Docker is up and running!
	I0704 00:32:01.531012   70815 main.go:141] libmachine: Reticulating splines...
	I0704 00:32:01.531019   70815 client.go:171] duration metric: took 25.334293047s to LocalClient.Create
	I0704 00:32:01.531039   70815 start.go:167] duration metric: took 25.334358018s to libmachine.API.Create "kindnet-676605"
	I0704 00:32:01.531052   70815 start.go:293] postStartSetup for "kindnet-676605" (driver="kvm2")
	I0704 00:32:01.531062   70815 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:32:01.531079   70815 main.go:141] libmachine: (kindnet-676605) Calling .DriverName
	I0704 00:32:01.531313   70815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:32:01.531336   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHHostname
	I0704 00:32:01.533382   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:01.533777   70815 main.go:141] libmachine: (kindnet-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c7:f6", ip: ""} in network mk-kindnet-676605: {Iface:virbr1 ExpiryTime:2024-07-04 01:31:51 +0000 UTC Type:0 Mac:52:54:00:b0:c7:f6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:kindnet-676605 Clientid:01:52:54:00:b0:c7:f6}
	I0704 00:32:01.533799   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:01.533983   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHPort
	I0704 00:32:01.534217   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHKeyPath
	I0704 00:32:01.534384   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHUsername
	I0704 00:32:01.534534   70815 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/kindnet-676605/id_rsa Username:docker}
	I0704 00:32:01.620225   70815 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:32:01.624910   70815 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:32:01.624935   70815 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:32:01.625010   70815 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:32:01.625110   70815 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:32:01.625230   70815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:32:01.637387   70815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:32:01.666093   70815 start.go:296] duration metric: took 135.027164ms for postStartSetup
	I0704 00:32:01.666146   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetConfigRaw
	I0704 00:32:01.666755   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetIP
	I0704 00:32:01.669477   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:01.669871   70815 main.go:141] libmachine: (kindnet-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c7:f6", ip: ""} in network mk-kindnet-676605: {Iface:virbr1 ExpiryTime:2024-07-04 01:31:51 +0000 UTC Type:0 Mac:52:54:00:b0:c7:f6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:kindnet-676605 Clientid:01:52:54:00:b0:c7:f6}
	I0704 00:32:01.669898   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:01.670186   70815 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/config.json ...
	I0704 00:32:01.670405   70815 start.go:128] duration metric: took 25.496323792s to createHost
	I0704 00:32:01.670433   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHHostname
	I0704 00:32:01.672850   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:01.673251   70815 main.go:141] libmachine: (kindnet-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c7:f6", ip: ""} in network mk-kindnet-676605: {Iface:virbr1 ExpiryTime:2024-07-04 01:31:51 +0000 UTC Type:0 Mac:52:54:00:b0:c7:f6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:kindnet-676605 Clientid:01:52:54:00:b0:c7:f6}
	I0704 00:32:01.673281   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:01.673396   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHPort
	I0704 00:32:01.673600   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHKeyPath
	I0704 00:32:01.673744   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHKeyPath
	I0704 00:32:01.673864   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHUsername
	I0704 00:32:01.673975   70815 main.go:141] libmachine: Using SSH client type: native
	I0704 00:32:01.674126   70815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0704 00:32:01.674140   70815 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:32:01.785521   70815 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720053121.773269906
	
	I0704 00:32:01.785543   70815 fix.go:216] guest clock: 1720053121.773269906
	I0704 00:32:01.785551   70815 fix.go:229] Guest: 2024-07-04 00:32:01.773269906 +0000 UTC Remote: 2024-07-04 00:32:01.670418436 +0000 UTC m=+25.620358857 (delta=102.85147ms)
	I0704 00:32:01.785594   70815 fix.go:200] guest clock delta is within tolerance: 102.85147ms
	I0704 00:32:01.785599   70815 start.go:83] releasing machines lock for "kindnet-676605", held for 25.611662561s
	I0704 00:32:01.785630   70815 main.go:141] libmachine: (kindnet-676605) Calling .DriverName
	I0704 00:32:01.785937   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetIP
	I0704 00:32:01.789352   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:01.789874   70815 main.go:141] libmachine: (kindnet-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c7:f6", ip: ""} in network mk-kindnet-676605: {Iface:virbr1 ExpiryTime:2024-07-04 01:31:51 +0000 UTC Type:0 Mac:52:54:00:b0:c7:f6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:kindnet-676605 Clientid:01:52:54:00:b0:c7:f6}
	I0704 00:32:01.789901   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:01.790127   70815 main.go:141] libmachine: (kindnet-676605) Calling .DriverName
	I0704 00:32:01.790738   70815 main.go:141] libmachine: (kindnet-676605) Calling .DriverName
	I0704 00:32:01.790917   70815 main.go:141] libmachine: (kindnet-676605) Calling .DriverName
	I0704 00:32:01.790967   70815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:32:01.791011   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHHostname
	I0704 00:32:01.791153   70815 ssh_runner.go:195] Run: cat /version.json
	I0704 00:32:01.791176   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHHostname
	I0704 00:32:01.794187   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:01.794218   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:01.794569   70815 main.go:141] libmachine: (kindnet-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c7:f6", ip: ""} in network mk-kindnet-676605: {Iface:virbr1 ExpiryTime:2024-07-04 01:31:51 +0000 UTC Type:0 Mac:52:54:00:b0:c7:f6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:kindnet-676605 Clientid:01:52:54:00:b0:c7:f6}
	I0704 00:32:01.794595   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:01.794645   70815 main.go:141] libmachine: (kindnet-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c7:f6", ip: ""} in network mk-kindnet-676605: {Iface:virbr1 ExpiryTime:2024-07-04 01:31:51 +0000 UTC Type:0 Mac:52:54:00:b0:c7:f6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:kindnet-676605 Clientid:01:52:54:00:b0:c7:f6}
	I0704 00:32:01.794660   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:01.794785   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHPort
	I0704 00:32:01.794880   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHPort
	I0704 00:32:01.794968   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHKeyPath
	I0704 00:32:01.795047   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHKeyPath
	I0704 00:32:01.795106   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHUsername
	I0704 00:32:01.795173   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHUsername
	I0704 00:32:01.795249   70815 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/kindnet-676605/id_rsa Username:docker}
	I0704 00:32:01.795367   70815 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/kindnet-676605/id_rsa Username:docker}
	I0704 00:32:01.882045   70815 ssh_runner.go:195] Run: systemctl --version
	I0704 00:32:01.905581   70815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:32:02.080797   70815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:32:02.088070   70815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:32:02.088143   70815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:32:02.105377   70815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:32:02.105400   70815 start.go:494] detecting cgroup driver to use...
	I0704 00:32:02.105470   70815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:32:02.128541   70815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:32:02.144654   70815 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:32:02.144711   70815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:32:02.160066   70815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:32:02.177574   70815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:32:02.305513   70815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:32:02.464079   70815 docker.go:233] disabling docker service ...
	I0704 00:32:02.464145   70815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:32:02.480733   70815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:32:02.495926   70815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:32:02.645760   70815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:32:02.767847   70815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:32:02.784657   70815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:32:02.806664   70815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:32:02.806738   70815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:32:02.819158   70815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:32:02.819233   70815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:32:02.832606   70815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:32:02.845943   70815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:32:02.858410   70815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:32:02.870661   70815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:32:02.882713   70815 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:32:02.902396   70815 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:32:02.915894   70815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:32:02.928303   70815 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:32:02.928379   70815 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:32:02.946422   70815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:32:02.957984   70815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:32:03.074193   70815 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:32:03.227093   70815 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:32:03.227161   70815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:32:03.233020   70815 start.go:562] Will wait 60s for crictl version
	I0704 00:32:03.233070   70815 ssh_runner.go:195] Run: which crictl
	I0704 00:32:03.238177   70815 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:32:03.284551   70815 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:32:03.284625   70815 ssh_runner.go:195] Run: crio --version
	I0704 00:32:03.314617   70815 ssh_runner.go:195] Run: crio --version
	I0704 00:32:03.346789   70815 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:32:03.348130   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetIP
	I0704 00:32:03.351463   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:03.351977   70815 main.go:141] libmachine: (kindnet-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c7:f6", ip: ""} in network mk-kindnet-676605: {Iface:virbr1 ExpiryTime:2024-07-04 01:31:51 +0000 UTC Type:0 Mac:52:54:00:b0:c7:f6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:kindnet-676605 Clientid:01:52:54:00:b0:c7:f6}
	I0704 00:32:03.352009   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:03.352359   70815 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0704 00:32:03.357217   70815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:32:03.371923   70815 kubeadm.go:877] updating cluster {Name:kindnet-676605 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kindnet-676605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:32:03.372065   70815 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:32:03.372140   70815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:32:03.411374   70815 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:32:03.411445   70815 ssh_runner.go:195] Run: which lz4
	I0704 00:32:03.416124   70815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0704 00:32:03.421087   70815 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:32:03.421117   70815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0704 00:32:05.058829   70815 crio.go:462] duration metric: took 1.642737712s to copy over tarball
	I0704 00:32:05.058914   70815 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:32:03.758291   71333 main.go:141] libmachine: (calico-676605) Waiting to get IP...
	I0704 00:32:03.759385   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:03.759907   71333 main.go:141] libmachine: (calico-676605) DBG | unable to find current IP address of domain calico-676605 in network mk-calico-676605
	I0704 00:32:03.760001   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:03.759906   71455 retry.go:31] will retry after 282.037546ms: waiting for machine to come up
	I0704 00:32:04.043630   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:04.044319   71333 main.go:141] libmachine: (calico-676605) DBG | unable to find current IP address of domain calico-676605 in network mk-calico-676605
	I0704 00:32:04.044353   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:04.044282   71455 retry.go:31] will retry after 368.952589ms: waiting for machine to come up
	I0704 00:32:04.414999   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:04.415630   71333 main.go:141] libmachine: (calico-676605) DBG | unable to find current IP address of domain calico-676605 in network mk-calico-676605
	I0704 00:32:04.415655   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:04.415554   71455 retry.go:31] will retry after 373.819063ms: waiting for machine to come up
	I0704 00:32:04.791441   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:04.792080   71333 main.go:141] libmachine: (calico-676605) DBG | unable to find current IP address of domain calico-676605 in network mk-calico-676605
	I0704 00:32:04.792104   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:04.791985   71455 retry.go:31] will retry after 439.282091ms: waiting for machine to come up
	I0704 00:32:05.232448   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:05.232946   71333 main.go:141] libmachine: (calico-676605) DBG | unable to find current IP address of domain calico-676605 in network mk-calico-676605
	I0704 00:32:05.232973   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:05.232924   71455 retry.go:31] will retry after 600.718725ms: waiting for machine to come up
	I0704 00:32:05.836575   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:05.837321   71333 main.go:141] libmachine: (calico-676605) DBG | unable to find current IP address of domain calico-676605 in network mk-calico-676605
	I0704 00:32:05.837374   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:05.837281   71455 retry.go:31] will retry after 790.008277ms: waiting for machine to come up
	I0704 00:32:06.629135   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:06.629628   71333 main.go:141] libmachine: (calico-676605) DBG | unable to find current IP address of domain calico-676605 in network mk-calico-676605
	I0704 00:32:06.629657   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:06.629588   71455 retry.go:31] will retry after 1.083077291s: waiting for machine to come up
	I0704 00:32:04.446754   69888 pod_ready.go:102] pod "coredns-7db6d8ff4d-jj68c" in "kube-system" namespace has status "Ready":"False"
	I0704 00:32:06.945701   69888 pod_ready.go:102] pod "coredns-7db6d8ff4d-jj68c" in "kube-system" namespace has status "Ready":"False"
	I0704 00:32:07.809952   70815 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.751011655s)
	I0704 00:32:07.809981   70815 crio.go:469] duration metric: took 2.751112875s to extract the tarball
	I0704 00:32:07.809994   70815 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:32:07.858196   70815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:32:07.906811   70815 crio.go:514] all images are preloaded for cri-o runtime.
	I0704 00:32:07.906839   70815 cache_images.go:84] Images are preloaded, skipping loading
	I0704 00:32:07.906848   70815 kubeadm.go:928] updating node { 192.168.39.227 8443 v1.30.2 crio true true} ...
	I0704 00:32:07.907004   70815 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-676605 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:kindnet-676605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0704 00:32:07.907088   70815 ssh_runner.go:195] Run: crio config
	I0704 00:32:07.970333   70815 cni.go:84] Creating CNI manager for "kindnet"
	I0704 00:32:07.970360   70815 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:32:07.970390   70815 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-676605 NodeName:kindnet-676605 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:32:07.970541   70815 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-676605"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:32:07.970602   70815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:32:07.981638   70815 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:32:07.981713   70815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:32:07.993189   70815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0704 00:32:08.013690   70815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:32:08.032778   70815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0704 00:32:08.052404   70815 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0704 00:32:08.057400   70815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:32:08.072512   70815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:32:08.208540   70815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:32:08.228932   70815 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605 for IP: 192.168.39.227
	I0704 00:32:08.228954   70815 certs.go:194] generating shared ca certs ...
	I0704 00:32:08.228975   70815 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:32:08.229136   70815 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:32:08.229233   70815 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:32:08.229247   70815 certs.go:256] generating profile certs ...
	I0704 00:32:08.229315   70815 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/client.key
	I0704 00:32:08.229335   70815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/client.crt with IP's: []
	I0704 00:32:08.372150   70815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/client.crt ...
	I0704 00:32:08.372190   70815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/client.crt: {Name:mk2ae62387ee3bcbf15cacbe1a5342b8ae207671 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:32:08.372390   70815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/client.key ...
	I0704 00:32:08.372404   70815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/client.key: {Name:mkb88f18c09e4627374246071f5c952fd0653e4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:32:08.372509   70815 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/apiserver.key.89d43d4c
	I0704 00:32:08.372525   70815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/apiserver.crt.89d43d4c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227]
	I0704 00:32:08.615371   70815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/apiserver.crt.89d43d4c ...
	I0704 00:32:08.615400   70815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/apiserver.crt.89d43d4c: {Name:mk8daba41f5722273b8c19cbe3c6015ccaed2a25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:32:08.615571   70815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/apiserver.key.89d43d4c ...
	I0704 00:32:08.615586   70815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/apiserver.key.89d43d4c: {Name:mkc0b0b56d413d0f0258d20fe75048730201f731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:32:08.615663   70815 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/apiserver.crt.89d43d4c -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/apiserver.crt
	I0704 00:32:08.615731   70815 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/apiserver.key.89d43d4c -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/apiserver.key
	I0704 00:32:08.615779   70815 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/proxy-client.key
	I0704 00:32:08.615793   70815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/proxy-client.crt with IP's: []
	I0704 00:32:08.693254   70815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/proxy-client.crt ...
	I0704 00:32:08.693284   70815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/proxy-client.crt: {Name:mk1d34fb0aeb526ff8740aef153ab775c39cb230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:32:08.693451   70815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/proxy-client.key ...
	I0704 00:32:08.693461   70815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/proxy-client.key: {Name:mkf07eda441f723685ecd4751d61b4aeb27c7db4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:32:08.693620   70815 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:32:08.693656   70815 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:32:08.693666   70815 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:32:08.693689   70815 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:32:08.693712   70815 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:32:08.693732   70815 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:32:08.693766   70815 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:32:08.694428   70815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:32:08.724631   70815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:32:08.755123   70815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:32:08.788775   70815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:32:08.818607   70815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0704 00:32:08.846484   70815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0704 00:32:08.874301   70815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:32:08.903153   70815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/kindnet-676605/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0704 00:32:08.946173   70815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:32:08.979812   70815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:32:09.014982   70815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:32:09.044795   70815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:32:09.064731   70815 ssh_runner.go:195] Run: openssl version
	I0704 00:32:09.072415   70815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:32:09.084959   70815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:32:09.091756   70815 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:32:09.091823   70815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:32:09.099406   70815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:32:09.112778   70815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:32:09.125396   70815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:32:09.130910   70815 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:32:09.130977   70815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:32:09.137666   70815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:32:09.149676   70815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:32:09.162771   70815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:32:09.168185   70815 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:32:09.168292   70815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:32:09.174921   70815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:32:09.189041   70815 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:32:09.194038   70815 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0704 00:32:09.194107   70815 kubeadm.go:391] StartCluster: {Name:kindnet-676605 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2
ClusterName:kindnet-676605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:32:09.194218   70815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:32:09.194283   70815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:32:09.240906   70815 cri.go:89] found id: ""
	I0704 00:32:09.240976   70815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0704 00:32:09.252239   70815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:32:09.264148   70815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:32:09.275376   70815 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:32:09.275405   70815 kubeadm.go:156] found existing configuration files:
	
	I0704 00:32:09.275458   70815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:32:09.286014   70815 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:32:09.286103   70815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:32:09.297415   70815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:32:09.308106   70815 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:32:09.308186   70815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:32:09.319506   70815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:32:09.330403   70815 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:32:09.330477   70815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:32:09.341726   70815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:32:09.353111   70815 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:32:09.353189   70815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:32:09.364912   70815 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:32:09.431316   70815 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0704 00:32:09.431748   70815 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:32:09.603987   70815 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:32:09.604170   70815 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:32:09.604306   70815 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:32:09.876320   70815 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:32:09.879301   70815 out.go:204]   - Generating certificates and keys ...
	I0704 00:32:09.879436   70815 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:32:09.879549   70815 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:32:10.039288   70815 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0704 00:32:10.386459   70815 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0704 00:32:10.753658   70815 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0704 00:32:10.947969   70815 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0704 00:32:07.714712   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:07.715319   71333 main.go:141] libmachine: (calico-676605) DBG | unable to find current IP address of domain calico-676605 in network mk-calico-676605
	I0704 00:32:07.715346   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:07.715275   71455 retry.go:31] will retry after 1.236932638s: waiting for machine to come up
	I0704 00:32:08.953552   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:08.954116   71333 main.go:141] libmachine: (calico-676605) DBG | unable to find current IP address of domain calico-676605 in network mk-calico-676605
	I0704 00:32:08.954145   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:08.954065   71455 retry.go:31] will retry after 1.778602494s: waiting for machine to come up
	I0704 00:32:10.734678   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:10.735243   71333 main.go:141] libmachine: (calico-676605) DBG | unable to find current IP address of domain calico-676605 in network mk-calico-676605
	I0704 00:32:10.735280   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:10.735168   71455 retry.go:31] will retry after 1.535641476s: waiting for machine to come up
	I0704 00:32:12.272386   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:12.272827   71333 main.go:141] libmachine: (calico-676605) DBG | unable to find current IP address of domain calico-676605 in network mk-calico-676605
	I0704 00:32:12.272854   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:12.272783   71455 retry.go:31] will retry after 2.102119697s: waiting for machine to come up
	I0704 00:32:09.094821   69888 pod_ready.go:102] pod "coredns-7db6d8ff4d-jj68c" in "kube-system" namespace has status "Ready":"False"
	I0704 00:32:11.445136   69888 pod_ready.go:102] pod "coredns-7db6d8ff4d-jj68c" in "kube-system" namespace has status "Ready":"False"
	I0704 00:32:11.945868   69888 pod_ready.go:92] pod "coredns-7db6d8ff4d-jj68c" in "kube-system" namespace has status "Ready":"True"
	I0704 00:32:11.945897   69888 pod_ready.go:81] duration metric: took 28.508894047s for pod "coredns-7db6d8ff4d-jj68c" in "kube-system" namespace to be "Ready" ...
	I0704 00:32:11.945909   69888 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-676605" in "kube-system" namespace to be "Ready" ...
	I0704 00:32:11.958492   69888 pod_ready.go:92] pod "etcd-auto-676605" in "kube-system" namespace has status "Ready":"True"
	I0704 00:32:11.958521   69888 pod_ready.go:81] duration metric: took 12.603542ms for pod "etcd-auto-676605" in "kube-system" namespace to be "Ready" ...
	I0704 00:32:11.958534   69888 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-676605" in "kube-system" namespace to be "Ready" ...
	I0704 00:32:11.965207   69888 pod_ready.go:92] pod "kube-apiserver-auto-676605" in "kube-system" namespace has status "Ready":"True"
	I0704 00:32:11.965232   69888 pod_ready.go:81] duration metric: took 6.689149ms for pod "kube-apiserver-auto-676605" in "kube-system" namespace to be "Ready" ...
	I0704 00:32:11.965246   69888 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-676605" in "kube-system" namespace to be "Ready" ...
	I0704 00:32:11.971914   69888 pod_ready.go:92] pod "kube-controller-manager-auto-676605" in "kube-system" namespace has status "Ready":"True"
	I0704 00:32:11.971938   69888 pod_ready.go:81] duration metric: took 6.683735ms for pod "kube-controller-manager-auto-676605" in "kube-system" namespace to be "Ready" ...
	I0704 00:32:11.971950   69888 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-jtdsq" in "kube-system" namespace to be "Ready" ...
	I0704 00:32:11.977602   69888 pod_ready.go:92] pod "kube-proxy-jtdsq" in "kube-system" namespace has status "Ready":"True"
	I0704 00:32:11.977626   69888 pod_ready.go:81] duration metric: took 5.668247ms for pod "kube-proxy-jtdsq" in "kube-system" namespace to be "Ready" ...
	I0704 00:32:11.977637   69888 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-676605" in "kube-system" namespace to be "Ready" ...
	I0704 00:32:12.341884   69888 pod_ready.go:92] pod "kube-scheduler-auto-676605" in "kube-system" namespace has status "Ready":"True"
	I0704 00:32:12.341916   69888 pod_ready.go:81] duration metric: took 364.270675ms for pod "kube-scheduler-auto-676605" in "kube-system" namespace to be "Ready" ...
	I0704 00:32:12.341928   69888 pod_ready.go:38] duration metric: took 40.921910651s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:32:12.341951   69888 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:32:12.342013   69888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:32:12.367996   69888 api_server.go:72] duration metric: took 41.305560174s to wait for apiserver process to appear ...
	I0704 00:32:12.368033   69888 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:32:12.368062   69888 api_server.go:253] Checking apiserver healthz at https://192.168.61.17:8443/healthz ...
	I0704 00:32:12.372652   69888 api_server.go:279] https://192.168.61.17:8443/healthz returned 200:
	ok
	I0704 00:32:12.374020   69888 api_server.go:141] control plane version: v1.30.2
	I0704 00:32:12.374048   69888 api_server.go:131] duration metric: took 6.006238ms to wait for apiserver health ...
	I0704 00:32:12.374058   69888 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:32:12.543648   69888 system_pods.go:59] 7 kube-system pods found
	I0704 00:32:12.543683   69888 system_pods.go:61] "coredns-7db6d8ff4d-jj68c" [0ade0009-0837-4e1b-9ae6-e31e0970c47c] Running
	I0704 00:32:12.543691   69888 system_pods.go:61] "etcd-auto-676605" [e44d33e8-b27a-400f-a78e-f658c08165c7] Running
	I0704 00:32:12.543696   69888 system_pods.go:61] "kube-apiserver-auto-676605" [b6120ea1-2c6a-40d3-9021-1f1956aa1caa] Running
	I0704 00:32:12.543701   69888 system_pods.go:61] "kube-controller-manager-auto-676605" [7e6f5bdb-bae3-4756-a13d-e298d5f5d215] Running
	I0704 00:32:12.543706   69888 system_pods.go:61] "kube-proxy-jtdsq" [d3131056-1b83-44d1-ad16-f2f978b514cf] Running
	I0704 00:32:12.543714   69888 system_pods.go:61] "kube-scheduler-auto-676605" [60859d2f-a9e4-4ae2-a9d5-d1beab713420] Running
	I0704 00:32:12.543725   69888 system_pods.go:61] "storage-provisioner" [9eac9c35-b0a3-4945-812f-f88fcde47544] Running
	I0704 00:32:12.543733   69888 system_pods.go:74] duration metric: took 169.666881ms to wait for pod list to return data ...
	I0704 00:32:12.543749   69888 default_sa.go:34] waiting for default service account to be created ...
	I0704 00:32:11.359360   70815 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0704 00:32:11.359671   70815 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kindnet-676605 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0704 00:32:11.519788   70815 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0704 00:32:11.520153   70815 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kindnet-676605 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0704 00:32:11.652412   70815 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0704 00:32:11.963866   70815 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0704 00:32:12.067404   70815 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0704 00:32:12.067521   70815 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:32:12.279844   70815 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:32:12.452514   70815 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0704 00:32:12.515378   70815 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:32:12.727065   70815 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:32:12.892391   70815 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:32:12.893105   70815 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:32:12.895649   70815 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:32:12.741667   69888 default_sa.go:45] found service account: "default"
	I0704 00:32:12.741698   69888 default_sa.go:55] duration metric: took 197.940713ms for default service account to be created ...
	I0704 00:32:12.741709   69888 system_pods.go:116] waiting for k8s-apps to be running ...
	I0704 00:32:12.948525   69888 system_pods.go:86] 7 kube-system pods found
	I0704 00:32:12.948561   69888 system_pods.go:89] "coredns-7db6d8ff4d-jj68c" [0ade0009-0837-4e1b-9ae6-e31e0970c47c] Running
	I0704 00:32:12.948570   69888 system_pods.go:89] "etcd-auto-676605" [e44d33e8-b27a-400f-a78e-f658c08165c7] Running
	I0704 00:32:12.948577   69888 system_pods.go:89] "kube-apiserver-auto-676605" [b6120ea1-2c6a-40d3-9021-1f1956aa1caa] Running
	I0704 00:32:12.948584   69888 system_pods.go:89] "kube-controller-manager-auto-676605" [7e6f5bdb-bae3-4756-a13d-e298d5f5d215] Running
	I0704 00:32:12.948591   69888 system_pods.go:89] "kube-proxy-jtdsq" [d3131056-1b83-44d1-ad16-f2f978b514cf] Running
	I0704 00:32:12.948598   69888 system_pods.go:89] "kube-scheduler-auto-676605" [60859d2f-a9e4-4ae2-a9d5-d1beab713420] Running
	I0704 00:32:12.948604   69888 system_pods.go:89] "storage-provisioner" [9eac9c35-b0a3-4945-812f-f88fcde47544] Running
	I0704 00:32:12.948612   69888 system_pods.go:126] duration metric: took 206.895911ms to wait for k8s-apps to be running ...
	I0704 00:32:12.948620   69888 system_svc.go:44] waiting for kubelet service to be running ....
	I0704 00:32:12.948672   69888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:32:12.971789   69888 system_svc.go:56] duration metric: took 23.157344ms WaitForService to wait for kubelet
	I0704 00:32:12.971826   69888 kubeadm.go:576] duration metric: took 41.90939632s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:32:12.971851   69888 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:32:13.142453   69888 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:32:13.142488   69888 node_conditions.go:123] node cpu capacity is 2
	I0704 00:32:13.142505   69888 node_conditions.go:105] duration metric: took 170.648123ms to run NodePressure ...
	I0704 00:32:13.142520   69888 start.go:240] waiting for startup goroutines ...
	I0704 00:32:13.142530   69888 start.go:245] waiting for cluster config update ...
	I0704 00:32:13.142544   69888 start.go:254] writing updated cluster config ...
	I0704 00:32:13.142858   69888 ssh_runner.go:195] Run: rm -f paused
	I0704 00:32:13.206209   69888 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0704 00:32:13.208184   69888 out.go:177] * Done! kubectl is now configured to use "auto-676605" cluster and "default" namespace by default
	I0704 00:32:12.898702   70815 out.go:204]   - Booting up control plane ...
	I0704 00:32:12.898826   70815 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:32:12.898956   70815 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:32:12.899063   70815 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:32:12.916737   70815 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:32:12.917791   70815 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:32:12.917864   70815 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:32:13.065225   70815 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0704 00:32:13.065339   70815 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0704 00:32:14.066500   70815 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001510431s
	I0704 00:32:14.066619   70815 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0704 00:32:14.377114   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:14.377715   71333 main.go:141] libmachine: (calico-676605) DBG | unable to find current IP address of domain calico-676605 in network mk-calico-676605
	I0704 00:32:14.377747   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:14.377666   71455 retry.go:31] will retry after 2.567794924s: waiting for machine to come up
	I0704 00:32:16.947623   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:16.948090   71333 main.go:141] libmachine: (calico-676605) DBG | unable to find current IP address of domain calico-676605 in network mk-calico-676605
	I0704 00:32:16.948119   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:16.948039   71455 retry.go:31] will retry after 4.410210325s: waiting for machine to come up
	I0704 00:32:19.569593   70815 kubeadm.go:309] [api-check] The API server is healthy after 5.503398546s
	I0704 00:32:19.585338   70815 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0704 00:32:19.603683   70815 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0704 00:32:19.639441   70815 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0704 00:32:19.639637   70815 kubeadm.go:309] [mark-control-plane] Marking the node kindnet-676605 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0704 00:32:19.653839   70815 kubeadm.go:309] [bootstrap-token] Using token: glvl6h.q59i1uqqfrqcr9ni
	I0704 00:32:19.655807   70815 out.go:204]   - Configuring RBAC rules ...
	I0704 00:32:19.656001   70815 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0704 00:32:19.667620   70815 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0704 00:32:19.685150   70815 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0704 00:32:19.690101   70815 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0704 00:32:19.695014   70815 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0704 00:32:19.699776   70815 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0704 00:32:19.976360   70815 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0704 00:32:20.408316   70815 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0704 00:32:20.978005   70815 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0704 00:32:20.978027   70815 kubeadm.go:309] 
	I0704 00:32:20.978148   70815 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0704 00:32:20.978169   70815 kubeadm.go:309] 
	I0704 00:32:20.978264   70815 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0704 00:32:20.978279   70815 kubeadm.go:309] 
	I0704 00:32:20.978344   70815 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0704 00:32:20.978448   70815 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0704 00:32:20.978525   70815 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0704 00:32:20.978535   70815 kubeadm.go:309] 
	I0704 00:32:20.978612   70815 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0704 00:32:20.978624   70815 kubeadm.go:309] 
	I0704 00:32:20.978692   70815 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0704 00:32:20.978702   70815 kubeadm.go:309] 
	I0704 00:32:20.978749   70815 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0704 00:32:20.978818   70815 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0704 00:32:20.978909   70815 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0704 00:32:20.978919   70815 kubeadm.go:309] 
	I0704 00:32:20.979033   70815 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0704 00:32:20.979141   70815 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0704 00:32:20.979158   70815 kubeadm.go:309] 
	I0704 00:32:20.979269   70815 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token glvl6h.q59i1uqqfrqcr9ni \
	I0704 00:32:20.979390   70815 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 \
	I0704 00:32:20.979428   70815 kubeadm.go:309] 	--control-plane 
	I0704 00:32:20.979437   70815 kubeadm.go:309] 
	I0704 00:32:20.979545   70815 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0704 00:32:20.979555   70815 kubeadm.go:309] 
	I0704 00:32:20.979678   70815 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token glvl6h.q59i1uqqfrqcr9ni \
	I0704 00:32:20.979821   70815 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 
	I0704 00:32:20.980017   70815 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:32:20.980036   70815 cni.go:84] Creating CNI manager for "kindnet"
	I0704 00:32:20.981893   70815 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0704 00:32:20.983292   70815 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0704 00:32:20.990101   70815 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0704 00:32:20.990125   70815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0704 00:32:21.010876   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0704 00:32:21.361118   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:21.361689   71333 main.go:141] libmachine: (calico-676605) DBG | unable to find current IP address of domain calico-676605 in network mk-calico-676605
	I0704 00:32:21.361715   71333 main.go:141] libmachine: (calico-676605) DBG | I0704 00:32:21.361644   71455 retry.go:31] will retry after 4.615983573s: waiting for machine to come up
	I0704 00:32:21.401958   70815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:32:21.402061   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:21.402102   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-676605 minikube.k8s.io/updated_at=2024_07_04T00_32_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e minikube.k8s.io/name=kindnet-676605 minikube.k8s.io/primary=true
	I0704 00:32:21.626619   70815 ops.go:34] apiserver oom_adj: -16
	I0704 00:32:21.626623   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:22.127467   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:22.627441   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:23.127357   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:23.626918   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:24.127314   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:24.627494   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:25.127557   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:25.627021   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:25.980926   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:25.981435   71333 main.go:141] libmachine: (calico-676605) Found IP for machine: 192.168.72.62
	I0704 00:32:25.981461   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has current primary IP address 192.168.72.62 and MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:25.981470   71333 main.go:141] libmachine: (calico-676605) Reserving static IP address...
	I0704 00:32:25.981837   71333 main.go:141] libmachine: (calico-676605) DBG | unable to find host DHCP lease matching {name: "calico-676605", mac: "52:54:00:3d:98:4d", ip: "192.168.72.62"} in network mk-calico-676605
	I0704 00:32:26.073990   71333 main.go:141] libmachine: (calico-676605) DBG | Getting to WaitForSSH function...
	I0704 00:32:26.074020   71333 main.go:141] libmachine: (calico-676605) Reserved static IP address: 192.168.72.62
	I0704 00:32:26.074138   71333 main.go:141] libmachine: (calico-676605) Waiting for SSH to be available...
	I0704 00:32:26.077282   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:26.077925   71333 main.go:141] libmachine: (calico-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:98:4d", ip: ""} in network mk-calico-676605: {Iface:virbr4 ExpiryTime:2024-07-04 01:32:17 +0000 UTC Type:0 Mac:52:54:00:3d:98:4d Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3d:98:4d}
	I0704 00:32:26.077956   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined IP address 192.168.72.62 and MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:26.078191   71333 main.go:141] libmachine: (calico-676605) DBG | Using SSH client type: external
	I0704 00:32:26.078212   71333 main.go:141] libmachine: (calico-676605) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/calico-676605/id_rsa (-rw-------)
	I0704 00:32:26.078237   71333 main.go:141] libmachine: (calico-676605) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/calico-676605/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:32:26.078254   71333 main.go:141] libmachine: (calico-676605) DBG | About to run SSH command:
	I0704 00:32:26.078266   71333 main.go:141] libmachine: (calico-676605) DBG | exit 0
	I0704 00:32:26.216627   71333 main.go:141] libmachine: (calico-676605) DBG | SSH cmd err, output: <nil>: 
	I0704 00:32:26.216988   71333 main.go:141] libmachine: (calico-676605) KVM machine creation complete!
	I0704 00:32:26.217320   71333 main.go:141] libmachine: (calico-676605) Calling .GetConfigRaw
	I0704 00:32:26.217879   71333 main.go:141] libmachine: (calico-676605) Calling .DriverName
	I0704 00:32:26.218096   71333 main.go:141] libmachine: (calico-676605) Calling .DriverName
	I0704 00:32:26.218315   71333 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0704 00:32:26.218333   71333 main.go:141] libmachine: (calico-676605) Calling .GetState
	I0704 00:32:26.219766   71333 main.go:141] libmachine: Detecting operating system of created instance...
	I0704 00:32:26.219783   71333 main.go:141] libmachine: Waiting for SSH to be available...
	I0704 00:32:26.219792   71333 main.go:141] libmachine: Getting to WaitForSSH function...
	I0704 00:32:26.219801   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHHostname
	I0704 00:32:26.222705   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:26.223148   71333 main.go:141] libmachine: (calico-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:98:4d", ip: ""} in network mk-calico-676605: {Iface:virbr4 ExpiryTime:2024-07-04 01:32:17 +0000 UTC Type:0 Mac:52:54:00:3d:98:4d Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:calico-676605 Clientid:01:52:54:00:3d:98:4d}
	I0704 00:32:26.223176   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined IP address 192.168.72.62 and MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:26.223379   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHPort
	I0704 00:32:26.223585   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHKeyPath
	I0704 00:32:26.223761   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHKeyPath
	I0704 00:32:26.223928   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHUsername
	I0704 00:32:26.224116   71333 main.go:141] libmachine: Using SSH client type: native
	I0704 00:32:26.224349   71333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0704 00:32:26.224363   71333 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0704 00:32:26.335869   71333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:32:26.335910   71333 main.go:141] libmachine: Detecting the provisioner...
	I0704 00:32:26.335921   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHHostname
	I0704 00:32:26.339126   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:26.339729   71333 main.go:141] libmachine: (calico-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:98:4d", ip: ""} in network mk-calico-676605: {Iface:virbr4 ExpiryTime:2024-07-04 01:32:17 +0000 UTC Type:0 Mac:52:54:00:3d:98:4d Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:calico-676605 Clientid:01:52:54:00:3d:98:4d}
	I0704 00:32:26.339758   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined IP address 192.168.72.62 and MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:26.340084   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHPort
	I0704 00:32:26.340382   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHKeyPath
	I0704 00:32:26.340585   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHKeyPath
	I0704 00:32:26.340733   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHUsername
	I0704 00:32:26.340968   71333 main.go:141] libmachine: Using SSH client type: native
	I0704 00:32:26.341177   71333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0704 00:32:26.341195   71333 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0704 00:32:26.457810   71333 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0704 00:32:26.457906   71333 main.go:141] libmachine: found compatible host: buildroot
	I0704 00:32:26.457917   71333 main.go:141] libmachine: Provisioning with buildroot...
	I0704 00:32:26.457927   71333 main.go:141] libmachine: (calico-676605) Calling .GetMachineName
	I0704 00:32:26.458210   71333 buildroot.go:166] provisioning hostname "calico-676605"
	I0704 00:32:26.458233   71333 main.go:141] libmachine: (calico-676605) Calling .GetMachineName
	I0704 00:32:26.458490   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHHostname
	I0704 00:32:26.461649   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:26.462013   71333 main.go:141] libmachine: (calico-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:98:4d", ip: ""} in network mk-calico-676605: {Iface:virbr4 ExpiryTime:2024-07-04 01:32:17 +0000 UTC Type:0 Mac:52:54:00:3d:98:4d Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:calico-676605 Clientid:01:52:54:00:3d:98:4d}
	I0704 00:32:26.462053   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined IP address 192.168.72.62 and MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:26.462197   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHPort
	I0704 00:32:26.462380   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHKeyPath
	I0704 00:32:26.462630   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHKeyPath
	I0704 00:32:26.462781   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHUsername
	I0704 00:32:26.462957   71333 main.go:141] libmachine: Using SSH client type: native
	I0704 00:32:26.463214   71333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0704 00:32:26.463231   71333 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-676605 && echo "calico-676605" | sudo tee /etc/hostname
	I0704 00:32:26.588747   71333 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-676605
	
	I0704 00:32:26.588775   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHHostname
	I0704 00:32:26.591855   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:26.592286   71333 main.go:141] libmachine: (calico-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:98:4d", ip: ""} in network mk-calico-676605: {Iface:virbr4 ExpiryTime:2024-07-04 01:32:17 +0000 UTC Type:0 Mac:52:54:00:3d:98:4d Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:calico-676605 Clientid:01:52:54:00:3d:98:4d}
	I0704 00:32:26.592316   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined IP address 192.168.72.62 and MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:26.592652   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHPort
	I0704 00:32:26.592859   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHKeyPath
	I0704 00:32:26.593027   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHKeyPath
	I0704 00:32:26.593250   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHUsername
	I0704 00:32:26.593420   71333 main.go:141] libmachine: Using SSH client type: native
	I0704 00:32:26.593603   71333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0704 00:32:26.593621   71333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-676605' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-676605/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-676605' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:32:26.713862   71333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:32:26.713907   71333 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:32:26.713931   71333 buildroot.go:174] setting up certificates
	I0704 00:32:26.713945   71333 provision.go:84] configureAuth start
	I0704 00:32:26.713959   71333 main.go:141] libmachine: (calico-676605) Calling .GetMachineName
	I0704 00:32:26.714317   71333 main.go:141] libmachine: (calico-676605) Calling .GetIP
	I0704 00:32:26.717603   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:26.717972   71333 main.go:141] libmachine: (calico-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:98:4d", ip: ""} in network mk-calico-676605: {Iface:virbr4 ExpiryTime:2024-07-04 01:32:17 +0000 UTC Type:0 Mac:52:54:00:3d:98:4d Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:calico-676605 Clientid:01:52:54:00:3d:98:4d}
	I0704 00:32:26.718002   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined IP address 192.168.72.62 and MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:26.718166   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHHostname
	I0704 00:32:26.721173   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:26.721581   71333 main.go:141] libmachine: (calico-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:98:4d", ip: ""} in network mk-calico-676605: {Iface:virbr4 ExpiryTime:2024-07-04 01:32:17 +0000 UTC Type:0 Mac:52:54:00:3d:98:4d Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:calico-676605 Clientid:01:52:54:00:3d:98:4d}
	I0704 00:32:26.721610   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined IP address 192.168.72.62 and MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:26.721798   71333 provision.go:143] copyHostCerts
	I0704 00:32:26.721860   71333 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:32:26.721872   71333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:32:26.721938   71333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:32:26.722096   71333 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:32:26.722109   71333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:32:26.722151   71333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:32:26.722237   71333 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:32:26.722246   71333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:32:26.722274   71333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:32:26.722360   71333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.calico-676605 san=[127.0.0.1 192.168.72.62 calico-676605 localhost minikube]
	I0704 00:32:27.020392   71333 provision.go:177] copyRemoteCerts
	I0704 00:32:27.020484   71333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:32:27.020514   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHHostname
	I0704 00:32:27.023417   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:27.023743   71333 main.go:141] libmachine: (calico-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:98:4d", ip: ""} in network mk-calico-676605: {Iface:virbr4 ExpiryTime:2024-07-04 01:32:17 +0000 UTC Type:0 Mac:52:54:00:3d:98:4d Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:calico-676605 Clientid:01:52:54:00:3d:98:4d}
	I0704 00:32:27.023776   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined IP address 192.168.72.62 and MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:27.023966   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHPort
	I0704 00:32:27.024201   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHKeyPath
	I0704 00:32:27.024361   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHUsername
	I0704 00:32:27.024517   71333 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/calico-676605/id_rsa Username:docker}
	I0704 00:32:27.107928   71333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:32:27.137057   71333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0704 00:32:27.176824   71333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0704 00:32:27.207164   71333 provision.go:87] duration metric: took 493.205429ms to configureAuth
	I0704 00:32:27.207202   71333 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:32:27.207431   71333 config.go:182] Loaded profile config "calico-676605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:32:27.207554   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHHostname
	I0704 00:32:27.210716   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:27.211096   71333 main.go:141] libmachine: (calico-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:98:4d", ip: ""} in network mk-calico-676605: {Iface:virbr4 ExpiryTime:2024-07-04 01:32:17 +0000 UTC Type:0 Mac:52:54:00:3d:98:4d Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:calico-676605 Clientid:01:52:54:00:3d:98:4d}
	I0704 00:32:27.211120   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined IP address 192.168.72.62 and MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:27.211383   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHPort
	I0704 00:32:27.211629   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHKeyPath
	I0704 00:32:27.211852   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHKeyPath
	I0704 00:32:27.212078   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHUsername
	I0704 00:32:27.212298   71333 main.go:141] libmachine: Using SSH client type: native
	I0704 00:32:27.212518   71333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0704 00:32:27.212541   71333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:32:27.513268   71333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:32:27.513294   71333 main.go:141] libmachine: Checking connection to Docker...
	I0704 00:32:27.513304   71333 main.go:141] libmachine: (calico-676605) Calling .GetURL
	I0704 00:32:27.514648   71333 main.go:141] libmachine: (calico-676605) DBG | Using libvirt version 6000000
	I0704 00:32:27.517288   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:27.517702   71333 main.go:141] libmachine: (calico-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:98:4d", ip: ""} in network mk-calico-676605: {Iface:virbr4 ExpiryTime:2024-07-04 01:32:17 +0000 UTC Type:0 Mac:52:54:00:3d:98:4d Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:calico-676605 Clientid:01:52:54:00:3d:98:4d}
	I0704 00:32:27.517732   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined IP address 192.168.72.62 and MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:27.517961   71333 main.go:141] libmachine: Docker is up and running!
	I0704 00:32:27.517990   71333 main.go:141] libmachine: Reticulating splines...
	I0704 00:32:27.517998   71333 client.go:171] duration metric: took 25.707848899s to LocalClient.Create
	I0704 00:32:27.518029   71333 start.go:167] duration metric: took 25.707928405s to libmachine.API.Create "calico-676605"
	I0704 00:32:27.518042   71333 start.go:293] postStartSetup for "calico-676605" (driver="kvm2")
	I0704 00:32:27.518054   71333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:32:27.518078   71333 main.go:141] libmachine: (calico-676605) Calling .DriverName
	I0704 00:32:27.518435   71333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:32:27.518471   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHHostname
	I0704 00:32:27.521101   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:27.521514   71333 main.go:141] libmachine: (calico-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:98:4d", ip: ""} in network mk-calico-676605: {Iface:virbr4 ExpiryTime:2024-07-04 01:32:17 +0000 UTC Type:0 Mac:52:54:00:3d:98:4d Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:calico-676605 Clientid:01:52:54:00:3d:98:4d}
	I0704 00:32:27.521560   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined IP address 192.168.72.62 and MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:27.521738   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHPort
	I0704 00:32:27.521948   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHKeyPath
	I0704 00:32:27.522130   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHUsername
	I0704 00:32:27.522300   71333 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/calico-676605/id_rsa Username:docker}
	I0704 00:32:27.610233   71333 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:32:27.615313   71333 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:32:27.615341   71333 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:32:27.615398   71333 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:32:27.615521   71333 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:32:27.615659   71333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:32:27.627192   71333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:32:27.656182   71333 start.go:296] duration metric: took 138.126392ms for postStartSetup
	I0704 00:32:27.656244   71333 main.go:141] libmachine: (calico-676605) Calling .GetConfigRaw
	I0704 00:32:27.656908   71333 main.go:141] libmachine: (calico-676605) Calling .GetIP
	I0704 00:32:27.659600   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:27.660009   71333 main.go:141] libmachine: (calico-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:98:4d", ip: ""} in network mk-calico-676605: {Iface:virbr4 ExpiryTime:2024-07-04 01:32:17 +0000 UTC Type:0 Mac:52:54:00:3d:98:4d Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:calico-676605 Clientid:01:52:54:00:3d:98:4d}
	I0704 00:32:27.660035   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined IP address 192.168.72.62 and MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:27.660346   71333 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/config.json ...
	I0704 00:32:27.660555   71333 start.go:128] duration metric: took 25.874444388s to createHost
	I0704 00:32:27.660577   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHHostname
	I0704 00:32:27.662959   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:27.663315   71333 main.go:141] libmachine: (calico-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:98:4d", ip: ""} in network mk-calico-676605: {Iface:virbr4 ExpiryTime:2024-07-04 01:32:17 +0000 UTC Type:0 Mac:52:54:00:3d:98:4d Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:calico-676605 Clientid:01:52:54:00:3d:98:4d}
	I0704 00:32:27.663347   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined IP address 192.168.72.62 and MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:27.663516   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHPort
	I0704 00:32:27.663704   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHKeyPath
	I0704 00:32:27.663865   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHKeyPath
	I0704 00:32:27.664014   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHUsername
	I0704 00:32:27.664165   71333 main.go:141] libmachine: Using SSH client type: native
	I0704 00:32:27.664365   71333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0704 00:32:27.664386   71333 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:32:27.777491   71333 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720053147.723488284
	
	I0704 00:32:27.777516   71333 fix.go:216] guest clock: 1720053147.723488284
	I0704 00:32:27.777525   71333 fix.go:229] Guest: 2024-07-04 00:32:27.723488284 +0000 UTC Remote: 2024-07-04 00:32:27.660567146 +0000 UTC m=+45.063874581 (delta=62.921138ms)
	I0704 00:32:27.777574   71333 fix.go:200] guest clock delta is within tolerance: 62.921138ms
	I0704 00:32:27.777581   71333 start.go:83] releasing machines lock for "calico-676605", held for 25.991863874s
	I0704 00:32:27.777606   71333 main.go:141] libmachine: (calico-676605) Calling .DriverName
	I0704 00:32:27.777882   71333 main.go:141] libmachine: (calico-676605) Calling .GetIP
	I0704 00:32:27.780857   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:27.781252   71333 main.go:141] libmachine: (calico-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:98:4d", ip: ""} in network mk-calico-676605: {Iface:virbr4 ExpiryTime:2024-07-04 01:32:17 +0000 UTC Type:0 Mac:52:54:00:3d:98:4d Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:calico-676605 Clientid:01:52:54:00:3d:98:4d}
	I0704 00:32:27.781286   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined IP address 192.168.72.62 and MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:27.781532   71333 main.go:141] libmachine: (calico-676605) Calling .DriverName
	I0704 00:32:27.782153   71333 main.go:141] libmachine: (calico-676605) Calling .DriverName
	I0704 00:32:27.782411   71333 main.go:141] libmachine: (calico-676605) Calling .DriverName
	I0704 00:32:27.782493   71333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:32:27.782543   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHHostname
	I0704 00:32:27.782648   71333 ssh_runner.go:195] Run: cat /version.json
	I0704 00:32:27.782667   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHHostname
	I0704 00:32:27.785689   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:27.785921   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:27.786093   71333 main.go:141] libmachine: (calico-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:98:4d", ip: ""} in network mk-calico-676605: {Iface:virbr4 ExpiryTime:2024-07-04 01:32:17 +0000 UTC Type:0 Mac:52:54:00:3d:98:4d Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:calico-676605 Clientid:01:52:54:00:3d:98:4d}
	I0704 00:32:27.786121   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined IP address 192.168.72.62 and MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:27.786336   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHPort
	I0704 00:32:27.786536   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHKeyPath
	I0704 00:32:27.786573   71333 main.go:141] libmachine: (calico-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:98:4d", ip: ""} in network mk-calico-676605: {Iface:virbr4 ExpiryTime:2024-07-04 01:32:17 +0000 UTC Type:0 Mac:52:54:00:3d:98:4d Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:calico-676605 Clientid:01:52:54:00:3d:98:4d}
	I0704 00:32:27.786600   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined IP address 192.168.72.62 and MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:27.786702   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHUsername
	I0704 00:32:27.786760   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHPort
	I0704 00:32:27.786864   71333 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/calico-676605/id_rsa Username:docker}
	I0704 00:32:27.787271   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHKeyPath
	I0704 00:32:27.787426   71333 main.go:141] libmachine: (calico-676605) Calling .GetSSHUsername
	I0704 00:32:27.787575   71333 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/calico-676605/id_rsa Username:docker}
	I0704 00:32:27.892618   71333 ssh_runner.go:195] Run: systemctl --version
	I0704 00:32:27.899464   71333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:32:28.070892   71333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:32:28.077394   71333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:32:28.077485   71333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:32:28.097803   71333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:32:28.097839   71333 start.go:494] detecting cgroup driver to use...
	I0704 00:32:28.097907   71333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:32:28.116566   71333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:32:28.133302   71333 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:32:28.133367   71333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:32:28.150295   71333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:32:28.167143   71333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:32:28.297453   71333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:32:28.479107   71333 docker.go:233] disabling docker service ...
	I0704 00:32:28.479187   71333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:32:28.495699   71333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:32:28.509915   71333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:32:28.651869   71333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:32:28.788887   71333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:32:28.805739   71333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:32:28.828035   71333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:32:28.828095   71333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:32:28.841802   71333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:32:28.841881   71333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:32:28.855647   71333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:32:28.868490   71333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:32:28.880616   71333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:32:28.894303   71333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:32:28.906053   71333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:32:28.926130   71333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:32:28.940339   71333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:32:28.952519   71333 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:32:28.952587   71333 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:32:28.968732   71333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:32:28.981020   71333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:32:29.101936   71333 ssh_runner.go:195] Run: sudo systemctl restart crio
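A minimal sketch, not part of the captured output, assuming the same paths the log uses: the sed and sysctl steps above configure CRI-O for the cgroupfs cgroup driver, the pause:3.9 image, and unprivileged low ports, and the result could be spot-checked on the node with something like:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# the earlier sysctl probe failed (status 255) until br_netfilter was loaded via modprobe
	sysctl net.bridge.bridge-nf-call-iptables
	cat /proc/sys/net/ipv4/ip_forward   # expected to print 1 after the echo above
	systemctl is-active crio            # expected to report "active" after the restart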
	I0704 00:32:29.266523   71333 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:32:29.266601   71333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:32:29.272088   71333 start.go:562] Will wait 60s for crictl version
	I0704 00:32:29.272145   71333 ssh_runner.go:195] Run: which crictl
	I0704 00:32:29.276214   71333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:32:29.326248   71333 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:32:29.326333   71333 ssh_runner.go:195] Run: crio --version
	I0704 00:32:29.360511   71333 ssh_runner.go:195] Run: crio --version
	I0704 00:32:29.401336   71333 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:32:26.126748   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:26.627555   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:27.127567   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:27.627061   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:28.127417   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:28.627394   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:29.127532   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:29.626908   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:30.127481   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:30.626698   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:29.402941   71333 main.go:141] libmachine: (calico-676605) Calling .GetIP
	I0704 00:32:29.405661   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:29.406080   71333 main.go:141] libmachine: (calico-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:98:4d", ip: ""} in network mk-calico-676605: {Iface:virbr4 ExpiryTime:2024-07-04 01:32:17 +0000 UTC Type:0 Mac:52:54:00:3d:98:4d Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:calico-676605 Clientid:01:52:54:00:3d:98:4d}
	I0704 00:32:29.406108   71333 main.go:141] libmachine: (calico-676605) DBG | domain calico-676605 has defined IP address 192.168.72.62 and MAC address 52:54:00:3d:98:4d in network mk-calico-676605
	I0704 00:32:29.406360   71333 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0704 00:32:29.411347   71333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:32:29.426604   71333 kubeadm.go:877] updating cluster {Name:calico-676605 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-676605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:32:29.426706   71333 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:32:29.426748   71333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:32:29.461348   71333 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:32:29.461437   71333 ssh_runner.go:195] Run: which lz4
	I0704 00:32:29.466014   71333 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0704 00:32:29.471094   71333 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:32:29.471136   71333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0704 00:32:31.123990   71333 crio.go:462] duration metric: took 1.658027646s to copy over tarball
	I0704 00:32:31.124072   71333 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:32:31.127600   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:31.627285   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:32.127218   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:32.627419   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:33.127263   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:33.628031   70815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:32:33.768310   70815 kubeadm.go:1107] duration metric: took 12.366348182s to wait for elevateKubeSystemPrivileges
	W0704 00:32:33.768354   70815 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0704 00:32:33.768363   70815 kubeadm.go:393] duration metric: took 24.574259402s to StartCluster
	I0704 00:32:33.768386   70815 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:32:33.768458   70815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:32:33.770131   70815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:32:33.770453   70815 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:32:33.770774   70815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0704 00:32:33.770938   70815 config.go:182] Loaded profile config "kindnet-676605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:32:33.770904   70815 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:32:33.770962   70815 addons.go:69] Setting storage-provisioner=true in profile "kindnet-676605"
	I0704 00:32:33.770972   70815 addons.go:69] Setting default-storageclass=true in profile "kindnet-676605"
	I0704 00:32:33.771005   70815 addons.go:234] Setting addon storage-provisioner=true in "kindnet-676605"
	I0704 00:32:33.771005   70815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-676605"
	I0704 00:32:33.771036   70815 host.go:66] Checking if "kindnet-676605" exists ...
	I0704 00:32:33.771469   70815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:32:33.771488   70815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:32:33.771520   70815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:32:33.771542   70815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:32:33.772494   70815 out.go:177] * Verifying Kubernetes components...
	I0704 00:32:33.773865   70815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:32:33.794071   70815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
	I0704 00:32:33.794756   70815 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:32:33.795364   70815 main.go:141] libmachine: Using API Version  1
	I0704 00:32:33.795385   70815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:32:33.795732   70815 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:32:33.795953   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetState
	I0704 00:32:33.798521   70815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33011
	I0704 00:32:33.799047   70815 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:32:33.799640   70815 main.go:141] libmachine: Using API Version  1
	I0704 00:32:33.799663   70815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:32:33.800045   70815 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:32:33.800776   70815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:32:33.800816   70815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:32:33.803669   70815 addons.go:234] Setting addon default-storageclass=true in "kindnet-676605"
	I0704 00:32:33.803718   70815 host.go:66] Checking if "kindnet-676605" exists ...
	I0704 00:32:33.804870   70815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:32:33.804917   70815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:32:33.823520   70815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38135
	I0704 00:32:33.824043   70815 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:32:33.825377   70815 main.go:141] libmachine: Using API Version  1
	I0704 00:32:33.825395   70815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:32:33.825820   70815 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:32:33.826181   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetState
	I0704 00:32:33.828662   70815 main.go:141] libmachine: (kindnet-676605) Calling .DriverName
	I0704 00:32:33.830987   70815 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:32:33.833769   70815 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:32:33.833797   70815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:32:33.833823   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHHostname
	I0704 00:32:33.838101   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:33.838628   70815 main.go:141] libmachine: (kindnet-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c7:f6", ip: ""} in network mk-kindnet-676605: {Iface:virbr1 ExpiryTime:2024-07-04 01:31:51 +0000 UTC Type:0 Mac:52:54:00:b0:c7:f6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:kindnet-676605 Clientid:01:52:54:00:b0:c7:f6}
	I0704 00:32:33.838656   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:33.838862   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHPort
	I0704 00:32:33.839013   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHKeyPath
	I0704 00:32:33.839143   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHUsername
	I0704 00:32:33.839279   70815 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/kindnet-676605/id_rsa Username:docker}
	I0704 00:32:33.841461   70815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41157
	I0704 00:32:33.842011   70815 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:32:33.842614   70815 main.go:141] libmachine: Using API Version  1
	I0704 00:32:33.842636   70815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:32:33.843045   70815 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:32:33.843691   70815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:32:33.843736   70815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:32:33.863064   70815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I0704 00:32:33.863600   70815 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:32:33.864343   70815 main.go:141] libmachine: Using API Version  1
	I0704 00:32:33.864370   70815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:32:33.864755   70815 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:32:33.864988   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetState
	I0704 00:32:33.867128   70815 main.go:141] libmachine: (kindnet-676605) Calling .DriverName
	I0704 00:32:33.867451   70815 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:32:33.867475   70815 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:32:33.867496   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHHostname
	I0704 00:32:33.871415   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:33.872042   70815 main.go:141] libmachine: (kindnet-676605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:c7:f6", ip: ""} in network mk-kindnet-676605: {Iface:virbr1 ExpiryTime:2024-07-04 01:31:51 +0000 UTC Type:0 Mac:52:54:00:b0:c7:f6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:kindnet-676605 Clientid:01:52:54:00:b0:c7:f6}
	I0704 00:32:33.872070   70815 main.go:141] libmachine: (kindnet-676605) DBG | domain kindnet-676605 has defined IP address 192.168.39.227 and MAC address 52:54:00:b0:c7:f6 in network mk-kindnet-676605
	I0704 00:32:33.872342   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHPort
	I0704 00:32:33.872942   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHKeyPath
	I0704 00:32:33.873171   70815 main.go:141] libmachine: (kindnet-676605) Calling .GetSSHUsername
	I0704 00:32:33.873372   70815 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/kindnet-676605/id_rsa Username:docker}
	I0704 00:32:34.119007   70815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0704 00:32:34.153716   70815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:32:34.387483   70815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:32:34.429974   70815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:32:34.621494   70815 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
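A minimal sketch, not part of the captured output, assuming the kindnet-676605 cluster from this log: the sed pipeline above injects a hosts block into the coredns ConfigMap so that host.minikube.internal resolves to 192.168.39.1, and the injected record could be inspected with:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# expected to contain a block of the form:
	#   hosts {
	#      192.168.39.1 host.minikube.internal
	#      fallthrough
	#   }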
	I0704 00:32:34.622890   70815 node_ready.go:35] waiting up to 15m0s for node "kindnet-676605" to be "Ready" ...
	I0704 00:32:36.988014   70815 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-676605" context rescaled to 1 replicas
	I0704 00:32:36.997204   70815 node_ready.go:53] node "kindnet-676605" has status "Ready":"False"
	I0704 00:32:37.145112   70815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.757590716s)
	I0704 00:32:37.145152   70815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.715146016s)
	I0704 00:32:37.145176   70815 main.go:141] libmachine: Making call to close driver server
	I0704 00:32:37.145192   70815 main.go:141] libmachine: (kindnet-676605) Calling .Close
	I0704 00:32:37.145194   70815 main.go:141] libmachine: Making call to close driver server
	I0704 00:32:37.145212   70815 main.go:141] libmachine: (kindnet-676605) Calling .Close
	I0704 00:32:37.145555   70815 main.go:141] libmachine: (kindnet-676605) DBG | Closing plugin on server side
	I0704 00:32:37.145555   70815 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:32:37.145576   70815 main.go:141] libmachine: (kindnet-676605) DBG | Closing plugin on server side
	I0704 00:32:37.145579   70815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:32:37.145588   70815 main.go:141] libmachine: Making call to close driver server
	I0704 00:32:37.145596   70815 main.go:141] libmachine: (kindnet-676605) Calling .Close
	I0704 00:32:37.145598   70815 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:32:37.145606   70815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:32:37.145618   70815 main.go:141] libmachine: Making call to close driver server
	I0704 00:32:37.145627   70815 main.go:141] libmachine: (kindnet-676605) Calling .Close
	I0704 00:32:37.145794   70815 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:32:37.145804   70815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:32:37.147045   70815 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:32:37.147062   70815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:32:37.159549   70815 main.go:141] libmachine: Making call to close driver server
	I0704 00:32:37.159586   70815 main.go:141] libmachine: (kindnet-676605) Calling .Close
	I0704 00:32:37.159905   70815 main.go:141] libmachine: (kindnet-676605) DBG | Closing plugin on server side
	I0704 00:32:37.159944   70815 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:32:37.159951   70815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:32:37.161537   70815 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0704 00:32:34.158893   71333 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.034789927s)
	I0704 00:32:34.158921   71333 crio.go:469] duration metric: took 3.03489976s to extract the tarball
	I0704 00:32:34.158930   71333 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:32:34.207696   71333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:32:34.269109   71333 crio.go:514] all images are preloaded for cri-o runtime.
	I0704 00:32:34.269136   71333 cache_images.go:84] Images are preloaded, skipping loading
	I0704 00:32:34.269146   71333 kubeadm.go:928] updating node { 192.168.72.62 8443 v1.30.2 crio true true} ...
	I0704 00:32:34.269271   71333 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-676605 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:calico-676605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
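A brief sketch, not part of the captured output: the kubelet unit drop-in rendered above is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf later in the log, so the effective unit on the node could be reviewed and (re)started with:

	sudo systemctl cat kubelet
	sudo systemctl daemon-reload
	sudo systemctl start kubelet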
	I0704 00:32:34.269347   71333 ssh_runner.go:195] Run: crio config
	I0704 00:32:34.342831   71333 cni.go:84] Creating CNI manager for "calico"
	I0704 00:32:34.342859   71333 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:32:34.342891   71333 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.62 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-676605 NodeName:calico-676605 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:32:34.343077   71333 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-676605"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:32:34.343156   71333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:32:34.356644   71333 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:32:34.356727   71333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:32:34.373625   71333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0704 00:32:34.405287   71333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:32:34.429549   71333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
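A minimal sketch, not part of the captured output and only an assumption about the step the log is building toward: the kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new, and a control-plane bootstrap from such a file would look roughly like:

	sudo /var/lib/minikube/binaries/v1.30.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new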
	I0704 00:32:34.458126   71333 ssh_runner.go:195] Run: grep 192.168.72.62	control-plane.minikube.internal$ /etc/hosts
	I0704 00:32:34.463828   71333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:32:34.479929   71333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:32:34.640113   71333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:32:34.660412   71333 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605 for IP: 192.168.72.62
	I0704 00:32:34.660442   71333 certs.go:194] generating shared ca certs ...
	I0704 00:32:34.660462   71333 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:32:34.660644   71333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:32:34.660713   71333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:32:34.660728   71333 certs.go:256] generating profile certs ...
	I0704 00:32:34.660796   71333 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/client.key
	I0704 00:32:34.660817   71333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/client.crt with IP's: []
	I0704 00:32:34.890971   71333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/client.crt ...
	I0704 00:32:34.891007   71333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/client.crt: {Name:mk4484af135337aa15eddfa03d46a2bc3e00c222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:32:34.995395   71333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/client.key ...
	I0704 00:32:34.995473   71333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/client.key: {Name:mkc5d860136153107b65f7f0128065dca650c182 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:32:34.995689   71333 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/apiserver.key.bd83ebc8
	I0704 00:32:34.995715   71333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/apiserver.crt.bd83ebc8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.62]
	I0704 00:32:35.204602   71333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/apiserver.crt.bd83ebc8 ...
	I0704 00:32:35.204631   71333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/apiserver.crt.bd83ebc8: {Name:mk93b367ec745881e527268233eda1eea9a1a7cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:32:35.268677   71333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/apiserver.key.bd83ebc8 ...
	I0704 00:32:35.268711   71333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/apiserver.key.bd83ebc8: {Name:mkd28b32dac485a2abeb00324643757063948751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:32:35.268848   71333 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/apiserver.crt.bd83ebc8 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/apiserver.crt
	I0704 00:32:35.268946   71333 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/apiserver.key.bd83ebc8 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/apiserver.key
	I0704 00:32:35.269003   71333 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/proxy-client.key
	I0704 00:32:35.269020   71333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/proxy-client.crt with IP's: []
	I0704 00:32:35.357728   71333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/proxy-client.crt ...
	I0704 00:32:35.357761   71333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/proxy-client.crt: {Name:mk6e064a3d445152923530d46ac977f3d2260ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:32:35.357947   71333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/proxy-client.key ...
	I0704 00:32:35.357965   71333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/proxy-client.key: {Name:mk87b664d426b218cf89f39d8e5bc070483d92be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:32:35.358209   71333 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:32:35.358251   71333 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:32:35.358261   71333 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:32:35.358286   71333 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:32:35.358309   71333 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:32:35.358330   71333 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:32:35.358366   71333 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:32:35.358992   71333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:32:35.407733   71333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:32:35.449536   71333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:32:35.492086   71333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:32:35.522263   71333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0704 00:32:35.550607   71333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0704 00:32:35.579223   71333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:32:35.607867   71333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/calico-676605/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0704 00:32:35.648920   71333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:32:35.680842   71333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:32:35.714518   71333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:32:35.746066   71333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:32:35.768118   71333 ssh_runner.go:195] Run: openssl version
	I0704 00:32:35.775478   71333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:32:35.789172   71333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:32:35.796348   71333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:32:35.796420   71333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:32:35.805483   71333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:32:35.822983   71333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:32:35.836030   71333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:32:35.841787   71333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:32:35.841896   71333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:32:35.848907   71333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:32:35.863686   71333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:32:35.877306   71333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:32:35.883021   71333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:32:35.883078   71333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:32:35.889981   71333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
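The three hash/symlink cycles above are how the copied PEMs get installed into the guest's OpenSSL trust store: each certificate is hashed with `openssl x509 -hash -noout -in <pem>` and then symlinked as `<hash>.0` under /etc/ssl/certs. A minimal local sketch of that convention follows; it is illustrative only and not minikube source (minikube runs the equivalent commands over SSH via ssh_runner), and the installCACert helper name and example paths are assumptions.

	// installcacert_sketch.go - hedged illustration of the trust-store step in the log above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert links an already-copied PEM into the OpenSSL trust directory
	// under its subject-hash name, mirroring "openssl x509 -hash -noout -in ..."
	// followed by "ln -fs ..." in the log.
	func installCACert(pemPath, trustDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(trustDir, hash+".0") // e.g. /etc/ssl/certs/b5213941.0
		_ = os.Remove(link)                        // emulate the forced ln -fs
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}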
	I0704 00:32:35.903895   71333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:32:35.909471   71333 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0704 00:32:35.909546   71333 kubeadm.go:391] StartCluster: {Name:calico-676605 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:calico-676605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:32:35.909630   71333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:32:35.909706   71333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:32:35.955895   71333 cri.go:89] found id: ""
	I0704 00:32:35.955979   71333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0704 00:32:35.968011   71333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:32:35.979708   71333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:32:35.992650   71333 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:32:35.992671   71333 kubeadm.go:156] found existing configuration files:
	
	I0704 00:32:35.992725   71333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:32:36.003841   71333 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:32:36.003933   71333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:32:36.016869   71333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:32:36.030226   71333 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:32:36.030306   71333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:32:36.042286   71333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:32:36.054637   71333 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:32:36.054707   71333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:32:36.066669   71333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:32:36.081883   71333 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:32:36.081949   71333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
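The grep/rm pairs above are the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; otherwise it is removed so the kubeadm init that follows can regenerate it (here none of the files exist yet, so every grep fails and every file is force-removed). A hedged local sketch of that loop is below; it is not minikube source, and the file list and endpoint constant are taken from the log while the structure is an assumption.

	// staleconfig_sketch.go - hedged illustration of the cleanup loop shown above.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	func main() {
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, path := range confs {
			data, err := os.ReadFile(path)
			if err != nil || !bytes.Contains(data, []byte(endpoint)) {
				// Missing file or wrong endpoint: remove it so kubeadm regenerates it.
				_ = os.Remove(path)
				fmt.Printf("removed stale config %s\n", path)
			}
		}
	}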
	I0704 00:32:36.095160   71333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:32:36.358859   71333 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.330896205Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b5aa2a4c02f49e31cfbcb984b3ebc151daa586fd95a1fc6960aa3edae5aea428,Metadata:&PodSandboxMetadata{Name:busybox,Uid:b335daca-ead5-42d5-96a3-245d38bd2d1a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051881145720459,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b335daca-ead5-42d5-96a3-245d38bd2d1a,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-04T00:11:13.250071018Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:92c3127464abdb66dbe78c45730fef3a6abd84ef59f3692ad1b8f6ba20def236,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-jmq4s,Uid:f9725f92-7635-4111-bf63-66dbef0155b2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:172005
1881142719643,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-jmq4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9725f92-7635-4111-bf63-66dbef0155b2,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-04T00:11:13.250072527Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:337b7ab9195774a213d82a06c320f8a973866c1e5672285f4319b7b4fe8f5987,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-v8qw2,Uid:d6a67fb7-5004-4c93-9023-fc470f786ae9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051879336731359,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-v8qw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a67fb7-5004-4c93-9023-fc470f786ae9,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-04
T00:11:13.250060317Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8a08aafe4e1f0db59f7c040ebb0518bd9d0742348c2f42c2819369733436eeab,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3adc3ff6-282f-4f53-879f-c73d71c76b74,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051873571365042,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adc3ff6-282f-4f53-879f-c73d71c76b74,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"g
cr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-04T00:11:13.250068866Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:53881affa9536be73514c9358585bfd7874283cf7c29c1cf7485180ea6e7f15c,Metadata:&PodSandboxMetadata{Name:kube-proxy-pplqq,Uid:3b74a8c2-1e91-449d-9be9-8891459dccbc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051873569436568,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pplqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b74a8c2-1e91-449d-9be9-8891459dccbc,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{ku
bernetes.io/config.seen: 2024-07-04T00:11:13.250073887Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:81234ffa8223556d0f25bb5ad0a6f4e8e9778d838408b74b738c6b3018ecf93b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-995404,Uid:eccd3511daaf18b1d48cae4d95632212,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051868782718855,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccd3511daaf18b1d48cae4d95632212,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: eccd3511daaf18b1d48cae4d95632212,kubernetes.io/config.seen: 2024-07-04T00:11:08.255483074Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:69ef20a0f5c69f72498d52dd5d07bbf205d42ea936f853ca20682d4ebcfbeb41,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-995404,Uid:
0d1f278de836ff491a91e8c80936294a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051868773795779,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1f278de836ff491a91e8c80936294a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.164:8444,kubernetes.io/config.hash: 0d1f278de836ff491a91e8c80936294a,kubernetes.io/config.seen: 2024-07-04T00:11:08.255476949Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:38443a29b6ba0e18c77feff5fffee3aa53c03f3b01fd1fe4147ea71583a8e520,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-995404,Uid:f74b1039c6d802b380d3b54865ba5da9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051868766053063,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.k
ubernetes.pod.name: etcd-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74b1039c6d802b380d3b54865ba5da9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.164:2379,kubernetes.io/config.hash: f74b1039c6d802b380d3b54865ba5da9,kubernetes.io/config.seen: 2024-07-04T00:11:08.296830252Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3f9060ae8aaf63933efd5048005576b9117160e64e8bb78c96d688667dca8060,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-995404,Uid:21f4f5d0c28792012b764ca566c3a613,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720051868760998713,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f4f5d0c28792012b764ca566c3a613,tier: control-p
lane,},Annotations:map[string]string{kubernetes.io/config.hash: 21f4f5d0c28792012b764ca566c3a613,kubernetes.io/config.seen: 2024-07-04T00:11:08.255481905Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ba1e9b98-62ac-4b53-aff7-f9476326ab92 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.332012310Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21dd7636-b962-43c9-bb63-2b202a3f9019 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.332153443Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21dd7636-b962-43c9-bb63-2b202a3f9019 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.332407537Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a,PodSandboxId:8a08aafe4e1f0db59f7c040ebb0518bd9d0742348c2f42c2819369733436eeab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720051904563717392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adc3ff6-282f-4f53-879f-c73d71c76b74,},Annotations:map[string]string{io.kubernetes.container.hash: 6198682d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d37d3ca0beb9fed8acfd33e6b37d6cf0e5febdf274ca3821f3fce785f41e74b,PodSandboxId:b5aa2a4c02f49e31cfbcb984b3ebc151daa586fd95a1fc6960aa3edae5aea428,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720051884382068388,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b335daca-ead5-42d5-96a3-245d38bd2d1a,},Annotations:map[string]string{io.kubernetes.container.hash: f2e926c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a,PodSandboxId:92c3127464abdb66dbe78c45730fef3a6abd84ef59f3692ad1b8f6ba20def236,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051881474647205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jmq4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9725f92-7635-4111-bf63-66dbef0155b2,},Annotations:map[string]string{io.kubernetes.container.hash: 1b9298ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d,PodSandboxId:53881affa9536be73514c9358585bfd7874283cf7c29c1cf7485180ea6e7f15c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051873712935930,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pplqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b74a8c2-1
e91-449d-9be9-8891459dccbc,},Annotations:map[string]string{io.kubernetes.container.hash: 975ff7d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8,PodSandboxId:81234ffa8223556d0f25bb5ad0a6f4e8e9778d838408b74b738c6b3018ecf93b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051869083460209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccd3511
daaf18b1d48cae4d95632212,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658,PodSandboxId:69ef20a0f5c69f72498d52dd5d07bbf205d42ea936f853ca20682d4ebcfbeb41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051869078377436,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1f278de83
6ff491a91e8c80936294a,},Annotations:map[string]string{io.kubernetes.container.hash: c3c43c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9,PodSandboxId:38443a29b6ba0e18c77feff5fffee3aa53c03f3b01fd1fe4147ea71583a8e520,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051868974216877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74b1039c6d802b380d3b54865ba5da9,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 73cfec4a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e,PodSandboxId:3f9060ae8aaf63933efd5048005576b9117160e64e8bb78c96d688667dca8060,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051868999971961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f4f5d0c28792012b
764ca566c3a613,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21dd7636-b962-43c9-bb63-2b202a3f9019 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.339074304Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a79f75b-dd2d-4155-aa96-46a4dfc8457c name=/runtime.v1.RuntimeService/Version
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.339724377Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a79f75b-dd2d-4155-aa96-46a4dfc8457c name=/runtime.v1.RuntimeService/Version
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.341415560Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9e20624-fcc4-4323-87c3-0d6386f16e60 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.341793008Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720053160341770977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9e20624-fcc4-4323-87c3-0d6386f16e60 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.342599122Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8b0d467-5082-4575-a746-ff38499507cb name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.342653601Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8b0d467-5082-4575-a746-ff38499507cb name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.342857125Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a,PodSandboxId:8a08aafe4e1f0db59f7c040ebb0518bd9d0742348c2f42c2819369733436eeab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720051904563717392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adc3ff6-282f-4f53-879f-c73d71c76b74,},Annotations:map[string]string{io.kubernetes.container.hash: 6198682d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d37d3ca0beb9fed8acfd33e6b37d6cf0e5febdf274ca3821f3fce785f41e74b,PodSandboxId:b5aa2a4c02f49e31cfbcb984b3ebc151daa586fd95a1fc6960aa3edae5aea428,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720051884382068388,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b335daca-ead5-42d5-96a3-245d38bd2d1a,},Annotations:map[string]string{io.kubernetes.container.hash: f2e926c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a,PodSandboxId:92c3127464abdb66dbe78c45730fef3a6abd84ef59f3692ad1b8f6ba20def236,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051881474647205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jmq4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9725f92-7635-4111-bf63-66dbef0155b2,},Annotations:map[string]string{io.kubernetes.container.hash: 1b9298ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2,PodSandboxId:8a08aafe4e1f0db59f7c040ebb0518bd9d0742348c2f42c2819369733436eeab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720051873798818647,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3adc3ff6-282f-4f53-879f-c73d71c76b74,},Annotations:map[string]string{io.kubernetes.container.hash: 6198682d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d,PodSandboxId:53881affa9536be73514c9358585bfd7874283cf7c29c1cf7485180ea6e7f15c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051873712935930,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pplqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b74a8c2-1e91-449d-9be9
-8891459dccbc,},Annotations:map[string]string{io.kubernetes.container.hash: 975ff7d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8,PodSandboxId:81234ffa8223556d0f25bb5ad0a6f4e8e9778d838408b74b738c6b3018ecf93b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051869083460209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccd3511daaf18b1d48ca
e4d95632212,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658,PodSandboxId:69ef20a0f5c69f72498d52dd5d07bbf205d42ea936f853ca20682d4ebcfbeb41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051869078377436,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1f278de836ff491a91e8c8
0936294a,},Annotations:map[string]string{io.kubernetes.container.hash: c3c43c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9,PodSandboxId:38443a29b6ba0e18c77feff5fffee3aa53c03f3b01fd1fe4147ea71583a8e520,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051868974216877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74b1039c6d802b380d3b54865ba5da9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 73cfec4a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e,PodSandboxId:3f9060ae8aaf63933efd5048005576b9117160e64e8bb78c96d688667dca8060,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051868999971961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f4f5d0c28792012b764ca566c3a61
3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8b0d467-5082-4575-a746-ff38499507cb name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.399269480Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a44e69e0-7ead-4b90-b389-203ee37f7c51 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.399357738Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a44e69e0-7ead-4b90-b389-203ee37f7c51 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.403842233Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af843759-e9aa-4ee1-88f7-dd4ffe9e5d09 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.404343795Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720053160404317182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af843759-e9aa-4ee1-88f7-dd4ffe9e5d09 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.405577326Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=53e07acc-26c7-40b0-839e-3fc0558c08a4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.405660925Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=53e07acc-26c7-40b0-839e-3fc0558c08a4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.406153472Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a,PodSandboxId:8a08aafe4e1f0db59f7c040ebb0518bd9d0742348c2f42c2819369733436eeab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720051904563717392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adc3ff6-282f-4f53-879f-c73d71c76b74,},Annotations:map[string]string{io.kubernetes.container.hash: 6198682d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d37d3ca0beb9fed8acfd33e6b37d6cf0e5febdf274ca3821f3fce785f41e74b,PodSandboxId:b5aa2a4c02f49e31cfbcb984b3ebc151daa586fd95a1fc6960aa3edae5aea428,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720051884382068388,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b335daca-ead5-42d5-96a3-245d38bd2d1a,},Annotations:map[string]string{io.kubernetes.container.hash: f2e926c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a,PodSandboxId:92c3127464abdb66dbe78c45730fef3a6abd84ef59f3692ad1b8f6ba20def236,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051881474647205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jmq4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9725f92-7635-4111-bf63-66dbef0155b2,},Annotations:map[string]string{io.kubernetes.container.hash: 1b9298ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2,PodSandboxId:8a08aafe4e1f0db59f7c040ebb0518bd9d0742348c2f42c2819369733436eeab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720051873798818647,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3adc3ff6-282f-4f53-879f-c73d71c76b74,},Annotations:map[string]string{io.kubernetes.container.hash: 6198682d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d,PodSandboxId:53881affa9536be73514c9358585bfd7874283cf7c29c1cf7485180ea6e7f15c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051873712935930,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pplqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b74a8c2-1e91-449d-9be9
-8891459dccbc,},Annotations:map[string]string{io.kubernetes.container.hash: 975ff7d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8,PodSandboxId:81234ffa8223556d0f25bb5ad0a6f4e8e9778d838408b74b738c6b3018ecf93b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051869083460209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccd3511daaf18b1d48ca
e4d95632212,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658,PodSandboxId:69ef20a0f5c69f72498d52dd5d07bbf205d42ea936f853ca20682d4ebcfbeb41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051869078377436,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1f278de836ff491a91e8c8
0936294a,},Annotations:map[string]string{io.kubernetes.container.hash: c3c43c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9,PodSandboxId:38443a29b6ba0e18c77feff5fffee3aa53c03f3b01fd1fe4147ea71583a8e520,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051868974216877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74b1039c6d802b380d3b54865ba5da9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 73cfec4a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e,PodSandboxId:3f9060ae8aaf63933efd5048005576b9117160e64e8bb78c96d688667dca8060,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051868999971961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f4f5d0c28792012b764ca566c3a61
3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=53e07acc-26c7-40b0-839e-3fc0558c08a4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.450540594Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37a3bec4-c746-4c51-9a09-c0e0eb79f3ea name=/runtime.v1.RuntimeService/Version
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.450701580Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37a3bec4-c746-4c51-9a09-c0e0eb79f3ea name=/runtime.v1.RuntimeService/Version
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.453050337Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5e0f4ea-8fa3-420c-bdbd-9d1568684d83 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.454202454Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720053160454062881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5e0f4ea-8fa3-420c-bdbd-9d1568684d83 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.455962241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17a4205c-bb9e-4fc2-b121-b2b75d97e535 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.456053522Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17a4205c-bb9e-4fc2-b121-b2b75d97e535 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:32:40 default-k8s-diff-port-995404 crio[731]: time="2024-07-04 00:32:40.456408057Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a,PodSandboxId:8a08aafe4e1f0db59f7c040ebb0518bd9d0742348c2f42c2819369733436eeab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720051904563717392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adc3ff6-282f-4f53-879f-c73d71c76b74,},Annotations:map[string]string{io.kubernetes.container.hash: 6198682d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d37d3ca0beb9fed8acfd33e6b37d6cf0e5febdf274ca3821f3fce785f41e74b,PodSandboxId:b5aa2a4c02f49e31cfbcb984b3ebc151daa586fd95a1fc6960aa3edae5aea428,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720051884382068388,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b335daca-ead5-42d5-96a3-245d38bd2d1a,},Annotations:map[string]string{io.kubernetes.container.hash: f2e926c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a,PodSandboxId:92c3127464abdb66dbe78c45730fef3a6abd84ef59f3692ad1b8f6ba20def236,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720051881474647205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jmq4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9725f92-7635-4111-bf63-66dbef0155b2,},Annotations:map[string]string{io.kubernetes.container.hash: 1b9298ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2,PodSandboxId:8a08aafe4e1f0db59f7c040ebb0518bd9d0742348c2f42c2819369733436eeab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720051873798818647,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3adc3ff6-282f-4f53-879f-c73d71c76b74,},Annotations:map[string]string{io.kubernetes.container.hash: 6198682d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d,PodSandboxId:53881affa9536be73514c9358585bfd7874283cf7c29c1cf7485180ea6e7f15c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720051873712935930,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pplqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b74a8c2-1e91-449d-9be9
-8891459dccbc,},Annotations:map[string]string{io.kubernetes.container.hash: 975ff7d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8,PodSandboxId:81234ffa8223556d0f25bb5ad0a6f4e8e9778d838408b74b738c6b3018ecf93b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720051869083460209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccd3511daaf18b1d48ca
e4d95632212,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658,PodSandboxId:69ef20a0f5c69f72498d52dd5d07bbf205d42ea936f853ca20682d4ebcfbeb41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720051869078377436,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1f278de836ff491a91e8c8
0936294a,},Annotations:map[string]string{io.kubernetes.container.hash: c3c43c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9,PodSandboxId:38443a29b6ba0e18c77feff5fffee3aa53c03f3b01fd1fe4147ea71583a8e520,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720051868974216877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74b1039c6d802b380d3b54865ba5da9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 73cfec4a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e,PodSandboxId:3f9060ae8aaf63933efd5048005576b9117160e64e8bb78c96d688667dca8060,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720051868999971961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-995404,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f4f5d0c28792012b764ca566c3a61
3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17a4205c-bb9e-4fc2-b121-b2b75d97e535 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	916f2ecfce3c5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   8a08aafe4e1f0       storage-provisioner
	4d37d3ca0beb9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   b5aa2a4c02f49       busybox
	7dc19c0e5a3a0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      21 minutes ago      Running             coredns                   1                   92c3127464abd       coredns-7db6d8ff4d-jmq4s
	ee9747ce58de5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   8a08aafe4e1f0       storage-provisioner
	54ecbdc0a4753       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      21 minutes ago      Running             kube-proxy                1                   53881affa9536       kube-proxy-pplqq
	06f36aa92a09f       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      21 minutes ago      Running             kube-scheduler            1                   81234ffa82235       kube-scheduler-default-k8s-diff-port-995404
	f69caa2d9d0a4       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      21 minutes ago      Running             kube-apiserver            1                   69ef20a0f5c69       kube-apiserver-default-k8s-diff-port-995404
	13a8615c20433       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      21 minutes ago      Running             kube-controller-manager   1                   3f9060ae8aaf6       kube-controller-manager-default-k8s-diff-port-995404
	5629c8085daeb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      21 minutes ago      Running             etcd                      1                   38443a29b6ba0       etcd-default-k8s-diff-port-995404
	
	
	==> coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46260 - 34624 "HINFO IN 4350776552710244963.6388471656172094076. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010614342s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-995404
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-995404
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=default-k8s-diff-port-995404
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_04T00_03_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Jul 2024 00:03:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-995404
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Jul 2024 00:32:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Jul 2024 00:32:09 +0000   Thu, 04 Jul 2024 00:03:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Jul 2024 00:32:09 +0000   Thu, 04 Jul 2024 00:03:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Jul 2024 00:32:09 +0000   Thu, 04 Jul 2024 00:03:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Jul 2024 00:32:09 +0000   Thu, 04 Jul 2024 00:11:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.164
	  Hostname:    default-k8s-diff-port-995404
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f65ad4585c17430b8e254e05e9233a59
	  System UUID:                f65ad458-5c17-430b-8e25-4e05e9233a59
	  Boot ID:                    ce7ef7a0-7835-4022-9e53-76168d47dc81
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-jmq4s                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-995404                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-995404              250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-995404    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-pplqq                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-995404              100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-v8qw2                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-995404 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-995404 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-995404 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-995404 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-995404 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-995404 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-995404 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-995404 event: Registered Node default-k8s-diff-port-995404 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-995404 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-995404 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-995404 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-995404 event: Registered Node default-k8s-diff-port-995404 in Controller
	
	
	==> dmesg <==
	[Jul 4 00:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053998] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046087] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.011315] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.511832] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.629618] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.197135] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.059204] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051096] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.222028] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.130959] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[Jul 4 00:11] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +4.845430] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.073494] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.227420] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +5.610717] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.973525] systemd-fstab-generator[1553]: Ignoring "noauto" option for root device
	[  +3.752827] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.088814] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] <==
	{"level":"info","ts":"2024-07-04T00:31:10.919633Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1369}
	{"level":"info","ts":"2024-07-04T00:31:10.92365Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1369,"took":"3.728611ms","hash":313851938,"current-db-size-bytes":2826240,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1658880,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-07-04T00:31:10.92372Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":313851938,"revision":1369,"compact-revision":1124}
	{"level":"warn","ts":"2024-07-04T00:31:28.816469Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.881511ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-04T00:31:28.8166Z","caller":"traceutil/trace.go:171","msg":"trace[937669707] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; response_count:0; response_revision:1626; }","duration":"125.063225ms","start":"2024-07-04T00:31:28.691524Z","end":"2024-07-04T00:31:28.816587Z","steps":["trace[937669707] 'count revisions from in-memory index tree'  (duration: 124.822565ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-04T00:31:28.817011Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.391024ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-04T00:31:28.817431Z","caller":"traceutil/trace.go:171","msg":"trace[786639377] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; response_count:0; response_revision:1626; }","duration":"198.835655ms","start":"2024-07-04T00:31:28.618582Z","end":"2024-07-04T00:31:28.817417Z","steps":["trace[786639377] 'count revisions from in-memory index tree'  (duration: 198.200889ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-04T00:31:29.332904Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.554094ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14310065199294916209 > lease_revoke:<id:4697907b14a72622>","response":"size:29"}
	{"level":"info","ts":"2024-07-04T00:31:29.333005Z","caller":"traceutil/trace.go:171","msg":"trace[181137301] linearizableReadLoop","detail":"{readStateIndex:1918; appliedIndex:1917; }","duration":"125.230312ms","start":"2024-07-04T00:31:29.207761Z","end":"2024-07-04T00:31:29.332991Z","steps":["trace[181137301] 'read index received'  (duration: 32.005µs)","trace[181137301] 'applied index is now lower than readState.Index'  (duration: 125.197258ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-04T00:31:29.333376Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.705794ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-04T00:31:29.333513Z","caller":"traceutil/trace.go:171","msg":"trace[300271959] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1626; }","duration":"101.865539ms","start":"2024-07-04T00:31:29.231634Z","end":"2024-07-04T00:31:29.3335Z","steps":["trace[300271959] 'agreement among raft nodes before linearized reading'  (duration: 101.70674ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-04T00:31:29.333384Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.637067ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-07-04T00:31:29.33401Z","caller":"traceutil/trace.go:171","msg":"trace[124089808] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1626; }","duration":"126.28589ms","start":"2024-07-04T00:31:29.207709Z","end":"2024-07-04T00:31:29.333994Z","steps":["trace[124089808] 'agreement among raft nodes before linearized reading'  (duration: 125.570501ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-04T00:31:29.609943Z","caller":"traceutil/trace.go:171","msg":"trace[1047564115] transaction","detail":"{read_only:false; response_revision:1627; number_of_response:1; }","duration":"270.976761ms","start":"2024-07-04T00:31:29.338668Z","end":"2024-07-04T00:31:29.609645Z","steps":["trace[1047564115] 'process raft request'  (duration: 270.816788ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-04T00:32:09.334951Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.920453ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14310065199294916405 > lease_revoke:<id:4697907b14a726e6>","response":"size:29"}
	{"level":"info","ts":"2024-07-04T00:32:09.335784Z","caller":"traceutil/trace.go:171","msg":"trace[582078414] linearizableReadLoop","detail":"{readStateIndex:1958; appliedIndex:1956; }","duration":"104.311016ms","start":"2024-07-04T00:32:09.231441Z","end":"2024-07-04T00:32:09.335752Z","steps":["trace[582078414] 'read index received'  (duration: 104.012475ms)","trace[582078414] 'applied index is now lower than readState.Index'  (duration: 297.944µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-04T00:32:09.336226Z","caller":"traceutil/trace.go:171","msg":"trace[1775255260] transaction","detail":"{read_only:false; response_revision:1658; number_of_response:1; }","duration":"200.527907ms","start":"2024-07-04T00:32:09.135679Z","end":"2024-07-04T00:32:09.336206Z","steps":["trace[1775255260] 'process raft request'  (duration: 199.960677ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-04T00:32:09.336306Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.839366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-04T00:32:09.336782Z","caller":"traceutil/trace.go:171","msg":"trace[1563090421] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1658; }","duration":"105.355367ms","start":"2024-07-04T00:32:09.231408Z","end":"2024-07-04T00:32:09.336764Z","steps":["trace[1563090421] 'agreement among raft nodes before linearized reading'  (duration: 104.844823ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-04T00:32:09.595678Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.157193ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14310065199294916407 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-vu6v6li25bazo75yitliztv6ra\" mod_revision:1650 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-vu6v6li25bazo75yitliztv6ra\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-vu6v6li25bazo75yitliztv6ra\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-04T00:32:09.595791Z","caller":"traceutil/trace.go:171","msg":"trace[1869438299] linearizableReadLoop","detail":"{readStateIndex:1959; appliedIndex:1958; }","duration":"255.062267ms","start":"2024-07-04T00:32:09.340717Z","end":"2024-07-04T00:32:09.595779Z","steps":["trace[1869438299] 'read index received'  (duration: 114.623844ms)","trace[1869438299] 'applied index is now lower than readState.Index'  (duration: 140.437519ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-04T00:32:09.59586Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"255.139233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-04T00:32:09.595901Z","caller":"traceutil/trace.go:171","msg":"trace[576187423] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1659; }","duration":"255.181516ms","start":"2024-07-04T00:32:09.340713Z","end":"2024-07-04T00:32:09.595894Z","steps":["trace[576187423] 'agreement among raft nodes before linearized reading'  (duration: 255.099339ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-04T00:32:09.596259Z","caller":"traceutil/trace.go:171","msg":"trace[47574607] transaction","detail":"{read_only:false; response_revision:1659; number_of_response:1; }","duration":"310.704157ms","start":"2024-07-04T00:32:09.28553Z","end":"2024-07-04T00:32:09.596235Z","steps":["trace[47574607] 'process raft request'  (duration: 169.875924ms)","trace[47574607] 'compare'  (duration: 140.034211ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-04T00:32:09.597379Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-04T00:32:09.285514Z","time spent":"311.800316ms","remote":"127.0.0.1:36476","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-vu6v6li25bazo75yitliztv6ra\" mod_revision:1650 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-vu6v6li25bazo75yitliztv6ra\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-vu6v6li25bazo75yitliztv6ra\" > >"}
	
	
	==> kernel <==
	 00:32:40 up 21 min,  0 users,  load average: 0.11, 0.12, 0.10
	Linux default-k8s-diff-port-995404 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] <==
	I0704 00:27:13.419040       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:29:13.419060       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:29:13.419214       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0704 00:29:13.419229       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:29:13.419608       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:29:13.419699       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0704 00:29:13.420399       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:31:12.424712       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:31:12.425179       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0704 00:31:13.426143       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:31:13.426254       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0704 00:31:13.426282       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:31:13.426316       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:31:13.426425       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0704 00:31:13.427481       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:32:13.426667       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:32:13.426839       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0704 00:32:13.426857       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:32:13.427943       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:32:13.428189       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0704 00:32:13.428247       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] <==
	I0704 00:26:56.293063       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:27:25.760896       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:27:26.301367       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0704 00:27:34.359453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="371.492µs"
	I0704 00:27:45.345992       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="208.946µs"
	E0704 00:27:55.766445       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:27:56.315179       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:28:25.772668       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:28:26.327634       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:28:55.777423       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:28:56.341892       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:29:25.782622       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:29:26.353301       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:29:55.787870       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:29:56.363988       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:30:25.794800       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:30:26.371828       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:30:55.799740       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:30:56.379353       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:31:25.807801       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:31:26.394386       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:31:55.815130       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:31:56.403390       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:32:25.820976       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:32:26.424307       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] <==
	I0704 00:11:13.979690       1 server_linux.go:69] "Using iptables proxy"
	I0704 00:11:13.997406       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.164"]
	I0704 00:11:14.065614       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0704 00:11:14.065658       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0704 00:11:14.065680       1 server_linux.go:165] "Using iptables Proxier"
	I0704 00:11:14.077726       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0704 00:11:14.078165       1 server.go:872] "Version info" version="v1.30.2"
	I0704 00:11:14.078381       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0704 00:11:14.079715       1 config.go:192] "Starting service config controller"
	I0704 00:11:14.079850       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0704 00:11:14.079938       1 config.go:101] "Starting endpoint slice config controller"
	I0704 00:11:14.080072       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0704 00:11:14.080758       1 config.go:319] "Starting node config controller"
	I0704 00:11:14.082560       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0704 00:11:14.180236       1 shared_informer.go:320] Caches are synced for service config
	I0704 00:11:14.180365       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0704 00:11:14.183245       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] <==
	I0704 00:11:10.366932       1 serving.go:380] Generated self-signed cert in-memory
	W0704 00:11:12.397921       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0704 00:11:12.397969       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0704 00:11:12.397983       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0704 00:11:12.397990       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0704 00:11:12.431506       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0704 00:11:12.431554       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0704 00:11:12.437030       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0704 00:11:12.437214       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0704 00:11:12.437213       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0704 00:11:12.437473       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0704 00:11:12.539500       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 04 00:30:11 default-k8s-diff-port-995404 kubelet[945]: E0704 00:30:11.331405     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:30:23 default-k8s-diff-port-995404 kubelet[945]: E0704 00:30:23.331206     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:30:34 default-k8s-diff-port-995404 kubelet[945]: E0704 00:30:34.333952     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:30:47 default-k8s-diff-port-995404 kubelet[945]: E0704 00:30:47.331440     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:31:01 default-k8s-diff-port-995404 kubelet[945]: E0704 00:31:01.333293     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:31:08 default-k8s-diff-port-995404 kubelet[945]: E0704 00:31:08.352287     945 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 04 00:31:08 default-k8s-diff-port-995404 kubelet[945]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 04 00:31:08 default-k8s-diff-port-995404 kubelet[945]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 04 00:31:08 default-k8s-diff-port-995404 kubelet[945]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 04 00:31:08 default-k8s-diff-port-995404 kubelet[945]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 04 00:31:14 default-k8s-diff-port-995404 kubelet[945]: E0704 00:31:14.331022     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:31:27 default-k8s-diff-port-995404 kubelet[945]: E0704 00:31:27.331903     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:31:40 default-k8s-diff-port-995404 kubelet[945]: E0704 00:31:40.336056     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:31:51 default-k8s-diff-port-995404 kubelet[945]: E0704 00:31:51.331920     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:32:06 default-k8s-diff-port-995404 kubelet[945]: E0704 00:32:06.332884     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:32:08 default-k8s-diff-port-995404 kubelet[945]: E0704 00:32:08.353409     945 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 04 00:32:08 default-k8s-diff-port-995404 kubelet[945]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 04 00:32:08 default-k8s-diff-port-995404 kubelet[945]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 04 00:32:08 default-k8s-diff-port-995404 kubelet[945]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 04 00:32:08 default-k8s-diff-port-995404 kubelet[945]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 04 00:32:17 default-k8s-diff-port-995404 kubelet[945]: E0704 00:32:17.331472     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	Jul 04 00:32:29 default-k8s-diff-port-995404 kubelet[945]: E0704 00:32:29.346307     945 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 04 00:32:29 default-k8s-diff-port-995404 kubelet[945]: E0704 00:32:29.346720     945 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 04 00:32:29 default-k8s-diff-port-995404 kubelet[945]: E0704 00:32:29.347227     945 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bd2s7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdin
Once:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-v8qw2_kube-system(d6a67fb7-5004-4c93-9023-fc470f786ae9): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 04 00:32:29 default-k8s-diff-port-995404 kubelet[945]: E0704 00:32:29.347609     945 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-v8qw2" podUID="d6a67fb7-5004-4c93-9023-fc470f786ae9"
	
	
	==> storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] <==
	I0704 00:11:44.671386       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0704 00:11:44.689427       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0704 00:11:44.689523       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0704 00:12:02.101222       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0704 00:12:02.101613       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-995404_735c64ce-9626-410a-8e95-4f9e2636bed0!
	I0704 00:12:02.101700       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b744336-4986-4a58-8c08-ba78b534b80d", APIVersion:"v1", ResourceVersion:"662", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-995404_735c64ce-9626-410a-8e95-4f9e2636bed0 became leader
	I0704 00:12:02.202653       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-995404_735c64ce-9626-410a-8e95-4f9e2636bed0!
	
	
	==> storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] <==
	I0704 00:11:13.970856       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0704 00:11:43.973553       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-995404 -n default-k8s-diff-port-995404
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-995404 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-v8qw2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-995404 describe pod metrics-server-569cc877fc-v8qw2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-995404 describe pod metrics-server-569cc877fc-v8qw2: exit status 1 (105.024992ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-v8qw2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-995404 describe pod metrics-server-569cc877fc-v8qw2: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (479.97s)
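The post-mortem above can be approximated by hand while the default-k8s-diff-port-995404 profile is still running; a minimal sketch (context name, pod name, and the fake.domain image are taken from the log above and may differ between runs):

  kubectl --context default-k8s-diff-port-995404 get pods -A --field-selector=status.phase!=Running
  # print the image each kube-system pod uses; the metrics-server pod above is stuck pulling
  # fake.domain/registry.k8s.io/echoserver:1.4, hence ImagePullBackOff in the kubelet log
  kubectl --context default-k8s-diff-port-995404 -n kube-system get pods \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'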

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (273.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-317739 -n no-preload-317739
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-04 00:30:34.639910935 +0000 UTC m=+6227.573148697
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-317739 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-317739 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.855µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-317739 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
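The check this test performs can be approximated manually; a sketch assuming the no-preload-317739 profile is still reachable (the context, namespace, label selector, deployment name, and expected image substring are taken from the lines above):

  kubectl --context no-preload-317739 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
  # print the scraper image and verify it contains "registry.k8s.io/echoserver:1.4"
  kubectl --context no-preload-317739 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
    -o jsonpath='{.spec.template.spec.containers[*].image}'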
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-317739 -n no-preload-317739
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-317739 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-317739 logs -n 25: (1.354508863s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-652205                           | kubernetes-upgrade-652205    | jenkins | v1.33.1 | 04 Jul 24 00:01 UTC | 04 Jul 24 00:01 UTC |
	| start   | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:01 UTC | 04 Jul 24 00:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-979438                              | cert-expiration-979438       | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-979438                              | cert-expiration-979438       | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-029653 | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | disable-driver-mounts-029653                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:04 UTC |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-317739             | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-687975            | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:03 UTC | 04 Jul 24 00:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-995404  | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC | 04 Jul 24 00:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC |                     |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-979033        | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-317739                  | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC | 04 Jul 24 00:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-687975                 | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC | 04 Jul 24 00:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-979033                              | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC | 04 Jul 24 00:06 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-979033             | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC | 04 Jul 24 00:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-979033                              | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-995404       | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:07 UTC | 04 Jul 24 00:15 UTC |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-979033                              | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:29 UTC | 04 Jul 24 00:29 UTC |
	| start   | -p newest-cni-791847 --memory=2200 --alsologtostderr   | newest-cni-791847            | jenkins | v1.33.1 | 04 Jul 24 00:29 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/04 00:29:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0704 00:29:51.104597   69348 out.go:291] Setting OutFile to fd 1 ...
	I0704 00:29:51.104729   69348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:29:51.104740   69348 out.go:304] Setting ErrFile to fd 2...
	I0704 00:29:51.104744   69348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:29:51.104930   69348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0704 00:29:51.105515   69348 out.go:298] Setting JSON to false
	I0704 00:29:51.106455   69348 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7931,"bootTime":1720045060,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0704 00:29:51.106525   69348 start.go:139] virtualization: kvm guest
	I0704 00:29:51.109125   69348 out.go:177] * [newest-cni-791847] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0704 00:29:51.110625   69348 out.go:177]   - MINIKUBE_LOCATION=18998
	I0704 00:29:51.110668   69348 notify.go:220] Checking for updates...
	I0704 00:29:51.113195   69348 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0704 00:29:51.114574   69348 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:29:51.115857   69348 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0704 00:29:51.117545   69348 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0704 00:29:51.119034   69348 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0704 00:29:51.120666   69348 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:29:51.120781   69348 config.go:182] Loaded profile config "embed-certs-687975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:29:51.120882   69348 config.go:182] Loaded profile config "no-preload-317739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:29:51.120978   69348 driver.go:392] Setting default libvirt URI to qemu:///system
	I0704 00:29:51.159921   69348 out.go:177] * Using the kvm2 driver based on user configuration
	I0704 00:29:51.161235   69348 start.go:297] selected driver: kvm2
	I0704 00:29:51.161252   69348 start.go:901] validating driver "kvm2" against <nil>
	I0704 00:29:51.161277   69348 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0704 00:29:51.161952   69348 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:29:51.162054   69348 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0704 00:29:51.177506   69348 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0704 00:29:51.177576   69348 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0704 00:29:51.177606   69348 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0704 00:29:51.177816   69348 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0704 00:29:51.177849   69348 cni.go:84] Creating CNI manager for ""
	I0704 00:29:51.177858   69348 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:29:51.177865   69348 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0704 00:29:51.177910   69348 start.go:340] cluster config:
	{Name:newest-cni-791847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-791847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:29:51.178004   69348 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:29:51.180119   69348 out.go:177] * Starting "newest-cni-791847" primary control-plane node in "newest-cni-791847" cluster
	I0704 00:29:51.181212   69348 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:29:51.181251   69348 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0704 00:29:51.181262   69348 cache.go:56] Caching tarball of preloaded images
	I0704 00:29:51.181358   69348 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0704 00:29:51.181370   69348 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0704 00:29:51.181462   69348 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/config.json ...
	I0704 00:29:51.181478   69348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/config.json: {Name:mk757654430b53a8d88d7d08c89d88dcf3650b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:29:51.181607   69348 start.go:360] acquireMachinesLock for newest-cni-791847: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0704 00:29:51.181634   69348 start.go:364] duration metric: took 15.259µs to acquireMachinesLock for "newest-cni-791847"
	I0704 00:29:51.181649   69348 start.go:93] Provisioning new machine with config: &{Name:newest-cni-791847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-791847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:29:51.181710   69348 start.go:125] createHost starting for "" (driver="kvm2")
	I0704 00:29:51.183373   69348 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0704 00:29:51.183587   69348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:29:51.183637   69348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:29:51.198980   69348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34189
	I0704 00:29:51.199408   69348 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:29:51.200000   69348 main.go:141] libmachine: Using API Version  1
	I0704 00:29:51.200019   69348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:29:51.200345   69348 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:29:51.200565   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetMachineName
	I0704 00:29:51.200732   69348 main.go:141] libmachine: (newest-cni-791847) Calling .DriverName
	I0704 00:29:51.200900   69348 start.go:159] libmachine.API.Create for "newest-cni-791847" (driver="kvm2")
	I0704 00:29:51.200929   69348 client.go:168] LocalClient.Create starting
	I0704 00:29:51.200980   69348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem
	I0704 00:29:51.201019   69348 main.go:141] libmachine: Decoding PEM data...
	I0704 00:29:51.201040   69348 main.go:141] libmachine: Parsing certificate...
	I0704 00:29:51.201104   69348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem
	I0704 00:29:51.201135   69348 main.go:141] libmachine: Decoding PEM data...
	I0704 00:29:51.201153   69348 main.go:141] libmachine: Parsing certificate...
	I0704 00:29:51.201183   69348 main.go:141] libmachine: Running pre-create checks...
	I0704 00:29:51.201196   69348 main.go:141] libmachine: (newest-cni-791847) Calling .PreCreateCheck
	I0704 00:29:51.201546   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetConfigRaw
	I0704 00:29:51.201974   69348 main.go:141] libmachine: Creating machine...
	I0704 00:29:51.201991   69348 main.go:141] libmachine: (newest-cni-791847) Calling .Create
	I0704 00:29:51.202118   69348 main.go:141] libmachine: (newest-cni-791847) Creating KVM machine...
	I0704 00:29:51.203907   69348 main.go:141] libmachine: (newest-cni-791847) DBG | found existing default KVM network
	I0704 00:29:51.205149   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:29:51.204992   69371 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d0:cd:48} reservation:<nil>}
	I0704 00:29:51.206169   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:29:51.206053   69371 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:e0:97:b1} reservation:<nil>}
	I0704 00:29:51.206910   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:29:51.206835   69371 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:3f:0e:79} reservation:<nil>}
	I0704 00:29:51.208053   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:29:51.207967   69371 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002898a0}
	I0704 00:29:51.208089   69348 main.go:141] libmachine: (newest-cni-791847) DBG | created network xml: 
	I0704 00:29:51.208105   69348 main.go:141] libmachine: (newest-cni-791847) DBG | <network>
	I0704 00:29:51.208119   69348 main.go:141] libmachine: (newest-cni-791847) DBG |   <name>mk-newest-cni-791847</name>
	I0704 00:29:51.208133   69348 main.go:141] libmachine: (newest-cni-791847) DBG |   <dns enable='no'/>
	I0704 00:29:51.208150   69348 main.go:141] libmachine: (newest-cni-791847) DBG |   
	I0704 00:29:51.208167   69348 main.go:141] libmachine: (newest-cni-791847) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0704 00:29:51.208194   69348 main.go:141] libmachine: (newest-cni-791847) DBG |     <dhcp>
	I0704 00:29:51.208210   69348 main.go:141] libmachine: (newest-cni-791847) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0704 00:29:51.208221   69348 main.go:141] libmachine: (newest-cni-791847) DBG |     </dhcp>
	I0704 00:29:51.208235   69348 main.go:141] libmachine: (newest-cni-791847) DBG |   </ip>
	I0704 00:29:51.208242   69348 main.go:141] libmachine: (newest-cni-791847) DBG |   
	I0704 00:29:51.208252   69348 main.go:141] libmachine: (newest-cni-791847) DBG | </network>
	I0704 00:29:51.208264   69348 main.go:141] libmachine: (newest-cni-791847) DBG | 
	I0704 00:29:51.213976   69348 main.go:141] libmachine: (newest-cni-791847) DBG | trying to create private KVM network mk-newest-cni-791847 192.168.72.0/24...
	I0704 00:29:51.291163   69348 main.go:141] libmachine: (newest-cni-791847) DBG | private KVM network mk-newest-cni-791847 192.168.72.0/24 created
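The network.go lines above show the kvm2 driver skipping the 192.168.39.0/24, 192.168.50.0/24 and 192.168.61.0/24 subnets already bound to virbr interfaces and settling on 192.168.72.0/24 for the new private network. A hedged Go sketch of that scan follows; the candidate list and the gatewayInUse helper are illustrative only, since minikube's real logic also consults existing libvirt networks and lease reservations.

package main

import (
	"fmt"
	"net"
)

// gatewayInUse reports whether the would-be gateway address is already
// assigned to a local interface (e.g. an existing virbr bridge).
func gatewayInUse(gw string) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.String() == gw {
			return true
		}
	}
	return false
}

func main() {
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
	for _, cidr := range candidates {
		ip, _, err := net.ParseCIDR(cidr)
		if err != nil {
			continue
		}
		gw := ip.To4()
		gw[3] = 1 // by convention the .1 address is the bridge/gateway
		if gatewayInUse(gw.String()) {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		return
	}
}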
	I0704 00:29:51.291194   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:29:51.291127   69371 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0704 00:29:51.291210   69348 main.go:141] libmachine: (newest-cni-791847) Setting up store path in /home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847 ...
	I0704 00:29:51.291227   69348 main.go:141] libmachine: (newest-cni-791847) Building disk image from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0704 00:29:51.291393   69348 main.go:141] libmachine: (newest-cni-791847) Downloading /home/jenkins/minikube-integration/18998-9396/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso...
	I0704 00:29:51.523661   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:29:51.523543   69371 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847/id_rsa...
	I0704 00:29:51.702676   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:29:51.702536   69371 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847/newest-cni-791847.rawdisk...
	I0704 00:29:51.702713   69348 main.go:141] libmachine: (newest-cni-791847) DBG | Writing magic tar header
	I0704 00:29:51.702751   69348 main.go:141] libmachine: (newest-cni-791847) DBG | Writing SSH key tar header
	I0704 00:29:51.702763   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:29:51.702721   69371 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847 ...
	I0704 00:29:51.702901   69348 main.go:141] libmachine: (newest-cni-791847) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847
	I0704 00:29:51.702958   69348 main.go:141] libmachine: (newest-cni-791847) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847 (perms=drwx------)
	I0704 00:29:51.702977   69348 main.go:141] libmachine: (newest-cni-791847) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube/machines
	I0704 00:29:51.703011   69348 main.go:141] libmachine: (newest-cni-791847) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396/.minikube
	I0704 00:29:51.703044   69348 main.go:141] libmachine: (newest-cni-791847) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18998-9396
	I0704 00:29:51.703058   69348 main.go:141] libmachine: (newest-cni-791847) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube/machines (perms=drwxr-xr-x)
	I0704 00:29:51.703076   69348 main.go:141] libmachine: (newest-cni-791847) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396/.minikube (perms=drwxr-xr-x)
	I0704 00:29:51.703091   69348 main.go:141] libmachine: (newest-cni-791847) Setting executable bit set on /home/jenkins/minikube-integration/18998-9396 (perms=drwxrwxr-x)
	I0704 00:29:51.703120   69348 main.go:141] libmachine: (newest-cni-791847) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0704 00:29:51.703156   69348 main.go:141] libmachine: (newest-cni-791847) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0704 00:29:51.703176   69348 main.go:141] libmachine: (newest-cni-791847) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0704 00:29:51.703188   69348 main.go:141] libmachine: (newest-cni-791847) DBG | Checking permissions on dir: /home/jenkins
	I0704 00:29:51.703200   69348 main.go:141] libmachine: (newest-cni-791847) DBG | Checking permissions on dir: /home
	I0704 00:29:51.703216   69348 main.go:141] libmachine: (newest-cni-791847) DBG | Skipping /home - not owner
	I0704 00:29:51.703251   69348 main.go:141] libmachine: (newest-cni-791847) Creating domain...
	I0704 00:29:51.704192   69348 main.go:141] libmachine: (newest-cni-791847) define libvirt domain using xml: 
	I0704 00:29:51.704212   69348 main.go:141] libmachine: (newest-cni-791847) <domain type='kvm'>
	I0704 00:29:51.704242   69348 main.go:141] libmachine: (newest-cni-791847)   <name>newest-cni-791847</name>
	I0704 00:29:51.704284   69348 main.go:141] libmachine: (newest-cni-791847)   <memory unit='MiB'>2200</memory>
	I0704 00:29:51.704308   69348 main.go:141] libmachine: (newest-cni-791847)   <vcpu>2</vcpu>
	I0704 00:29:51.704340   69348 main.go:141] libmachine: (newest-cni-791847)   <features>
	I0704 00:29:51.704360   69348 main.go:141] libmachine: (newest-cni-791847)     <acpi/>
	I0704 00:29:51.704371   69348 main.go:141] libmachine: (newest-cni-791847)     <apic/>
	I0704 00:29:51.704401   69348 main.go:141] libmachine: (newest-cni-791847)     <pae/>
	I0704 00:29:51.704424   69348 main.go:141] libmachine: (newest-cni-791847)     
	I0704 00:29:51.704439   69348 main.go:141] libmachine: (newest-cni-791847)   </features>
	I0704 00:29:51.704460   69348 main.go:141] libmachine: (newest-cni-791847)   <cpu mode='host-passthrough'>
	I0704 00:29:51.704474   69348 main.go:141] libmachine: (newest-cni-791847)   
	I0704 00:29:51.704487   69348 main.go:141] libmachine: (newest-cni-791847)   </cpu>
	I0704 00:29:51.704500   69348 main.go:141] libmachine: (newest-cni-791847)   <os>
	I0704 00:29:51.704513   69348 main.go:141] libmachine: (newest-cni-791847)     <type>hvm</type>
	I0704 00:29:51.704528   69348 main.go:141] libmachine: (newest-cni-791847)     <boot dev='cdrom'/>
	I0704 00:29:51.704540   69348 main.go:141] libmachine: (newest-cni-791847)     <boot dev='hd'/>
	I0704 00:29:51.704555   69348 main.go:141] libmachine: (newest-cni-791847)     <bootmenu enable='no'/>
	I0704 00:29:51.704571   69348 main.go:141] libmachine: (newest-cni-791847)   </os>
	I0704 00:29:51.704585   69348 main.go:141] libmachine: (newest-cni-791847)   <devices>
	I0704 00:29:51.704596   69348 main.go:141] libmachine: (newest-cni-791847)     <disk type='file' device='cdrom'>
	I0704 00:29:51.704633   69348 main.go:141] libmachine: (newest-cni-791847)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847/boot2docker.iso'/>
	I0704 00:29:51.704655   69348 main.go:141] libmachine: (newest-cni-791847)       <target dev='hdc' bus='scsi'/>
	I0704 00:29:51.704669   69348 main.go:141] libmachine: (newest-cni-791847)       <readonly/>
	I0704 00:29:51.704681   69348 main.go:141] libmachine: (newest-cni-791847)     </disk>
	I0704 00:29:51.704695   69348 main.go:141] libmachine: (newest-cni-791847)     <disk type='file' device='disk'>
	I0704 00:29:51.704722   69348 main.go:141] libmachine: (newest-cni-791847)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0704 00:29:51.704739   69348 main.go:141] libmachine: (newest-cni-791847)       <source file='/home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847/newest-cni-791847.rawdisk'/>
	I0704 00:29:51.704754   69348 main.go:141] libmachine: (newest-cni-791847)       <target dev='hda' bus='virtio'/>
	I0704 00:29:51.704767   69348 main.go:141] libmachine: (newest-cni-791847)     </disk>
	I0704 00:29:51.704778   69348 main.go:141] libmachine: (newest-cni-791847)     <interface type='network'>
	I0704 00:29:51.704797   69348 main.go:141] libmachine: (newest-cni-791847)       <source network='mk-newest-cni-791847'/>
	I0704 00:29:51.704818   69348 main.go:141] libmachine: (newest-cni-791847)       <model type='virtio'/>
	I0704 00:29:51.704847   69348 main.go:141] libmachine: (newest-cni-791847)     </interface>
	I0704 00:29:51.704861   69348 main.go:141] libmachine: (newest-cni-791847)     <interface type='network'>
	I0704 00:29:51.704872   69348 main.go:141] libmachine: (newest-cni-791847)       <source network='default'/>
	I0704 00:29:51.704882   69348 main.go:141] libmachine: (newest-cni-791847)       <model type='virtio'/>
	I0704 00:29:51.704892   69348 main.go:141] libmachine: (newest-cni-791847)     </interface>
	I0704 00:29:51.704903   69348 main.go:141] libmachine: (newest-cni-791847)     <serial type='pty'>
	I0704 00:29:51.704917   69348 main.go:141] libmachine: (newest-cni-791847)       <target port='0'/>
	I0704 00:29:51.704928   69348 main.go:141] libmachine: (newest-cni-791847)     </serial>
	I0704 00:29:51.704933   69348 main.go:141] libmachine: (newest-cni-791847)     <console type='pty'>
	I0704 00:29:51.704941   69348 main.go:141] libmachine: (newest-cni-791847)       <target type='serial' port='0'/>
	I0704 00:29:51.704945   69348 main.go:141] libmachine: (newest-cni-791847)     </console>
	I0704 00:29:51.704957   69348 main.go:141] libmachine: (newest-cni-791847)     <rng model='virtio'>
	I0704 00:29:51.704970   69348 main.go:141] libmachine: (newest-cni-791847)       <backend model='random'>/dev/random</backend>
	I0704 00:29:51.704978   69348 main.go:141] libmachine: (newest-cni-791847)     </rng>
	I0704 00:29:51.704992   69348 main.go:141] libmachine: (newest-cni-791847)     
	I0704 00:29:51.705002   69348 main.go:141] libmachine: (newest-cni-791847)     
	I0704 00:29:51.705010   69348 main.go:141] libmachine: (newest-cni-791847)   </devices>
	I0704 00:29:51.705017   69348 main.go:141] libmachine: (newest-cni-791847) </domain>
	I0704 00:29:51.705024   69348 main.go:141] libmachine: (newest-cni-791847) 
	I0704 00:29:51.709542   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:08:38:8a in network default
	I0704 00:29:51.710290   69348 main.go:141] libmachine: (newest-cni-791847) Ensuring networks are active...
	I0704 00:29:51.710314   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:29:51.711056   69348 main.go:141] libmachine: (newest-cni-791847) Ensuring network default is active
	I0704 00:29:51.711417   69348 main.go:141] libmachine: (newest-cni-791847) Ensuring network mk-newest-cni-791847 is active
	I0704 00:29:51.711978   69348 main.go:141] libmachine: (newest-cni-791847) Getting domain xml...
	I0704 00:29:51.712778   69348 main.go:141] libmachine: (newest-cni-791847) Creating domain...
	I0704 00:29:52.994401   69348 main.go:141] libmachine: (newest-cni-791847) Waiting to get IP...
	I0704 00:29:52.995155   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:29:52.995589   69348 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:29:52.995613   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:29:52.995569   69371 retry.go:31] will retry after 190.851537ms: waiting for machine to come up
	I0704 00:29:53.188296   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:29:53.188733   69348 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:29:53.188755   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:29:53.188690   69371 retry.go:31] will retry after 341.663703ms: waiting for machine to come up
	I0704 00:29:53.532373   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:29:53.532881   69348 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:29:53.532911   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:29:53.532837   69371 retry.go:31] will retry after 450.900426ms: waiting for machine to come up
	I0704 00:29:53.985459   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:29:53.985893   69348 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:29:53.985932   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:29:53.985851   69371 retry.go:31] will retry after 606.721444ms: waiting for machine to come up
	I0704 00:29:54.594844   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:29:54.595374   69348 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:29:54.595404   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:29:54.595322   69371 retry.go:31] will retry after 659.82972ms: waiting for machine to come up
	I0704 00:29:55.256855   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:29:55.257380   69348 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:29:55.257420   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:29:55.257324   69371 retry.go:31] will retry after 879.641954ms: waiting for machine to come up
	I0704 00:29:56.138282   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:29:56.138799   69348 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:29:56.138821   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:29:56.138757   69371 retry.go:31] will retry after 1.074138602s: waiting for machine to come up
	I0704 00:29:57.214419   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:29:57.215040   69348 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:29:57.215072   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:29:57.214969   69371 retry.go:31] will retry after 1.354742792s: waiting for machine to come up
	I0704 00:29:58.570969   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:29:58.571436   69348 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:29:58.571466   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:29:58.571398   69371 retry.go:31] will retry after 1.168998138s: waiting for machine to come up
	I0704 00:29:59.742050   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:29:59.742523   69348 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:29:59.742544   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:29:59.742460   69371 retry.go:31] will retry after 2.193234119s: waiting for machine to come up
	I0704 00:30:01.937199   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:01.937675   69348 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:30:01.937725   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:30:01.937636   69371 retry.go:31] will retry after 2.300008157s: waiting for machine to come up
	I0704 00:30:04.238875   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:04.239353   69348 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:30:04.239381   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:30:04.239303   69371 retry.go:31] will retry after 2.617265688s: waiting for machine to come up
	I0704 00:30:06.858399   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:06.858912   69348 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:30:06.859039   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:30:06.858968   69371 retry.go:31] will retry after 3.021297072s: waiting for machine to come up
	I0704 00:30:09.882309   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:09.882886   69348 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find current IP address of domain newest-cni-791847 in network mk-newest-cni-791847
	I0704 00:30:09.882915   69348 main.go:141] libmachine: (newest-cni-791847) DBG | I0704 00:30:09.882822   69371 retry.go:31] will retry after 4.708330846s: waiting for machine to come up
	I0704 00:30:14.593115   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:14.593685   69348 main.go:141] libmachine: (newest-cni-791847) Found IP for machine: 192.168.72.71
	I0704 00:30:14.593704   69348 main.go:141] libmachine: (newest-cni-791847) Reserving static IP address...
	I0704 00:30:14.593714   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has current primary IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:14.594244   69348 main.go:141] libmachine: (newest-cni-791847) DBG | unable to find host DHCP lease matching {name: "newest-cni-791847", mac: "52:54:00:85:d7:95", ip: "192.168.72.71"} in network mk-newest-cni-791847
	I0704 00:30:14.684391   69348 main.go:141] libmachine: (newest-cni-791847) Reserved static IP address: 192.168.72.71
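The retry.go lines above poll for roughly 23 seconds with a growing, jittered delay (191ms up to 4.7s) until the DHCP lease for MAC 52:54:00:85:d7:95 appears. A minimal sketch of that wait-for-IP pattern, assuming a hypothetical lookupLeaseIP stand-in for the driver's libvirt lease query, looks like this; it is not minikube's retry.go, just the same shape of loop.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a placeholder: the real driver asks libvirt for the
// DHCP lease that matches this MAC address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

func waitForIP(mac string, budget time.Duration) (string, error) {
	delay := 200 * time.Millisecond
	start := time.Now()
	for time.Since(start) < budget {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow the base delay, as the logged intervals do
	}
	return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:85:d7:95", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("Found IP for machine:", ip)
	}
}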
	I0704 00:30:14.684558   69348 main.go:141] libmachine: (newest-cni-791847) DBG | Getting to WaitForSSH function...
	I0704 00:30:14.684574   69348 main.go:141] libmachine: (newest-cni-791847) Waiting for SSH to be available...
	I0704 00:30:14.687835   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:14.688234   69348 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:30:05 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:minikube Clientid:01:52:54:00:85:d7:95}
	I0704 00:30:14.688269   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:14.688417   69348 main.go:141] libmachine: (newest-cni-791847) DBG | Using SSH client type: external
	I0704 00:30:14.688442   69348 main.go:141] libmachine: (newest-cni-791847) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847/id_rsa (-rw-------)
	I0704 00:30:14.688493   69348 main.go:141] libmachine: (newest-cni-791847) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:30:14.688507   69348 main.go:141] libmachine: (newest-cni-791847) DBG | About to run SSH command:
	I0704 00:30:14.688529   69348 main.go:141] libmachine: (newest-cni-791847) DBG | exit 0
	I0704 00:30:14.812711   69348 main.go:141] libmachine: (newest-cni-791847) DBG | SSH cmd err, output: <nil>: 
	I0704 00:30:14.813063   69348 main.go:141] libmachine: (newest-cni-791847) KVM machine creation complete!
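The WaitForSSH step above simply runs `exit 0` through the system ssh client with a throwaway config and the machine's generated key, and treats a zero exit status as "SSH is available". A hedged Go sketch of that probe follows; the key path and address are taken from the log, the option list is a representative subset of what the driver passes, and the sshReady function itself is illustrative rather than libmachine's implementation.

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `exit 0` on the target host; a nil error means the SSH
// daemon accepted the connection and the key.
func sshReady(keyPath, addr string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		addr,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run() == nil
}

func main() {
	ok := sshReady(
		"/home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847/id_rsa",
		"docker@192.168.72.71",
	)
	fmt.Println("SSH available:", ok)
}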
	I0704 00:30:14.813397   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetConfigRaw
	I0704 00:30:14.813932   69348 main.go:141] libmachine: (newest-cni-791847) Calling .DriverName
	I0704 00:30:14.814143   69348 main.go:141] libmachine: (newest-cni-791847) Calling .DriverName
	I0704 00:30:14.814333   69348 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0704 00:30:14.814345   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetState
	I0704 00:30:14.815680   69348 main.go:141] libmachine: Detecting operating system of created instance...
	I0704 00:30:14.815700   69348 main.go:141] libmachine: Waiting for SSH to be available...
	I0704 00:30:14.815705   69348 main.go:141] libmachine: Getting to WaitForSSH function...
	I0704 00:30:14.815711   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:30:14.818434   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:14.818837   69348 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:30:05 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:30:14.818864   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:14.819094   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHPort
	I0704 00:30:14.819295   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:30:14.819461   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:30:14.819630   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHUsername
	I0704 00:30:14.819809   69348 main.go:141] libmachine: Using SSH client type: native
	I0704 00:30:14.820068   69348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.71 22 <nil> <nil>}
	I0704 00:30:14.820083   69348 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0704 00:30:14.927836   69348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:30:14.927861   69348 main.go:141] libmachine: Detecting the provisioner...
	I0704 00:30:14.927869   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:30:14.931071   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:14.931540   69348 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:30:05 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:30:14.931594   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:14.931807   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHPort
	I0704 00:30:14.932066   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:30:14.932243   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:30:14.932457   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHUsername
	I0704 00:30:14.932640   69348 main.go:141] libmachine: Using SSH client type: native
	I0704 00:30:14.932818   69348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.71 22 <nil> <nil>}
	I0704 00:30:14.932832   69348 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0704 00:30:15.041201   69348 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0704 00:30:15.041295   69348 main.go:141] libmachine: found compatible host: buildroot
	I0704 00:30:15.041306   69348 main.go:141] libmachine: Provisioning with buildroot...
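Provisioner detection above is driven by the `cat /etc/os-release` output: the NAME/ID fields identify the guest as Buildroot, so the buildroot provisioner is selected. A minimal sketch of parsing that output follows; detectProvisioner is a hypothetical helper, since libmachine actually keeps a registry of provisioners and matches on fields like these.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner parses KEY=VALUE lines from an os-release dump and maps
// the distribution to a provisioner name.
func detectProvisioner(osRelease string) string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
			kv[k] = strings.Trim(v, `"`)
		}
	}
	if kv["ID"] == "buildroot" || kv["NAME"] == "Buildroot" {
		return "buildroot"
	}
	return "unknown"
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	fmt.Println("found compatible host:", detectProvisioner(out))
}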
	I0704 00:30:15.041313   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetMachineName
	I0704 00:30:15.041585   69348 buildroot.go:166] provisioning hostname "newest-cni-791847"
	I0704 00:30:15.041608   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetMachineName
	I0704 00:30:15.041809   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:30:15.045345   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:15.045822   69348 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:30:05 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:30:15.045849   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:15.046022   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHPort
	I0704 00:30:15.046241   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:30:15.046488   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:30:15.046653   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHUsername
	I0704 00:30:15.046906   69348 main.go:141] libmachine: Using SSH client type: native
	I0704 00:30:15.047073   69348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.71 22 <nil> <nil>}
	I0704 00:30:15.047085   69348 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-791847 && echo "newest-cni-791847" | sudo tee /etc/hostname
	I0704 00:30:15.165867   69348 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-791847
	
	I0704 00:30:15.165893   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:30:15.169554   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:15.170008   69348 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:30:05 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:30:15.170056   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:15.170259   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHPort
	I0704 00:30:15.170424   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:30:15.170557   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:30:15.170694   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHUsername
	I0704 00:30:15.170881   69348 main.go:141] libmachine: Using SSH client type: native
	I0704 00:30:15.171089   69348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.71 22 <nil> <nil>}
	I0704 00:30:15.171115   69348 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-791847' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-791847/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-791847' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:30:15.286780   69348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:30:15.286810   69348 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:30:15.286860   69348 buildroot.go:174] setting up certificates
	I0704 00:30:15.286878   69348 provision.go:84] configureAuth start
	I0704 00:30:15.286906   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetMachineName
	I0704 00:30:15.287255   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetIP
	I0704 00:30:15.290535   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:15.290927   69348 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:30:05 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:30:15.290971   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:15.291242   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:30:15.293882   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:15.294339   69348 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:30:05 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:30:15.294373   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:15.294583   69348 provision.go:143] copyHostCerts
	I0704 00:30:15.294628   69348 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:30:15.294636   69348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:30:15.294713   69348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:30:15.294809   69348 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:30:15.294817   69348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:30:15.294841   69348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:30:15.294902   69348 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:30:15.294908   69348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:30:15.294928   69348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:30:15.294984   69348 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.newest-cni-791847 san=[127.0.0.1 192.168.72.71 localhost minikube newest-cni-791847]
	I0704 00:30:15.546062   69348 provision.go:177] copyRemoteCerts
	I0704 00:30:15.546120   69348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:30:15.546141   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:30:15.548994   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:15.549328   69348 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:30:05 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:30:15.549360   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:15.549577   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHPort
	I0704 00:30:15.549807   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:30:15.549987   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHUsername
	I0704 00:30:15.550177   69348 sshutil.go:53] new ssh client: &{IP:192.168.72.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847/id_rsa Username:docker}
	I0704 00:30:15.636219   69348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:30:15.663242   69348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0704 00:30:15.692494   69348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0704 00:30:15.720439   69348 provision.go:87] duration metric: took 433.542195ms to configureAuth
	I0704 00:30:15.720482   69348 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:30:15.720708   69348 config.go:182] Loaded profile config "newest-cni-791847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:30:15.720828   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:30:15.723655   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:15.723994   69348 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:30:05 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:30:15.724029   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:15.724215   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHPort
	I0704 00:30:15.724446   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:30:15.724576   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:30:15.724719   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHUsername
	I0704 00:30:15.724880   69348 main.go:141] libmachine: Using SSH client type: native
	I0704 00:30:15.725042   69348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.71 22 <nil> <nil>}
	I0704 00:30:15.725056   69348 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:30:16.015320   69348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:30:16.015344   69348 main.go:141] libmachine: Checking connection to Docker...
	I0704 00:30:16.015366   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetURL
	I0704 00:30:16.017029   69348 main.go:141] libmachine: (newest-cni-791847) DBG | Using libvirt version 6000000
	I0704 00:30:16.019611   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:16.020043   69348 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:30:05 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:30:16.020075   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:16.020370   69348 main.go:141] libmachine: Docker is up and running!
	I0704 00:30:16.020391   69348 main.go:141] libmachine: Reticulating splines...
	I0704 00:30:16.020399   69348 client.go:171] duration metric: took 24.81946236s to LocalClient.Create
	I0704 00:30:16.020420   69348 start.go:167] duration metric: took 24.819520841s to libmachine.API.Create "newest-cni-791847"
	I0704 00:30:16.020427   69348 start.go:293] postStartSetup for "newest-cni-791847" (driver="kvm2")
	I0704 00:30:16.020439   69348 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:30:16.020459   69348 main.go:141] libmachine: (newest-cni-791847) Calling .DriverName
	I0704 00:30:16.020661   69348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:30:16.020684   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:30:16.023119   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:16.023463   69348 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:30:05 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:30:16.023489   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:16.023702   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHPort
	I0704 00:30:16.023939   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:30:16.024142   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHUsername
	I0704 00:30:16.024372   69348 sshutil.go:53] new ssh client: &{IP:192.168.72.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847/id_rsa Username:docker}
	I0704 00:30:16.108538   69348 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:30:16.113889   69348 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:30:16.113919   69348 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:30:16.113991   69348 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:30:16.114129   69348 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:30:16.114264   69348 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:30:16.125386   69348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:30:16.153190   69348 start.go:296] duration metric: took 132.746984ms for postStartSetup
	I0704 00:30:16.153244   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetConfigRaw
	I0704 00:30:16.153948   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetIP
	I0704 00:30:16.156441   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:16.156759   69348 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:30:05 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:30:16.156792   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:16.157040   69348 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/config.json ...
	I0704 00:30:16.157279   69348 start.go:128] duration metric: took 24.975559193s to createHost
	I0704 00:30:16.157306   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:30:16.159996   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:16.160337   69348 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:30:05 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:30:16.160368   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:16.160492   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHPort
	I0704 00:30:16.160695   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:30:16.160857   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:30:16.161036   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHUsername
	I0704 00:30:16.161227   69348 main.go:141] libmachine: Using SSH client type: native
	I0704 00:30:16.161441   69348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.71 22 <nil> <nil>}
	I0704 00:30:16.161452   69348 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:30:16.273112   69348 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720053016.250498803
	
	I0704 00:30:16.273137   69348 fix.go:216] guest clock: 1720053016.250498803
	I0704 00:30:16.273147   69348 fix.go:229] Guest: 2024-07-04 00:30:16.250498803 +0000 UTC Remote: 2024-07-04 00:30:16.15729147 +0000 UTC m=+25.088302307 (delta=93.207333ms)
	I0704 00:30:16.273204   69348 fix.go:200] guest clock delta is within tolerance: 93.207333ms
	I0704 00:30:16.273215   69348 start.go:83] releasing machines lock for "newest-cni-791847", held for 25.091571642s
	I0704 00:30:16.273243   69348 main.go:141] libmachine: (newest-cni-791847) Calling .DriverName
	I0704 00:30:16.273559   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetIP
	I0704 00:30:16.276701   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:16.277167   69348 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:30:05 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:30:16.277226   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:16.277392   69348 main.go:141] libmachine: (newest-cni-791847) Calling .DriverName
	I0704 00:30:16.277939   69348 main.go:141] libmachine: (newest-cni-791847) Calling .DriverName
	I0704 00:30:16.278182   69348 main.go:141] libmachine: (newest-cni-791847) Calling .DriverName
	I0704 00:30:16.278245   69348 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:30:16.278313   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:30:16.278447   69348 ssh_runner.go:195] Run: cat /version.json
	I0704 00:30:16.278472   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHHostname
	I0704 00:30:16.281231   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:16.281258   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:16.281624   69348 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:30:05 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:30:16.281661   69348 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:30:05 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:30:16.281684   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:16.281718   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:16.281943   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHPort
	I0704 00:30:16.282159   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHPort
	I0704 00:30:16.282194   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:30:16.282379   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHKeyPath
	I0704 00:30:16.282438   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHUsername
	I0704 00:30:16.282594   69348 sshutil.go:53] new ssh client: &{IP:192.168.72.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847/id_rsa Username:docker}
	I0704 00:30:16.282608   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetSSHUsername
	I0704 00:30:16.282742   69348 sshutil.go:53] new ssh client: &{IP:192.168.72.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/newest-cni-791847/id_rsa Username:docker}
	I0704 00:30:16.362048   69348 ssh_runner.go:195] Run: systemctl --version
	I0704 00:30:16.399405   69348 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:30:16.562909   69348 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:30:16.570103   69348 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:30:16.570172   69348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:30:16.589021   69348 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:30:16.589050   69348 start.go:494] detecting cgroup driver to use...
	I0704 00:30:16.589129   69348 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:30:16.608633   69348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:30:16.625301   69348 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:30:16.625351   69348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:30:16.641856   69348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:30:16.657846   69348 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:30:16.780485   69348 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:30:16.923108   69348 docker.go:233] disabling docker service ...
	I0704 00:30:16.923171   69348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:30:16.938560   69348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:30:16.953139   69348 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:30:17.092299   69348 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:30:17.213933   69348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:30:17.228063   69348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:30:17.249431   69348 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:30:17.249481   69348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:30:17.263827   69348 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:30:17.263920   69348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:30:17.276632   69348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:30:17.289066   69348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:30:17.300564   69348 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:30:17.313509   69348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:30:17.327122   69348 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:30:17.347800   69348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
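	The sed commands above rewrite the CRI-O drop-in config in place. As an illustrative sketch only (this exact file is not captured in the log, and the section names assume the stock CRI-O config layout shipped in the minikube ISO), /etc/crio/crio.conf.d/02-crio.conf would end up containing roughly:
	
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.9"
	
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]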
	I0704 00:30:17.359738   69348 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:30:17.370407   69348 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:30:17.370484   69348 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:30:17.386153   69348 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:30:17.397472   69348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:30:17.525348   69348 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:30:17.675371   69348 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:30:17.675453   69348 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:30:17.681158   69348 start.go:562] Will wait 60s for crictl version
	I0704 00:30:17.681529   69348 ssh_runner.go:195] Run: which crictl
	I0704 00:30:17.686176   69348 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:30:17.730917   69348 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:30:17.731000   69348 ssh_runner.go:195] Run: crio --version
	I0704 00:30:17.762445   69348 ssh_runner.go:195] Run: crio --version
	I0704 00:30:17.801266   69348 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:30:17.802917   69348 main.go:141] libmachine: (newest-cni-791847) Calling .GetIP
	I0704 00:30:17.805843   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:17.806214   69348 main.go:141] libmachine: (newest-cni-791847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:d7:95", ip: ""} in network mk-newest-cni-791847: {Iface:virbr4 ExpiryTime:2024-07-04 01:30:05 +0000 UTC Type:0 Mac:52:54:00:85:d7:95 Iaid: IPaddr:192.168.72.71 Prefix:24 Hostname:newest-cni-791847 Clientid:01:52:54:00:85:d7:95}
	I0704 00:30:17.806243   69348 main.go:141] libmachine: (newest-cni-791847) DBG | domain newest-cni-791847 has defined IP address 192.168.72.71 and MAC address 52:54:00:85:d7:95 in network mk-newest-cni-791847
	I0704 00:30:17.806541   69348 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0704 00:30:17.811159   69348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:30:17.826634   69348 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0704 00:30:17.828044   69348 kubeadm.go:877] updating cluster {Name:newest-cni-791847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-791847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.71 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:30:17.828162   69348 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:30:17.828227   69348 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:30:17.863394   69348 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:30:17.863468   69348 ssh_runner.go:195] Run: which lz4
	I0704 00:30:17.867841   69348 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0704 00:30:17.872880   69348 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:30:17.872929   69348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0704 00:30:19.472004   69348 crio.go:462] duration metric: took 1.604211476s to copy over tarball
	I0704 00:30:19.472073   69348 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:30:21.835706   69348 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.36360138s)
	I0704 00:30:21.835737   69348 crio.go:469] duration metric: took 2.363708192s to extract the tarball
	I0704 00:30:21.835745   69348 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:30:21.878741   69348 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:30:21.929887   69348 crio.go:514] all images are preloaded for cri-o runtime.
	I0704 00:30:21.929924   69348 cache_images.go:84] Images are preloaded, skipping loading
	I0704 00:30:21.929934   69348 kubeadm.go:928] updating node { 192.168.72.71 8443 v1.30.2 crio true true} ...
	I0704 00:30:21.930081   69348 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-791847 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:newest-cni-791847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:30:21.930162   69348 ssh_runner.go:195] Run: crio config
	I0704 00:30:21.986502   69348 cni.go:84] Creating CNI manager for ""
	I0704 00:30:21.986528   69348 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:30:21.986540   69348 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0704 00:30:21.986588   69348 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.71 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-791847 NodeName:newest-cni-791847 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:30:21.986766   69348 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-791847"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:30:21.986837   69348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:30:21.998280   69348 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:30:21.998358   69348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:30:22.009702   69348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0704 00:30:22.029332   69348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:30:22.048722   69348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
	I0704 00:30:22.067570   69348 ssh_runner.go:195] Run: grep 192.168.72.71	control-plane.minikube.internal$ /etc/hosts
	I0704 00:30:22.071846   69348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:30:22.085960   69348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:30:22.223869   69348 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:30:22.252675   69348 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847 for IP: 192.168.72.71
	I0704 00:30:22.252696   69348 certs.go:194] generating shared ca certs ...
	I0704 00:30:22.252712   69348 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:30:22.252908   69348 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:30:22.252975   69348 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:30:22.252989   69348 certs.go:256] generating profile certs ...
	I0704 00:30:22.253046   69348 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/client.key
	I0704 00:30:22.253059   69348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/client.crt with IP's: []
	I0704 00:30:22.510916   69348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/client.crt ...
	I0704 00:30:22.510942   69348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/client.crt: {Name:mk8a9fc67c98612766c5d1f98d054f289dda3a55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:30:22.511100   69348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/client.key ...
	I0704 00:30:22.511110   69348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/client.key: {Name:mk739d247bbd6d6e13b6d143fc7a131f480abc17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:30:22.511188   69348 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/apiserver.key.eb601083
	I0704 00:30:22.511203   69348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/apiserver.crt.eb601083 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.71]
	I0704 00:30:22.582864   69348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/apiserver.crt.eb601083 ...
	I0704 00:30:22.582898   69348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/apiserver.crt.eb601083: {Name:mke342238e95a9d8695d559fa0508eab8823c577 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:30:22.583069   69348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/apiserver.key.eb601083 ...
	I0704 00:30:22.583085   69348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/apiserver.key.eb601083: {Name:mk58484f02f414135a462fe018cca7135c8af784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:30:22.583179   69348 certs.go:381] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/apiserver.crt.eb601083 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/apiserver.crt
	I0704 00:30:22.583277   69348 certs.go:385] copying /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/apiserver.key.eb601083 -> /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/apiserver.key
	I0704 00:30:22.583361   69348 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/proxy-client.key
	I0704 00:30:22.583383   69348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/proxy-client.crt with IP's: []
	I0704 00:30:22.816645   69348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/proxy-client.crt ...
	I0704 00:30:22.816679   69348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/proxy-client.crt: {Name:mk8ec0ab0e02be2ca1042fba2ce189cd3402f25c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:30:22.816855   69348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/proxy-client.key ...
	I0704 00:30:22.816867   69348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/proxy-client.key: {Name:mk0095843a52d65f5ef516ebf3044c2c925d0597 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:30:22.817050   69348 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:30:22.817088   69348 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:30:22.817103   69348 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:30:22.817125   69348 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:30:22.817145   69348 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:30:22.817165   69348 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:30:22.817200   69348 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:30:22.817836   69348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:30:22.854225   69348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:30:22.887754   69348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:30:22.916562   69348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:30:22.946823   69348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0704 00:30:22.977536   69348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:30:23.005691   69348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:30:23.035290   69348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/newest-cni-791847/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0704 00:30:23.066087   69348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:30:23.108060   69348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:30:23.138887   69348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:30:23.168494   69348 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:30:23.187809   69348 ssh_runner.go:195] Run: openssl version
	I0704 00:30:23.194164   69348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:30:23.207057   69348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:30:23.212493   69348 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:30:23.212564   69348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:30:23.220098   69348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:30:23.233050   69348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:30:23.246196   69348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:30:23.251539   69348 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:30:23.251618   69348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:30:23.258097   69348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:30:23.270812   69348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:30:23.284179   69348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:30:23.289658   69348 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:30:23.289733   69348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:30:23.296324   69348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:30:23.309210   69348 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:30:23.314029   69348 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0704 00:30:23.314103   69348 kubeadm.go:391] StartCluster: {Name:newest-cni-791847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-791847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.71 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:30:23.314221   69348 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:30:23.314304   69348 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:30:23.368861   69348 cri.go:89] found id: ""
	I0704 00:30:23.368928   69348 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0704 00:30:23.380160   69348 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:30:23.392362   69348 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:30:23.404062   69348 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:30:23.404084   69348 kubeadm.go:156] found existing configuration files:
	
	I0704 00:30:23.404136   69348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:30:23.414755   69348 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:30:23.414814   69348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:30:23.426382   69348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:30:23.438091   69348 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:30:23.438173   69348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:30:23.449047   69348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:30:23.459526   69348 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:30:23.459587   69348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:30:23.470667   69348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:30:23.483023   69348 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:30:23.483114   69348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:30:23.495858   69348 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:30:23.783129   69348 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:30:34.133452   69348 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0704 00:30:34.133520   69348 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:30:34.133609   69348 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:30:34.133749   69348 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:30:34.133846   69348 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:30:34.133917   69348 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:30:34.135816   69348 out.go:204]   - Generating certificates and keys ...
	I0704 00:30:34.135949   69348 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:30:34.136053   69348 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:30:34.136170   69348 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0704 00:30:34.136258   69348 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0704 00:30:34.136362   69348 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0704 00:30:34.136438   69348 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0704 00:30:34.136533   69348 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0704 00:30:34.136688   69348 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-791847] and IPs [192.168.72.71 127.0.0.1 ::1]
	I0704 00:30:34.136740   69348 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0704 00:30:34.136848   69348 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-791847] and IPs [192.168.72.71 127.0.0.1 ::1]
	I0704 00:30:34.136909   69348 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0704 00:30:34.136965   69348 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0704 00:30:34.137016   69348 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0704 00:30:34.137083   69348 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:30:34.137150   69348 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:30:34.137242   69348 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0704 00:30:34.137319   69348 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:30:34.137425   69348 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:30:34.137516   69348 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:30:34.137617   69348 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:30:34.137694   69348 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:30:34.139327   69348 out.go:204]   - Booting up control plane ...
	I0704 00:30:34.139433   69348 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:30:34.139543   69348 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:30:34.139625   69348 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:30:34.139794   69348 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:30:34.139940   69348 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:30:34.139997   69348 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:30:34.140151   69348 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0704 00:30:34.140251   69348 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0704 00:30:34.140307   69348 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.062708ms
	I0704 00:30:34.140388   69348 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0704 00:30:34.140474   69348 kubeadm.go:309] [api-check] The API server is healthy after 5.502844395s
	I0704 00:30:34.140621   69348 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0704 00:30:34.140767   69348 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0704 00:30:34.140840   69348 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0704 00:30:34.141069   69348 kubeadm.go:309] [mark-control-plane] Marking the node newest-cni-791847 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0704 00:30:34.141147   69348 kubeadm.go:309] [bootstrap-token] Using token: e0wuep.ehqi76demsi6vsj8
	I0704 00:30:34.142686   69348 out.go:204]   - Configuring RBAC rules ...
	I0704 00:30:34.142809   69348 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0704 00:30:34.142883   69348 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0704 00:30:34.143048   69348 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0704 00:30:34.143203   69348 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0704 00:30:34.143335   69348 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0704 00:30:34.143410   69348 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0704 00:30:34.143525   69348 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0704 00:30:34.143591   69348 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0704 00:30:34.143631   69348 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0704 00:30:34.143637   69348 kubeadm.go:309] 
	I0704 00:30:34.143720   69348 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0704 00:30:34.143739   69348 kubeadm.go:309] 
	I0704 00:30:34.143860   69348 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0704 00:30:34.143871   69348 kubeadm.go:309] 
	I0704 00:30:34.143924   69348 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0704 00:30:34.144008   69348 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0704 00:30:34.144075   69348 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0704 00:30:34.144084   69348 kubeadm.go:309] 
	I0704 00:30:34.144154   69348 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0704 00:30:34.144164   69348 kubeadm.go:309] 
	I0704 00:30:34.144233   69348 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0704 00:30:34.144246   69348 kubeadm.go:309] 
	I0704 00:30:34.144306   69348 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0704 00:30:34.144373   69348 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0704 00:30:34.144430   69348 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0704 00:30:34.144438   69348 kubeadm.go:309] 
	I0704 00:30:34.144510   69348 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0704 00:30:34.144592   69348 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0704 00:30:34.144603   69348 kubeadm.go:309] 
	I0704 00:30:34.144666   69348 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token e0wuep.ehqi76demsi6vsj8 \
	I0704 00:30:34.144766   69348 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 \
	I0704 00:30:34.144799   69348 kubeadm.go:309] 	--control-plane 
	I0704 00:30:34.144803   69348 kubeadm.go:309] 
	I0704 00:30:34.144876   69348 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0704 00:30:34.144885   69348 kubeadm.go:309] 
	I0704 00:30:34.144983   69348 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token e0wuep.ehqi76demsi6vsj8 \
	I0704 00:30:34.145088   69348 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 
	I0704 00:30:34.145114   69348 cni.go:84] Creating CNI manager for ""
	I0704 00:30:34.145128   69348 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:30:34.146762   69348 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
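Note: "Configuring bridge CNI" above means minikube drops a standard CNI conflist under /etc/cni/net.d for CRI-O to pick up. The sketch below is only illustrative of what such a conflist typically contains and where it lands, assuming the stock bridge and portmap plugins; the subnet mirrors this run's kubeadm pod-network-cidr extra option (10.42.0.0/16), while the file name and the remaining field values are assumptions rather than the exact file this run generated.

package main

import (
	"log"
	"os"
)

// Illustrative bridge CNI conflist; values other than the subnet are assumptions.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.42.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// CRI-O reads CNI configs from /etc/cni/net.d; writing there requires root.
	if err := os.WriteFile("/etc/cni/net.d/1-bridge.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}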
	
	
	==> CRI-O <==
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.305056938Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720053035305027896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e5c93d2-27cb-4e2c-872a-59528f59b778 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.305738756Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3752a84-29e9-4b24-ab48-30d2608bb1cd name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.305811270Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3752a84-29e9-4b24-ab48-30d2608bb1cd name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.306090969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:371d42757ac36ff0a39a40001774bf32a4115011cd70c197c6f71236f005ede1,PodSandboxId:f5c17ce5c643ff57ae3fe018cbe9feecb02b3889c4d51b1ff508790ea6fb56d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720052216105381996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cxq59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7d3a64b-8d7d-455c-990c-0e496f8cf461,},Annotations:map[string]string{io.kubernetes.container.hash: 71137ca2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480bfbc294ac72f0f3e25c3a311d5e8a9fe68947e44dd051ddfa2072440b746b,PodSandboxId:00f3815cdb9f3e1bcfd9d2c5f4422a136e56a8cfc56f024dd1f6ef01a957fbeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720052216069423465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3ab51a52-571c-4533-86d2-7293368ac2ee,},Annotations:map[string]string{io.kubernetes.container.hash: 343089df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889c5e0513c8f01cc594fefe4055db41826b29d898c94b1955cc0b3c983afe3d,PodSandboxId:8e763d22e756237dd508186131e8abad0552c2a06c800defaeeebd7554595c02,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1720052215483738332,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2ab9324-5df0-4232-aef4-be29bfc4c082,},Annotations:map[string]string{io.kubernetes.container.hash: a3e4758e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7e00135c8477fc491f1313c947f9a9ee937ea34895d5a7ac15f5ff47efa7a0,PodSandboxId:b1e9e2f510050da8e1af01d2081a433bbf4b1b82098620695a241b12ffda4149,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:
1720052214489433899,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xxfrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b1b3ed-9c18-4fae-bf43-5da22cf90f6b,},Annotations:map[string]string{io.kubernetes.container.hash: eb45e4c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9cbb6b523ab4a602415e29b81bb0bf151695eb30de1766e56d97f66f8a0035,PodSandboxId:117db7ef072f5cce883883b304f5c4dc6df84e85f44f054155175377acf16091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720052193793394538,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9dbd84be584645d7d7cbf56ca9e1fd3,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed6771156740826c8a68cf164f11c6b55eb62ee724278a38f2223b7e1c6a60e9,PodSandboxId:6b88f294de11b6e2cb2a1839426bce06f550a8a57a961b48b7f89d97227cf920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720052193749626581,Labels:map[str
ing]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87138dc334b907dd15d64d032a857ef7,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3fcbe487cac06135e867a57d1f709ba605d918a3f2fc6847def560544757352,PodSandboxId:44c290d680031bb3b48a8aa3106904230c559e1832817ae895807a702d185816,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720052193729827689,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fc9f6aeac4a8fa13272a598076b772,},Annotations:map[string]string{io.kubernetes.container.hash: b7c2fdd6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5a2a6c13e28d8519a7ba32ea530fe4c3ff1182cfce22be0e8acf1db3d1d7b7,PodSandboxId:1045b0af6be22039b5ece99d8f791a586abefe4b3a4f1285df6a5ef21c13aa79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720052193657751317,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f96666f6162c21b4d226c2216289e21,},Annotations:map[string]string{io.kubernetes.container.hash: 59fbd90a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df936d7e00e9322e90a0b31f31414a672951454f67442a8c77f4a7a5a798815,PodSandboxId:d8988b536bf328141796c143795604a7b72126c67ae626c4370697661fe75866,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051901526112497,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f96666f6162c21b4d226c2216289e21,},Annotations:map[string]string{io.kubernetes.container.hash: 59fbd90a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3752a84-29e9-4b24-ab48-30d2608bb1cd name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.346383891Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26198f19-b1fb-472c-8319-1245de342272 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.346474253Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26198f19-b1fb-472c-8319-1245de342272 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.347570592Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=26c1f83c-e2b9-474f-9b00-06c019a97f76 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.347985418Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720053035347958054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26c1f83c-e2b9-474f-9b00-06c019a97f76 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.348505697Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=589d9a8e-ee70-4cc6-b7b1-ae9877221fa9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.348577127Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=589d9a8e-ee70-4cc6-b7b1-ae9877221fa9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.348917462Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:371d42757ac36ff0a39a40001774bf32a4115011cd70c197c6f71236f005ede1,PodSandboxId:f5c17ce5c643ff57ae3fe018cbe9feecb02b3889c4d51b1ff508790ea6fb56d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720052216105381996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cxq59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7d3a64b-8d7d-455c-990c-0e496f8cf461,},Annotations:map[string]string{io.kubernetes.container.hash: 71137ca2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480bfbc294ac72f0f3e25c3a311d5e8a9fe68947e44dd051ddfa2072440b746b,PodSandboxId:00f3815cdb9f3e1bcfd9d2c5f4422a136e56a8cfc56f024dd1f6ef01a957fbeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720052216069423465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3ab51a52-571c-4533-86d2-7293368ac2ee,},Annotations:map[string]string{io.kubernetes.container.hash: 343089df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889c5e0513c8f01cc594fefe4055db41826b29d898c94b1955cc0b3c983afe3d,PodSandboxId:8e763d22e756237dd508186131e8abad0552c2a06c800defaeeebd7554595c02,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1720052215483738332,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2ab9324-5df0-4232-aef4-be29bfc4c082,},Annotations:map[string]string{io.kubernetes.container.hash: a3e4758e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7e00135c8477fc491f1313c947f9a9ee937ea34895d5a7ac15f5ff47efa7a0,PodSandboxId:b1e9e2f510050da8e1af01d2081a433bbf4b1b82098620695a241b12ffda4149,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:
1720052214489433899,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xxfrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b1b3ed-9c18-4fae-bf43-5da22cf90f6b,},Annotations:map[string]string{io.kubernetes.container.hash: eb45e4c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9cbb6b523ab4a602415e29b81bb0bf151695eb30de1766e56d97f66f8a0035,PodSandboxId:117db7ef072f5cce883883b304f5c4dc6df84e85f44f054155175377acf16091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720052193793394538,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9dbd84be584645d7d7cbf56ca9e1fd3,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed6771156740826c8a68cf164f11c6b55eb62ee724278a38f2223b7e1c6a60e9,PodSandboxId:6b88f294de11b6e2cb2a1839426bce06f550a8a57a961b48b7f89d97227cf920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720052193749626581,Labels:map[str
ing]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87138dc334b907dd15d64d032a857ef7,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3fcbe487cac06135e867a57d1f709ba605d918a3f2fc6847def560544757352,PodSandboxId:44c290d680031bb3b48a8aa3106904230c559e1832817ae895807a702d185816,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720052193729827689,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fc9f6aeac4a8fa13272a598076b772,},Annotations:map[string]string{io.kubernetes.container.hash: b7c2fdd6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5a2a6c13e28d8519a7ba32ea530fe4c3ff1182cfce22be0e8acf1db3d1d7b7,PodSandboxId:1045b0af6be22039b5ece99d8f791a586abefe4b3a4f1285df6a5ef21c13aa79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720052193657751317,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f96666f6162c21b4d226c2216289e21,},Annotations:map[string]string{io.kubernetes.container.hash: 59fbd90a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df936d7e00e9322e90a0b31f31414a672951454f67442a8c77f4a7a5a798815,PodSandboxId:d8988b536bf328141796c143795604a7b72126c67ae626c4370697661fe75866,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051901526112497,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f96666f6162c21b4d226c2216289e21,},Annotations:map[string]string{io.kubernetes.container.hash: 59fbd90a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=589d9a8e-ee70-4cc6-b7b1-ae9877221fa9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.395183752Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ced960de-6d16-48e4-b143-32940ee72a61 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.395283259Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ced960de-6d16-48e4-b143-32940ee72a61 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.396979579Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86122c18-f1ef-40a1-88be-eb937f5efb96 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.397549557Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720053035397430626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86122c18-f1ef-40a1-88be-eb937f5efb96 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.398539309Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a3d291e-e8c3-4fe0-8212-af38e815ae58 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.398614047Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a3d291e-e8c3-4fe0-8212-af38e815ae58 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.399164757Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:371d42757ac36ff0a39a40001774bf32a4115011cd70c197c6f71236f005ede1,PodSandboxId:f5c17ce5c643ff57ae3fe018cbe9feecb02b3889c4d51b1ff508790ea6fb56d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720052216105381996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cxq59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7d3a64b-8d7d-455c-990c-0e496f8cf461,},Annotations:map[string]string{io.kubernetes.container.hash: 71137ca2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480bfbc294ac72f0f3e25c3a311d5e8a9fe68947e44dd051ddfa2072440b746b,PodSandboxId:00f3815cdb9f3e1bcfd9d2c5f4422a136e56a8cfc56f024dd1f6ef01a957fbeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720052216069423465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3ab51a52-571c-4533-86d2-7293368ac2ee,},Annotations:map[string]string{io.kubernetes.container.hash: 343089df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889c5e0513c8f01cc594fefe4055db41826b29d898c94b1955cc0b3c983afe3d,PodSandboxId:8e763d22e756237dd508186131e8abad0552c2a06c800defaeeebd7554595c02,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1720052215483738332,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2ab9324-5df0-4232-aef4-be29bfc4c082,},Annotations:map[string]string{io.kubernetes.container.hash: a3e4758e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7e00135c8477fc491f1313c947f9a9ee937ea34895d5a7ac15f5ff47efa7a0,PodSandboxId:b1e9e2f510050da8e1af01d2081a433bbf4b1b82098620695a241b12ffda4149,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:
1720052214489433899,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xxfrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b1b3ed-9c18-4fae-bf43-5da22cf90f6b,},Annotations:map[string]string{io.kubernetes.container.hash: eb45e4c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9cbb6b523ab4a602415e29b81bb0bf151695eb30de1766e56d97f66f8a0035,PodSandboxId:117db7ef072f5cce883883b304f5c4dc6df84e85f44f054155175377acf16091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720052193793394538,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9dbd84be584645d7d7cbf56ca9e1fd3,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed6771156740826c8a68cf164f11c6b55eb62ee724278a38f2223b7e1c6a60e9,PodSandboxId:6b88f294de11b6e2cb2a1839426bce06f550a8a57a961b48b7f89d97227cf920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720052193749626581,Labels:map[str
ing]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87138dc334b907dd15d64d032a857ef7,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3fcbe487cac06135e867a57d1f709ba605d918a3f2fc6847def560544757352,PodSandboxId:44c290d680031bb3b48a8aa3106904230c559e1832817ae895807a702d185816,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720052193729827689,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fc9f6aeac4a8fa13272a598076b772,},Annotations:map[string]string{io.kubernetes.container.hash: b7c2fdd6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5a2a6c13e28d8519a7ba32ea530fe4c3ff1182cfce22be0e8acf1db3d1d7b7,PodSandboxId:1045b0af6be22039b5ece99d8f791a586abefe4b3a4f1285df6a5ef21c13aa79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720052193657751317,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f96666f6162c21b4d226c2216289e21,},Annotations:map[string]string{io.kubernetes.container.hash: 59fbd90a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df936d7e00e9322e90a0b31f31414a672951454f67442a8c77f4a7a5a798815,PodSandboxId:d8988b536bf328141796c143795604a7b72126c67ae626c4370697661fe75866,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051901526112497,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f96666f6162c21b4d226c2216289e21,},Annotations:map[string]string{io.kubernetes.container.hash: 59fbd90a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a3d291e-e8c3-4fe0-8212-af38e815ae58 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.439132690Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1384f12-eed3-49c7-9472-b35e1c3b3154 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.439230219Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1384f12-eed3-49c7-9472-b35e1c3b3154 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.440686361Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c2fda72-980a-4cfc-acb5-6eeb72e88ad7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.441228946Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720053035441199837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c2fda72-980a-4cfc-acb5-6eeb72e88ad7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.441770083Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78178274-ec77-43cc-b1f4-0493600e67b3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.441827576Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78178274-ec77-43cc-b1f4-0493600e67b3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:30:35 no-preload-317739 crio[728]: time="2024-07-04 00:30:35.442093163Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:371d42757ac36ff0a39a40001774bf32a4115011cd70c197c6f71236f005ede1,PodSandboxId:f5c17ce5c643ff57ae3fe018cbe9feecb02b3889c4d51b1ff508790ea6fb56d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720052216105381996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cxq59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7d3a64b-8d7d-455c-990c-0e496f8cf461,},Annotations:map[string]string{io.kubernetes.container.hash: 71137ca2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480bfbc294ac72f0f3e25c3a311d5e8a9fe68947e44dd051ddfa2072440b746b,PodSandboxId:00f3815cdb9f3e1bcfd9d2c5f4422a136e56a8cfc56f024dd1f6ef01a957fbeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720052216069423465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3ab51a52-571c-4533-86d2-7293368ac2ee,},Annotations:map[string]string{io.kubernetes.container.hash: 343089df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889c5e0513c8f01cc594fefe4055db41826b29d898c94b1955cc0b3c983afe3d,PodSandboxId:8e763d22e756237dd508186131e8abad0552c2a06c800defaeeebd7554595c02,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1720052215483738332,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2ab9324-5df0-4232-aef4-be29bfc4c082,},Annotations:map[string]string{io.kubernetes.container.hash: a3e4758e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7e00135c8477fc491f1313c947f9a9ee937ea34895d5a7ac15f5ff47efa7a0,PodSandboxId:b1e9e2f510050da8e1af01d2081a433bbf4b1b82098620695a241b12ffda4149,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:
1720052214489433899,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xxfrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b1b3ed-9c18-4fae-bf43-5da22cf90f6b,},Annotations:map[string]string{io.kubernetes.container.hash: eb45e4c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9cbb6b523ab4a602415e29b81bb0bf151695eb30de1766e56d97f66f8a0035,PodSandboxId:117db7ef072f5cce883883b304f5c4dc6df84e85f44f054155175377acf16091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720052193793394538,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9dbd84be584645d7d7cbf56ca9e1fd3,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed6771156740826c8a68cf164f11c6b55eb62ee724278a38f2223b7e1c6a60e9,PodSandboxId:6b88f294de11b6e2cb2a1839426bce06f550a8a57a961b48b7f89d97227cf920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720052193749626581,Labels:map[str
ing]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87138dc334b907dd15d64d032a857ef7,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3fcbe487cac06135e867a57d1f709ba605d918a3f2fc6847def560544757352,PodSandboxId:44c290d680031bb3b48a8aa3106904230c559e1832817ae895807a702d185816,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720052193729827689,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fc9f6aeac4a8fa13272a598076b772,},Annotations:map[string]string{io.kubernetes.container.hash: b7c2fdd6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5a2a6c13e28d8519a7ba32ea530fe4c3ff1182cfce22be0e8acf1db3d1d7b7,PodSandboxId:1045b0af6be22039b5ece99d8f791a586abefe4b3a4f1285df6a5ef21c13aa79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720052193657751317,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f96666f6162c21b4d226c2216289e21,},Annotations:map[string]string{io.kubernetes.container.hash: 59fbd90a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df936d7e00e9322e90a0b31f31414a672951454f67442a8c77f4a7a5a798815,PodSandboxId:d8988b536bf328141796c143795604a7b72126c67ae626c4370697661fe75866,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720051901526112497,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-317739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f96666f6162c21b4d226c2216289e21,},Annotations:map[string]string{io.kubernetes.container.hash: 59fbd90a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78178274-ec77-43cc-b1f4-0493600e67b3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	371d42757ac36       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   f5c17ce5c643f       coredns-7db6d8ff4d-cxq59
	480bfbc294ac7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   00f3815cdb9f3       coredns-7db6d8ff4d-qnrtm
	889c5e0513c8f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   8e763d22e7562       storage-provisioner
	2b7e00135c847       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   13 minutes ago      Running             kube-proxy                0                   b1e9e2f510050       kube-proxy-xxfrd
	fa9cbb6b523ab       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   14 minutes ago      Running             kube-scheduler            2                   117db7ef072f5       kube-scheduler-no-preload-317739
	ed67711567408       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   14 minutes ago      Running             kube-controller-manager   2                   6b88f294de11b       kube-controller-manager-no-preload-317739
	c3fcbe487cac0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   14 minutes ago      Running             etcd                      2                   44c290d680031       etcd-no-preload-317739
	3b5a2a6c13e28       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   14 minutes ago      Running             kube-apiserver            2                   1045b0af6be22       kube-apiserver-no-preload-317739
	0df936d7e00e9       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   18 minutes ago      Exited              kube-apiserver            1                   d8988b536bf32       kube-apiserver-no-preload-317739
	
	
	==> coredns [371d42757ac36ff0a39a40001774bf32a4115011cd70c197c6f71236f005ede1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [480bfbc294ac72f0f3e25c3a311d5e8a9fe68947e44dd051ddfa2072440b746b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-317739
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-317739
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=no-preload-317739
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_04T00_16_39_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Jul 2024 00:16:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-317739
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Jul 2024 00:30:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Jul 2024 00:27:10 +0000   Thu, 04 Jul 2024 00:16:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Jul 2024 00:27:10 +0000   Thu, 04 Jul 2024 00:16:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Jul 2024 00:27:10 +0000   Thu, 04 Jul 2024 00:16:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Jul 2024 00:27:10 +0000   Thu, 04 Jul 2024 00:16:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.109
	  Hostname:    no-preload-317739
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfb1cdef80504e9e81cd486f42ed0de7
	  System UUID:                dfb1cdef-8050-4e9e-81cd-486f42ed0de7
	  Boot ID:                    5289cb08-edbf-4259-8b56-94051faf5bf5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-cxq59                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-qnrtm                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-317739                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-no-preload-317739             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-317739    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-xxfrd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-317739             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-569cc877fc-t28ff              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node no-preload-317739 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node no-preload-317739 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node no-preload-317739 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node no-preload-317739 event: Registered Node no-preload-317739 in Controller
	
	
	==> dmesg <==
	[  +0.054330] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042688] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.830672] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.543725] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.426757] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.546439] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.128183] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.201597] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.197764] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.351520] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[ +16.972251] systemd-fstab-generator[1234]: Ignoring "noauto" option for root device
	[  +0.056786] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.874245] systemd-fstab-generator[1357]: Ignoring "noauto" option for root device
	[  +5.656372] kauditd_printk_skb: 100 callbacks suppressed
	[ +11.272056] kauditd_printk_skb: 84 callbacks suppressed
	[Jul 4 00:16] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.201359] kauditd_printk_skb: 1 callbacks suppressed
	[  +1.715801] systemd-fstab-generator[4015]: Ignoring "noauto" option for root device
	[  +6.561461] systemd-fstab-generator[4343]: Ignoring "noauto" option for root device
	[  +0.088091] kauditd_printk_skb: 54 callbacks suppressed
	[ +14.850119] systemd-fstab-generator[4566]: Ignoring "noauto" option for root device
	[  +0.152562] kauditd_printk_skb: 12 callbacks suppressed
	[Jul 4 00:17] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [c3fcbe487cac06135e867a57d1f709ba605d918a3f2fc6847def560544757352] <==
	{"level":"info","ts":"2024-07-04T00:16:34.531608Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"cd3870a7a18a1c08","local-member-attributes":"{Name:no-preload-317739 ClientURLs:[https://192.168.61.109:2379]}","request-path":"/0/members/cd3870a7a18a1c08/attributes","cluster-id":"3a5c5ca57b3a339f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-04T00:16:34.531685Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-04T00:16:34.532251Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-04T00:16:34.533294Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a5c5ca57b3a339f","local-member-id":"cd3870a7a18a1c08","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-04T00:16:34.533459Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-04T00:16:34.533522Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-04T00:16:34.534489Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-04T00:16:34.536828Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.109:2379"}
	{"level":"info","ts":"2024-07-04T00:16:34.537407Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-04T00:16:34.604272Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-04T00:26:34.597279Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":681}
	{"level":"info","ts":"2024-07-04T00:26:34.606506Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":681,"took":"8.772508ms","hash":94187259,"current-db-size-bytes":2166784,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2166784,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-07-04T00:26:34.606572Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":94187259,"revision":681,"compact-revision":-1}
	{"level":"info","ts":"2024-07-04T00:30:23.820136Z","caller":"traceutil/trace.go:171","msg":"trace[1117379443] transaction","detail":"{read_only:false; response_revision:1110; number_of_response:1; }","duration":"324.513659ms","start":"2024-07-04T00:30:23.49557Z","end":"2024-07-04T00:30:23.820084Z","steps":["trace[1117379443] 'process raft request'  (duration: 323.727296ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-04T00:30:23.822067Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-04T00:30:23.495548Z","time spent":"325.210623ms","remote":"127.0.0.1:50552","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-rkdna6sf4v2xxvc5dzeykns73u\" mod_revision:1102 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-rkdna6sf4v2xxvc5dzeykns73u\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-rkdna6sf4v2xxvc5dzeykns73u\" > >"}
	{"level":"warn","ts":"2024-07-04T00:30:24.214053Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.167823ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2020023291260657032 > lease_revoke:<id:1c08907b199b5d40>","response":"size:28"}
	{"level":"info","ts":"2024-07-04T00:30:24.214291Z","caller":"traceutil/trace.go:171","msg":"trace[394202939] linearizableReadLoop","detail":"{readStateIndex:1291; appliedIndex:1290; }","duration":"351.005632ms","start":"2024-07-04T00:30:23.86326Z","end":"2024-07-04T00:30:24.214266Z","steps":["trace[394202939] 'read index received'  (duration: 85.34458ms)","trace[394202939] 'applied index is now lower than readState.Index'  (duration: 265.659589ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-04T00:30:24.214457Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"351.177618ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-04T00:30:24.21452Z","caller":"traceutil/trace.go:171","msg":"trace[1933102929] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1110; }","duration":"351.279826ms","start":"2024-07-04T00:30:23.863223Z","end":"2024-07-04T00:30:24.214503Z","steps":["trace[1933102929] 'agreement among raft nodes before linearized reading'  (duration: 351.179958ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-04T00:30:24.214558Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-04T00:30:23.863205Z","time spent":"351.340876ms","remote":"127.0.0.1:50280","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-07-04T00:30:24.214492Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.629577ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-04T00:30:24.214637Z","caller":"traceutil/trace.go:171","msg":"trace[830388096] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1110; }","duration":"155.777751ms","start":"2024-07-04T00:30:24.058847Z","end":"2024-07-04T00:30:24.214624Z","steps":["trace[830388096] 'agreement among raft nodes before linearized reading'  (duration: 155.608532ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-04T00:30:24.672705Z","caller":"traceutil/trace.go:171","msg":"trace[1881138029] transaction","detail":"{read_only:false; response_revision:1111; number_of_response:1; }","duration":"314.772912ms","start":"2024-07-04T00:30:24.357914Z","end":"2024-07-04T00:30:24.672687Z","steps":["trace[1881138029] 'process raft request'  (duration: 314.297225ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-04T00:30:24.673183Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-04T00:30:24.357896Z","time spent":"315.190476ms","remote":"127.0.0.1:50462","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1109 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-07-04T00:30:24.860875Z","caller":"traceutil/trace.go:171","msg":"trace[554864257] transaction","detail":"{read_only:false; response_revision:1112; number_of_response:1; }","duration":"239.304003ms","start":"2024-07-04T00:30:24.621544Z","end":"2024-07-04T00:30:24.860848Z","steps":["trace[554864257] 'process raft request'  (duration: 151.328719ms)","trace[554864257] 'compare'  (duration: 87.83794ms)"],"step_count":2}
	
	
	==> kernel <==
	 00:30:35 up 19 min,  0 users,  load average: 0.29, 0.16, 0.11
	Linux no-preload-317739 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0df936d7e00e9322e90a0b31f31414a672951454f67442a8c77f4a7a5a798815] <==
	W0704 00:16:28.060128       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.091799       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.171952       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.255051       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.302726       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.305304       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.350681       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.362447       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.396780       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.441209       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.463511       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.498297       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.558402       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.558422       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.637086       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.665206       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.700192       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.784796       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.834037       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.866063       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:28.882702       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:29.024565       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:29.053183       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:29.179422       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0704 00:16:29.381097       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [3b5a2a6c13e28d8519a7ba32ea530fe4c3ff1182cfce22be0e8acf1db3d1d7b7] <==
	I0704 00:24:37.360423       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:26:36.362914       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:26:36.363295       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0704 00:26:37.364295       1 handler_proxy.go:93] no RequestInfo found in the context
	W0704 00:26:37.364295       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:26:37.364610       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0704 00:26:37.364655       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0704 00:26:37.364699       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0704 00:26:37.366656       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:27:37.365648       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:27:37.365819       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0704 00:27:37.365849       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:27:37.367055       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:27:37.367129       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0704 00:27:37.367162       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:29:37.366578       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:29:37.366826       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0704 00:29:37.366839       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0704 00:29:37.367594       1 handler_proxy.go:93] no RequestInfo found in the context
	E0704 00:29:37.367729       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0704 00:29:37.368959       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ed6771156740826c8a68cf164f11c6b55eb62ee724278a38f2223b7e1c6a60e9] <==
	I0704 00:24:53.632512       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:25:23.032939       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:25:23.641195       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:25:53.039569       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:25:53.651109       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:26:23.045813       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:26:23.659591       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:26:53.050772       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:26:53.673533       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:27:23.057020       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:27:23.682369       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:27:53.063679       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:27:53.692197       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0704 00:27:56.247812       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="258.919µs"
	I0704 00:28:07.247796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="59.384µs"
	E0704 00:28:23.069564       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:28:23.701441       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:28:53.076190       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:28:53.711262       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:29:23.080810       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:29:23.719812       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:29:53.088608       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:29:53.731836       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0704 00:30:23.095416       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0704 00:30:23.744600       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [2b7e00135c8477fc491f1313c947f9a9ee937ea34895d5a7ac15f5ff47efa7a0] <==
	I0704 00:16:55.002890       1 server_linux.go:69] "Using iptables proxy"
	I0704 00:16:55.018002       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.109"]
	I0704 00:16:55.179464       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0704 00:16:55.181169       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0704 00:16:55.182480       1 server_linux.go:165] "Using iptables Proxier"
	I0704 00:16:55.197005       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0704 00:16:55.197561       1 server.go:872] "Version info" version="v1.30.2"
	I0704 00:16:55.197622       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0704 00:16:55.199293       1 config.go:192] "Starting service config controller"
	I0704 00:16:55.199423       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0704 00:16:55.199486       1 config.go:101] "Starting endpoint slice config controller"
	I0704 00:16:55.199511       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0704 00:16:55.200422       1 config.go:319] "Starting node config controller"
	I0704 00:16:55.200751       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0704 00:16:55.301430       1 shared_informer.go:320] Caches are synced for node config
	I0704 00:16:55.301458       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0704 00:16:55.301483       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [fa9cbb6b523ab4a602415e29b81bb0bf151695eb30de1766e56d97f66f8a0035] <==
	W0704 00:16:36.492010       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0704 00:16:36.492113       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0704 00:16:36.492304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0704 00:16:36.492496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0704 00:16:36.492580       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0704 00:16:36.493419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0704 00:16:36.493161       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0704 00:16:36.493504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0704 00:16:36.493635       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0704 00:16:36.493646       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0704 00:16:36.495578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0704 00:16:36.495614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0704 00:16:36.495671       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0704 00:16:36.495709       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0704 00:16:36.505844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0704 00:16:36.506044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0704 00:16:37.414809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0704 00:16:37.414855       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0704 00:16:37.472230       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0704 00:16:37.472465       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0704 00:16:37.481029       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0704 00:16:37.481087       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0704 00:16:38.015434       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0704 00:16:38.015485       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0704 00:16:39.967523       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 04 00:27:41 no-preload-317739 kubelet[4350]: E0704 00:27:41.247768    4350 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tts4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-t28ff_kube-system(942f97bf-57cf-46fe-9a10-4a4171357239): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 04 00:27:41 no-preload-317739 kubelet[4350]: E0704 00:27:41.248093    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:27:56 no-preload-317739 kubelet[4350]: E0704 00:27:56.230272    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:28:07 no-preload-317739 kubelet[4350]: E0704 00:28:07.230133    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:28:19 no-preload-317739 kubelet[4350]: E0704 00:28:19.231728    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:28:34 no-preload-317739 kubelet[4350]: E0704 00:28:34.230836    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:28:39 no-preload-317739 kubelet[4350]: E0704 00:28:39.243230    4350 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 04 00:28:39 no-preload-317739 kubelet[4350]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 04 00:28:39 no-preload-317739 kubelet[4350]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 04 00:28:39 no-preload-317739 kubelet[4350]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 04 00:28:39 no-preload-317739 kubelet[4350]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 04 00:28:45 no-preload-317739 kubelet[4350]: E0704 00:28:45.230889    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:29:00 no-preload-317739 kubelet[4350]: E0704 00:29:00.230452    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:29:15 no-preload-317739 kubelet[4350]: E0704 00:29:15.230407    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:29:27 no-preload-317739 kubelet[4350]: E0704 00:29:27.229535    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:29:39 no-preload-317739 kubelet[4350]: E0704 00:29:39.234763    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:29:39 no-preload-317739 kubelet[4350]: E0704 00:29:39.242849    4350 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 04 00:29:39 no-preload-317739 kubelet[4350]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 04 00:29:39 no-preload-317739 kubelet[4350]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 04 00:29:39 no-preload-317739 kubelet[4350]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 04 00:29:39 no-preload-317739 kubelet[4350]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 04 00:29:53 no-preload-317739 kubelet[4350]: E0704 00:29:53.233161    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:30:05 no-preload-317739 kubelet[4350]: E0704 00:30:05.230211    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:30:18 no-preload-317739 kubelet[4350]: E0704 00:30:18.230917    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	Jul 04 00:30:32 no-preload-317739 kubelet[4350]: E0704 00:30:32.230730    4350 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-t28ff" podUID="942f97bf-57cf-46fe-9a10-4a4171357239"
	
	
	==> storage-provisioner [889c5e0513c8f01cc594fefe4055db41826b29d898c94b1955cc0b3c983afe3d] <==
	I0704 00:16:55.601796       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0704 00:16:55.615242       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0704 00:16:55.615438       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0704 00:16:55.631043       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0704 00:16:55.631222       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-317739_871dde78-cd1d-462e-b53b-8b0324e802e6!
	I0704 00:16:55.631805       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"313ad966-e564-4b85-8ab5-68cd73b1d89f", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-317739_871dde78-cd1d-462e-b53b-8b0324e802e6 became leader
	I0704 00:16:55.731494       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-317739_871dde78-cd1d-462e-b53b-8b0324e802e6!
	

                                                
                                                
-- /stdout --
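Two recurring errors stand out in the captured log above. The ip6tables canary failure means the node VM has no ip6tables nat table loaded (a kubelet warning that is typically harmless when the cluster does not use IPv6), and the metrics-server ImagePullBackOff is expected here because the earlier "addons enable metrics-server ... --registries=MetricsServer=fake.domain" step (visible in the Audit table further down) points the image at an unreachable registry. A minimal diagnostic sketch, assuming the no-preload-317739 VM is still up and that the addon carries its usual k8s-app=metrics-server label:

    # try loading the ip6tables nat module; it may not be built into the minikube guest kernel
    out/minikube-linux-amd64 ssh -p no-preload-317739 "sudo modprobe ip6table_nat"
    out/minikube-linux-amd64 ssh -p no-preload-317739 "sudo ip6tables -t nat -L -n"

    # confirm which image the metrics-server pod is trying to pull and why it is backing off
    kubectl --context no-preload-317739 -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
    kubectl --context no-preload-317739 -n kube-system describe pods -l k8s-app=metrics-server | grep -A5 "Events:"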
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-317739 -n no-preload-317739
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-317739 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-t28ff
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-317739 describe pod metrics-server-569cc877fc-t28ff
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-317739 describe pod metrics-server-569cc877fc-t28ff: exit status 1 (64.614107ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-t28ff" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-317739 describe pod metrics-server-569cc877fc-t28ff: exit status 1
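The describe in the post-mortem above most likely returns NotFound because no namespace is given: the non-running pod list is gathered across all namespaces, but the follow-up describe runs against the default namespace, while the kubelet log shows the pod living in kube-system. A short sketch of the namespace-qualified lookup (a hypothetical follow-up, not part of the test harness):

    # describe the pod in its actual namespace (kube-system, per the kubelet log above)
    kubectl --context no-preload-317739 -n kube-system describe pod metrics-server-569cc877fc-t28ff
    # or locate it first without guessing the namespace
    kubectl --context no-preload-317739 get pod -A --field-selector=metadata.name=metrics-server-569cc877fc-t28ff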
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (273.59s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (117.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
[... the identical helpers_test.go:329 WARNING repeats verbatim 64 more times while the apiserver at 192.168.72.59:8443 keeps refusing connections ...]
E0704 00:28:57.358077   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.59:8443: connect: connection refused
[... the identical WARNING repeats verbatim 48 more times until the 9m0s wait expires ...]
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-979033 -n old-k8s-version-979033
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-979033 -n old-k8s-version-979033: exit status 2 (228.864052ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-979033" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-979033 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-979033 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.467µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-979033 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
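Both failures above trace back to the apiserver on old-k8s-version-979033 never coming back after the stop: every poll of https://192.168.72.59:8443 is refused, and the status checks in this section report the apiserver as Stopped (with the host reported as Running in the post-mortem just below), so the 9m0s dashboard wait and the follow-up deployment check both time out. A hedged sketch of the manual follow-up once the profile is reachable again; the dashboard-metrics-scraper name and the expected registry.k8s.io/echoserver:1.4 image are taken from the test output above:

    # confirm what minikube thinks is running for this profile
    out/minikube-linux-amd64 status -p old-k8s-version-979033
    # surface known problems (apiserver, container runtime) from the node logs
    out/minikube-linux-amd64 -p old-k8s-version-979033 logs --problems
    # once 192.168.72.59:8443 answers, repeat the two checks the test attempts
    kubectl --context old-k8s-version-979033 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context old-k8s-version-979033 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'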
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-979033 -n old-k8s-version-979033
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-979033 -n old-k8s-version-979033: exit status 2 (233.847867ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-979033 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-979033 logs -n 25: (1.760641942s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-768841 -- sudo                         | cert-options-768841          | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:00 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-768841                                 | cert-options-768841          | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:00 UTC |
	| start   | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:00 UTC | 04 Jul 24 00:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-652205                           | kubernetes-upgrade-652205    | jenkins | v1.33.1 | 04 Jul 24 00:01 UTC | 04 Jul 24 00:01 UTC |
	| start   | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:01 UTC | 04 Jul 24 00:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-979438                              | cert-expiration-979438       | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-979438                              | cert-expiration-979438       | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-029653 | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | disable-driver-mounts-029653                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:04 UTC |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-317739             | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC | 04 Jul 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-687975            | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:03 UTC | 04 Jul 24 00:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-995404  | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC | 04 Jul 24 00:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC |                     |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-979033        | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:04 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-317739                  | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-317739                                   | no-preload-317739            | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC | 04 Jul 24 00:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-687975                 | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-687975                                  | embed-certs-687975           | jenkins | v1.33.1 | 04 Jul 24 00:05 UTC | 04 Jul 24 00:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-979033                              | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC | 04 Jul 24 00:06 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-979033             | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC | 04 Jul 24 00:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-979033                              | old-k8s-version-979033       | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-995404       | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-995404 | jenkins | v1.33.1 | 04 Jul 24 00:07 UTC | 04 Jul 24 00:15 UTC |
	|         | default-k8s-diff-port-995404                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/04 00:07:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0704 00:07:02.474140   62905 out.go:291] Setting OutFile to fd 1 ...
	I0704 00:07:02.474416   62905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:07:02.474427   62905 out.go:304] Setting ErrFile to fd 2...
	I0704 00:07:02.474431   62905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0704 00:07:02.474642   62905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0704 00:07:02.475219   62905 out.go:298] Setting JSON to false
	I0704 00:07:02.476307   62905 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6562,"bootTime":1720045060,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0704 00:07:02.476381   62905 start.go:139] virtualization: kvm guest
	I0704 00:07:02.478637   62905 out.go:177] * [default-k8s-diff-port-995404] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0704 00:07:02.480018   62905 notify.go:220] Checking for updates...
	I0704 00:07:02.480039   62905 out.go:177]   - MINIKUBE_LOCATION=18998
	I0704 00:07:02.481260   62905 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0704 00:07:02.482587   62905 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:07:02.483820   62905 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0704 00:07:02.484969   62905 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0704 00:07:02.486122   62905 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0704 00:07:02.487811   62905 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:07:02.488453   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:07:02.488538   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:07:02.503924   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0704 00:07:02.504316   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:07:02.504904   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:07:02.504924   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:07:02.505253   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:07:02.505457   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:07:02.505724   62905 driver.go:392] Setting default libvirt URI to qemu:///system
	I0704 00:07:02.506039   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:07:02.506081   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:07:02.521645   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0704 00:07:02.522115   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:07:02.522596   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:07:02.522618   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:07:02.522945   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:07:02.523144   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:07:02.557351   62905 out.go:177] * Using the kvm2 driver based on existing profile
	I0704 00:07:02.558600   62905 start.go:297] selected driver: kvm2
	I0704 00:07:02.558620   62905 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:07:02.558762   62905 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0704 00:07:02.559468   62905 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:07:02.559562   62905 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0704 00:07:02.575228   62905 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0704 00:07:02.575603   62905 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:07:02.575680   62905 cni.go:84] Creating CNI manager for ""
	I0704 00:07:02.575697   62905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:07:02.575749   62905 start.go:340] cluster config:
	{Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:07:02.575887   62905 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0704 00:07:02.577884   62905 out.go:177] * Starting "default-k8s-diff-port-995404" primary control-plane node in "default-k8s-diff-port-995404" cluster
	I0704 00:07:01.500168   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:02.579179   62905 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:07:02.579227   62905 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0704 00:07:02.579238   62905 cache.go:56] Caching tarball of preloaded images
	I0704 00:07:02.579331   62905 preload.go:173] Found /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0704 00:07:02.579344   62905 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0704 00:07:02.579446   62905 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/config.json ...
	I0704 00:07:02.579752   62905 start.go:360] acquireMachinesLock for default-k8s-diff-port-995404: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0704 00:07:07.580107   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:10.652249   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:16.732106   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:19.804162   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:25.884146   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:28.956241   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:35.036158   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:38.108118   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:44.188129   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:47.260270   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:53.340147   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:07:56.412123   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:02.492156   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:05.564174   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:11.644195   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:14.716226   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:20.796193   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:23.868215   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:29.948219   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:33.020164   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:39.100138   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:42.172138   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:48.252157   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:51.324205   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:08:57.404167   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:00.476183   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:06.556184   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:09.628167   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:15.708158   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:18.780202   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:24.860209   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:27.932273   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:34.012145   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:37.084155   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:43.164171   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:46.236155   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:52.316187   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:55.388138   62043 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.109:22: connect: no route to host
	I0704 00:09:58.392192   62327 start.go:364] duration metric: took 4m4.42362175s to acquireMachinesLock for "embed-certs-687975"
	I0704 00:09:58.392250   62327 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:09:58.392266   62327 fix.go:54] fixHost starting: 
	I0704 00:09:58.392607   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:09:58.392633   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:09:58.408783   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45683
	I0704 00:09:58.409328   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:09:58.409898   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:09:58.409918   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:09:58.410234   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:09:58.410438   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:09:58.410602   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:09:58.412175   62327 fix.go:112] recreateIfNeeded on embed-certs-687975: state=Stopped err=<nil>
	I0704 00:09:58.412200   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	W0704 00:09:58.412361   62327 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:09:58.414467   62327 out.go:177] * Restarting existing kvm2 VM for "embed-certs-687975" ...
	I0704 00:09:58.415958   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Start
	I0704 00:09:58.416159   62327 main.go:141] libmachine: (embed-certs-687975) Ensuring networks are active...
	I0704 00:09:58.417105   62327 main.go:141] libmachine: (embed-certs-687975) Ensuring network default is active
	I0704 00:09:58.417440   62327 main.go:141] libmachine: (embed-certs-687975) Ensuring network mk-embed-certs-687975 is active
	I0704 00:09:58.417879   62327 main.go:141] libmachine: (embed-certs-687975) Getting domain xml...
	I0704 00:09:58.418765   62327 main.go:141] libmachine: (embed-certs-687975) Creating domain...
	I0704 00:09:58.389743   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:09:58.389787   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:09:58.390105   62043 buildroot.go:166] provisioning hostname "no-preload-317739"
	I0704 00:09:58.390132   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:09:58.390388   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:09:58.392051   62043 machine.go:97] duration metric: took 4m37.421604249s to provisionDockerMachine
	I0704 00:09:58.392103   62043 fix.go:56] duration metric: took 4m37.444018711s for fixHost
	I0704 00:09:58.392111   62043 start.go:83] releasing machines lock for "no-preload-317739", held for 4m37.444044667s
	W0704 00:09:58.392131   62043 start.go:713] error starting host: provision: host is not running
	W0704 00:09:58.392245   62043 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0704 00:09:58.392263   62043 start.go:728] Will try again in 5 seconds ...
	I0704 00:09:59.657066   62327 main.go:141] libmachine: (embed-certs-687975) Waiting to get IP...
	I0704 00:09:59.657930   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:09:59.658398   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:09:59.658456   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:09:59.658368   63531 retry.go:31] will retry after 267.829987ms: waiting for machine to come up
	I0704 00:09:59.928142   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:09:59.928694   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:09:59.928720   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:09:59.928646   63531 retry.go:31] will retry after 240.308314ms: waiting for machine to come up
	I0704 00:10:00.170098   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:00.170541   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:00.170571   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:00.170481   63531 retry.go:31] will retry after 424.462623ms: waiting for machine to come up
	I0704 00:10:00.596288   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:00.596726   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:00.596755   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:00.596671   63531 retry.go:31] will retry after 450.228437ms: waiting for machine to come up
	I0704 00:10:01.048174   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:01.048731   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:01.048758   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:01.048689   63531 retry.go:31] will retry after 583.591642ms: waiting for machine to come up
	I0704 00:10:01.633432   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:01.633773   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:01.633806   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:01.633721   63531 retry.go:31] will retry after 789.480552ms: waiting for machine to come up
	I0704 00:10:02.424987   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:02.425388   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:02.425424   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:02.425329   63531 retry.go:31] will retry after 764.760669ms: waiting for machine to come up
	I0704 00:10:03.191570   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:03.191924   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:03.191953   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:03.191859   63531 retry.go:31] will retry after 1.415422425s: waiting for machine to come up
	I0704 00:10:03.392486   62043 start.go:360] acquireMachinesLock for no-preload-317739: {Name:mk4c099674fa2b439f09526eb17d9a7a4d495819 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0704 00:10:04.608804   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:04.609306   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:04.609336   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:04.609244   63531 retry.go:31] will retry after 1.426962337s: waiting for machine to come up
	I0704 00:10:06.038152   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:06.038630   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:06.038685   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:06.038604   63531 retry.go:31] will retry after 1.511071665s: waiting for machine to come up
	I0704 00:10:07.551435   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:07.551977   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:07.552000   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:07.551934   63531 retry.go:31] will retry after 2.275490025s: waiting for machine to come up
	I0704 00:10:09.829070   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:09.829545   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:09.829577   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:09.829480   63531 retry.go:31] will retry after 3.272884116s: waiting for machine to come up
	I0704 00:10:13.103857   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:13.104320   62327 main.go:141] libmachine: (embed-certs-687975) DBG | unable to find current IP address of domain embed-certs-687975 in network mk-embed-certs-687975
	I0704 00:10:13.104356   62327 main.go:141] libmachine: (embed-certs-687975) DBG | I0704 00:10:13.104267   63531 retry.go:31] will retry after 4.532823906s: waiting for machine to come up
	I0704 00:10:17.642356   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.642900   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has current primary IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.642923   62327 main.go:141] libmachine: (embed-certs-687975) Found IP for machine: 192.168.39.213
	I0704 00:10:17.642935   62327 main.go:141] libmachine: (embed-certs-687975) Reserving static IP address...
	I0704 00:10:17.643368   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "embed-certs-687975", mac: "52:54:00:ee:64:73", ip: "192.168.39.213"} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.643397   62327 main.go:141] libmachine: (embed-certs-687975) DBG | skip adding static IP to network mk-embed-certs-687975 - found existing host DHCP lease matching {name: "embed-certs-687975", mac: "52:54:00:ee:64:73", ip: "192.168.39.213"}
	I0704 00:10:17.643408   62327 main.go:141] libmachine: (embed-certs-687975) Reserved static IP address: 192.168.39.213
	I0704 00:10:17.643421   62327 main.go:141] libmachine: (embed-certs-687975) Waiting for SSH to be available...
	I0704 00:10:17.643433   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Getting to WaitForSSH function...
	I0704 00:10:17.645723   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.646019   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.646047   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.646176   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Using SSH client type: external
	I0704 00:10:17.646199   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa (-rw-------)
	I0704 00:10:17.646264   62327 main.go:141] libmachine: (embed-certs-687975) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:10:17.646288   62327 main.go:141] libmachine: (embed-certs-687975) DBG | About to run SSH command:
	I0704 00:10:17.646306   62327 main.go:141] libmachine: (embed-certs-687975) DBG | exit 0
	I0704 00:10:17.772683   62327 main.go:141] libmachine: (embed-certs-687975) DBG | SSH cmd err, output: <nil>: 
	I0704 00:10:17.773080   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetConfigRaw
	I0704 00:10:17.773695   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:17.776766   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.777155   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.777197   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.777469   62327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/config.json ...
	I0704 00:10:17.777698   62327 machine.go:94] provisionDockerMachine start ...
	I0704 00:10:17.777721   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:17.777970   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:17.780304   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.780636   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.780667   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.780800   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:17.780985   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.781136   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.781354   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:17.781533   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:17.781729   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:17.781740   62327 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:10:17.884677   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:10:17.884711   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetMachineName
	I0704 00:10:17.884940   62327 buildroot.go:166] provisioning hostname "embed-certs-687975"
	I0704 00:10:17.884967   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetMachineName
	I0704 00:10:17.885180   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:17.887980   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.888394   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:17.888417   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:17.888502   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:17.888758   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.888960   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:17.889102   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:17.889335   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:17.889538   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:17.889557   62327 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-687975 && echo "embed-certs-687975" | sudo tee /etc/hostname
	I0704 00:10:18.006597   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-687975
	
	I0704 00:10:18.006624   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.009477   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.009772   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.009805   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.009942   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.010148   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.010315   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.010485   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.010664   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:18.010821   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:18.010836   62327 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-687975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-687975/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-687975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:10:18.121310   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:10:18.121350   62327 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:10:18.121374   62327 buildroot.go:174] setting up certificates
	I0704 00:10:18.121395   62327 provision.go:84] configureAuth start
	I0704 00:10:18.121411   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetMachineName
	I0704 00:10:18.121701   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:18.124118   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.124499   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.124528   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.124646   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.126489   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.126778   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.126802   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.126913   62327 provision.go:143] copyHostCerts
	I0704 00:10:18.126987   62327 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:10:18.127002   62327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:10:18.127090   62327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:10:18.127222   62327 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:10:18.127232   62327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:10:18.127272   62327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:10:18.127348   62327 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:10:18.127357   62327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:10:18.127388   62327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:10:18.127461   62327 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.embed-certs-687975 san=[127.0.0.1 192.168.39.213 embed-certs-687975 localhost minikube]
	I0704 00:10:18.451857   62327 provision.go:177] copyRemoteCerts
	I0704 00:10:18.451947   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:10:18.451980   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.454696   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.455051   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.455076   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.455301   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.455512   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.455675   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.455798   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:18.540053   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:10:18.566392   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0704 00:10:18.593268   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0704 00:10:18.619051   62327 provision.go:87] duration metric: took 497.642815ms to configureAuth
	I0704 00:10:18.619081   62327 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:10:18.619299   62327 config.go:182] Loaded profile config "embed-certs-687975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:10:18.619386   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.621773   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.622057   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.622087   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.622249   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.622475   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.622632   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.622760   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.622971   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:18.623143   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:18.623160   62327 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:10:19.141009   62670 start.go:364] duration metric: took 3m45.774576164s to acquireMachinesLock for "old-k8s-version-979033"
	I0704 00:10:19.141068   62670 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:10:19.141115   62670 fix.go:54] fixHost starting: 
	I0704 00:10:19.141561   62670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:19.141591   62670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:19.159844   62670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43373
	I0704 00:10:19.160353   62670 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:19.160945   62670 main.go:141] libmachine: Using API Version  1
	I0704 00:10:19.160971   62670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:19.161347   62670 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:19.161640   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:19.161799   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetState
	I0704 00:10:19.163575   62670 fix.go:112] recreateIfNeeded on old-k8s-version-979033: state=Stopped err=<nil>
	I0704 00:10:19.163597   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	W0704 00:10:19.163753   62670 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:10:19.165906   62670 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-979033" ...
	I0704 00:10:18.904225   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:10:18.904256   62327 machine.go:97] duration metric: took 1.126543823s to provisionDockerMachine
	I0704 00:10:18.904269   62327 start.go:293] postStartSetup for "embed-certs-687975" (driver="kvm2")
	I0704 00:10:18.904283   62327 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:10:18.904304   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:18.904626   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:10:18.904652   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:18.907391   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.907864   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:18.907915   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:18.908206   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:18.908453   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:18.908623   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:18.908768   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:18.991583   62327 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:10:18.996145   62327 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:10:18.996187   62327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:10:18.996255   62327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:10:18.996341   62327 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:10:18.996443   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:10:19.006978   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:19.033605   62327 start.go:296] duration metric: took 129.322677ms for postStartSetup
	I0704 00:10:19.033643   62327 fix.go:56] duration metric: took 20.641387402s for fixHost
	I0704 00:10:19.033663   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:19.036302   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.036813   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.036877   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.036919   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:19.037115   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.037307   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.037488   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:19.037687   62327 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:19.037888   62327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0704 00:10:19.037905   62327 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0704 00:10:19.140855   62327 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051819.116387913
	
	I0704 00:10:19.140878   62327 fix.go:216] guest clock: 1720051819.116387913
	I0704 00:10:19.140885   62327 fix.go:229] Guest: 2024-07-04 00:10:19.116387913 +0000 UTC Remote: 2024-07-04 00:10:19.033646932 +0000 UTC m=+265.206951926 (delta=82.740981ms)
	I0704 00:10:19.140914   62327 fix.go:200] guest clock delta is within tolerance: 82.740981ms
	I0704 00:10:19.140920   62327 start.go:83] releasing machines lock for "embed-certs-687975", held for 20.748686488s
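The fix.go lines above compare the guest clock against the host clock and accept the ~82ms drift as "within tolerance". A minimal sketch of that comparison, assuming a hypothetical one-second tolerance (the real threshold lives in minikube's fix.go and may differ):

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports whether the absolute difference between
// the guest clock and the host clock is small enough to skip a resync.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(82 * time.Millisecond) // roughly the delta seen in the log above
	if delta, ok := clockDeltaWithinTolerance(guest, host, time.Second); ok {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta too large, would resync: %v\n", delta)
	}
}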
	I0704 00:10:19.140951   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.141280   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:19.144343   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.144774   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.144802   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.144975   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.145590   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.145810   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:19.145896   62327 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:10:19.145941   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:19.146048   62327 ssh_runner.go:195] Run: cat /version.json
	I0704 00:10:19.146074   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:19.148955   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.148977   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.149312   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.149339   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.149470   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:19.149493   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:19.149555   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:19.149755   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.149831   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:19.149921   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:19.150094   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:19.150096   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:19.150293   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:19.150459   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:19.250910   62327 ssh_runner.go:195] Run: systemctl --version
	I0704 00:10:19.257541   62327 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:10:19.413446   62327 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:10:19.419871   62327 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:10:19.419985   62327 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:10:19.439141   62327 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:10:19.439171   62327 start.go:494] detecting cgroup driver to use...
	I0704 00:10:19.439253   62327 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:10:19.457474   62327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:10:19.479279   62327 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:10:19.479353   62327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:10:19.498771   62327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:10:19.513968   62327 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:10:19.640950   62327 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:10:19.817181   62327 docker.go:233] disabling docker service ...
	I0704 00:10:19.817248   62327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:10:19.838524   62327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:10:19.855479   62327 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:10:19.976564   62327 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:10:20.106140   62327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:10:20.121152   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:10:20.143893   62327 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:10:20.143965   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.156806   62327 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:10:20.156892   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.168660   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.180592   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.192151   62327 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:10:20.204202   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.215502   62327 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.235355   62327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:20.246834   62327 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:10:20.264718   62327 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:10:20.264786   62327 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:10:20.280133   62327 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:10:20.291521   62327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:20.416530   62327 ssh_runner.go:195] Run: sudo systemctl restart crio
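The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of sed one-liners (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts crio. A rough Go sketch of issuing the same edits; runCommand is a hypothetical stand-in for minikube's ssh_runner and simply runs the commands locally instead of over SSH:

package main

import (
	"fmt"
	"os/exec"
)

// runCommand is a hypothetical stand-in for minikube's ssh_runner; here it just
// executes the command locally through bash instead of over SSH.
func runCommand(cmd string) error {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v\n%s", cmd, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		// pin the pause image kubeadm expects
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' ` + conf,
		// drive cgroups through cgroupfs and keep conmon in the pod cgroup
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
		`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
		// let pods bind privileged ports
		`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' ` + conf,
		// pick up the new configuration
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, s := range steps {
		if err := runCommand(s); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("crio reconfigured and restarted")
}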
	I0704 00:10:20.567852   62327 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:10:20.567952   62327 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:10:20.572992   62327 start.go:562] Will wait 60s for crictl version
	I0704 00:10:20.573052   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:10:20.577295   62327 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:10:20.617746   62327 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:10:20.617840   62327 ssh_runner.go:195] Run: crio --version
	I0704 00:10:20.648158   62327 ssh_runner.go:195] Run: crio --version
	I0704 00:10:20.682039   62327 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:10:19.167360   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .Start
	I0704 00:10:19.167575   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring networks are active...
	I0704 00:10:19.168591   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring network default is active
	I0704 00:10:19.169064   62670 main.go:141] libmachine: (old-k8s-version-979033) Ensuring network mk-old-k8s-version-979033 is active
	I0704 00:10:19.169488   62670 main.go:141] libmachine: (old-k8s-version-979033) Getting domain xml...
	I0704 00:10:19.170309   62670 main.go:141] libmachine: (old-k8s-version-979033) Creating domain...
	I0704 00:10:20.487278   62670 main.go:141] libmachine: (old-k8s-version-979033) Waiting to get IP...
	I0704 00:10:20.488195   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.488679   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.488751   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.488643   63677 retry.go:31] will retry after 227.362639ms: waiting for machine to come up
	I0704 00:10:20.718322   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.718794   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.718820   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.718766   63677 retry.go:31] will retry after 266.291784ms: waiting for machine to come up
	I0704 00:10:20.986238   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:20.986779   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:20.986805   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:20.986726   63677 retry.go:31] will retry after 308.137887ms: waiting for machine to come up
	I0704 00:10:21.296450   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:21.297052   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:21.297085   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:21.297001   63677 retry.go:31] will retry after 400.976495ms: waiting for machine to come up
	I0704 00:10:21.699758   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:21.700266   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:21.700299   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:21.700227   63677 retry.go:31] will retry after 464.329709ms: waiting for machine to come up
	I0704 00:10:22.165905   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:22.166452   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:22.166482   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:22.166393   63677 retry.go:31] will retry after 652.357119ms: waiting for machine to come up
	I0704 00:10:22.820302   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:22.820777   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:22.820800   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:22.820725   63677 retry.go:31] will retry after 835.974316ms: waiting for machine to come up
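While the embed-certs node is being prepared, a second process (62670) boots the old-k8s-version VM and polls libvirt for its DHCP lease, retrying with a growing delay each time no IP is found. A minimal sketch of that retry loop, assuming a hypothetical lookupIP helper in place of libmachine's DHCP-lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

// lookupIP is a hypothetical stand-in for libmachine's DHCP-lease lookup;
// here it pretends the lease shows up after a few attempts.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoLease
	}
	return "192.0.2.10", nil // illustrative address only
}

func main() {
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// grow the wait a little on every miss, with some jitter,
		// similar to the increasing delays logged by retry.go above
		wait := time.Duration(attempt)*200*time.Millisecond +
			time.Duration(rand.Intn(200))*time.Millisecond
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
	}
}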
	I0704 00:10:20.683820   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetIP
	I0704 00:10:20.686663   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:20.687040   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:20.687070   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:20.687312   62327 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0704 00:10:20.691953   62327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:20.705149   62327 kubeadm.go:877] updating cluster {Name:embed-certs-687975 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-687975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:10:20.705368   62327 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:10:20.705433   62327 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:20.748549   62327 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:10:20.748613   62327 ssh_runner.go:195] Run: which lz4
	I0704 00:10:20.752991   62327 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0704 00:10:20.757764   62327 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:10:20.757810   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0704 00:10:22.395918   62327 crio.go:462] duration metric: took 1.642974021s to copy over tarball
	I0704 00:10:22.396029   62327 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:10:23.658976   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:23.659482   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:23.659509   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:23.659432   63677 retry.go:31] will retry after 1.244693887s: waiting for machine to come up
	I0704 00:10:24.906359   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:24.906769   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:24.906801   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:24.906733   63677 retry.go:31] will retry after 1.212336933s: waiting for machine to come up
	I0704 00:10:26.121130   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:26.121655   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:26.121684   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:26.121599   63677 retry.go:31] will retry after 1.622791006s: waiting for machine to come up
	I0704 00:10:27.745848   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:27.746399   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:27.746427   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:27.746349   63677 retry.go:31] will retry after 2.596558781s: waiting for machine to come up
	I0704 00:10:24.757599   62327 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.3615352s)
	I0704 00:10:24.757639   62327 crio.go:469] duration metric: took 2.361688123s to extract the tarball
	I0704 00:10:24.757650   62327 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:10:24.796023   62327 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:24.842665   62327 crio.go:514] all images are preloaded for cri-o runtime.
	I0704 00:10:24.842691   62327 cache_images.go:84] Images are preloaded, skipping loading
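Before extracting the preload tarball the runtime had no kube-apiserver:v1.30.2 image; afterwards the same listing finds everything in place and image loading is skipped. A small sketch of that check, assuming crictl's JSON output exposes an "images" list with "repoTags" (the struct below only models the fields the check needs):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imagesJSON mirrors just the fields of `crictl images --output json` that the
// preload check needs (assumed layout: an "images" list with "repoTags").
type imagesJSON struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already holds the given image tag.
func hasImage(out []byte, want string) (bool, error) {
	var parsed imagesJSON
	if err := json.Unmarshal(out, &parsed); err != nil {
		return false, err
	}
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ok, err := hasImage(out, "registry.k8s.io/kube-apiserver:v1.30.2")
	if err != nil {
		fmt.Println("could not parse crictl output:", err)
		return
	}
	if ok {
		fmt.Println("images are preloaded, skipping loading")
	} else {
		fmt.Println("preload tarball needed: kube-apiserver image missing")
	}
}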
	I0704 00:10:24.842699   62327 kubeadm.go:928] updating node { 192.168.39.213 8443 v1.30.2 crio true true} ...
	I0704 00:10:24.842805   62327 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-687975 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-687975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:10:24.842891   62327 ssh_runner.go:195] Run: crio config
	I0704 00:10:24.892918   62327 cni.go:84] Creating CNI manager for ""
	I0704 00:10:24.892952   62327 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:10:24.892979   62327 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:10:24.893021   62327 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.213 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-687975 NodeName:embed-certs-687975 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:10:24.893288   62327 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-687975"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:10:24.893372   62327 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:10:24.905019   62327 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:10:24.905092   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:10:24.919465   62327 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0704 00:10:24.942754   62327 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:10:24.965089   62327 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0704 00:10:24.988121   62327 ssh_runner.go:195] Run: grep 192.168.39.213	control-plane.minikube.internal$ /etc/hosts
	I0704 00:10:24.993425   62327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:25.006830   62327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:25.145124   62327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:10:25.164000   62327 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975 for IP: 192.168.39.213
	I0704 00:10:25.164021   62327 certs.go:194] generating shared ca certs ...
	I0704 00:10:25.164036   62327 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:25.164285   62327 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:10:25.164361   62327 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:10:25.164375   62327 certs.go:256] generating profile certs ...
	I0704 00:10:25.164522   62327 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/client.key
	I0704 00:10:25.164598   62327 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/apiserver.key.c5f2d6ca
	I0704 00:10:25.164657   62327 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/proxy-client.key
	I0704 00:10:25.164816   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:10:25.164875   62327 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:10:25.164889   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:10:25.164918   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:10:25.164949   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:10:25.164983   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:10:25.165049   62327 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:25.165801   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:10:25.203822   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:10:25.240795   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:10:25.273743   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:10:25.312678   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0704 00:10:25.339172   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:10:25.365805   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:10:25.392155   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/embed-certs-687975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0704 00:10:25.417662   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:10:25.445025   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:10:25.472697   62327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:10:25.505204   62327 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:10:25.536867   62327 ssh_runner.go:195] Run: openssl version
	I0704 00:10:25.543487   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:10:25.555550   62327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:25.560599   62327 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:25.560678   62327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:25.566757   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:10:25.578244   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:10:25.590271   62327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:10:25.595409   62327 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:10:25.595475   62327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:10:25.601755   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:10:25.614572   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:10:25.627445   62327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:10:25.632631   62327 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:10:25.632688   62327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:10:25.639047   62327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:10:25.651199   62327 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:10:25.656829   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:10:25.663869   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:10:25.670993   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:10:25.678309   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:10:25.685282   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:10:25.692383   62327 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
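The series of `openssl x509 -checkend 86400` runs above verifies that none of the control-plane certificates (apiserver-etcd-client, apiserver-kubelet-client, etcd server/peer/healthcheck-client, front-proxy-client) expires within the next 24 hours. An equivalent check in Go, reading a PEM certificate and comparing NotAfter against the same window (the path in main is only an example):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// inside the given window (86400s in the log), mirroring `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// example path; the log checks several certs under /var/lib/minikube/certs
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if soon {
		fmt.Println("certificate will expire within 24h, regeneration needed")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}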
	I0704 00:10:25.699625   62327 kubeadm.go:391] StartCluster: {Name:embed-certs-687975 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:embed-certs-687975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:10:25.700176   62327 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:10:25.700240   62327 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:25.744248   62327 cri.go:89] found id: ""
	I0704 00:10:25.744323   62327 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:10:25.755623   62327 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:10:25.755643   62327 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:10:25.755648   62327 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:10:25.755697   62327 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:10:25.766631   62327 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:10:25.767627   62327 kubeconfig.go:125] found "embed-certs-687975" server: "https://192.168.39.213:8443"
	I0704 00:10:25.769625   62327 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:10:25.781667   62327 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.213
	I0704 00:10:25.781710   62327 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:10:25.781723   62327 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:10:25.781774   62327 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:25.829584   62327 cri.go:89] found id: ""
	I0704 00:10:25.829669   62327 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:10:25.847738   62327 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:10:25.859825   62327 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:10:25.859864   62327 kubeadm.go:156] found existing configuration files:
	
	I0704 00:10:25.859931   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:10:25.869666   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:10:25.869722   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:10:25.879997   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:10:25.889905   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:10:25.889982   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:10:25.900023   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:10:25.909669   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:10:25.909733   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:10:25.919933   62327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:10:25.929422   62327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:10:25.929499   62327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:10:25.939577   62327 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:10:25.949669   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:26.088494   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.367443   62327 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.278903285s)
	I0704 00:10:27.367492   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.626929   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.739721   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:27.860860   62327 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:10:27.860938   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:28.361670   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:30.344595   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:30.345134   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:30.345157   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:30.345089   63677 retry.go:31] will retry after 2.372913839s: waiting for machine to come up
	I0704 00:10:32.719441   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:32.719866   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | unable to find current IP address of domain old-k8s-version-979033 in network mk-old-k8s-version-979033
	I0704 00:10:32.719910   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | I0704 00:10:32.719827   63677 retry.go:31] will retry after 3.651406896s: waiting for machine to come up
	I0704 00:10:28.861698   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:28.883024   62327 api_server.go:72] duration metric: took 1.02216952s to wait for apiserver process to appear ...
	I0704 00:10:28.883057   62327 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:10:28.883083   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:28.883625   62327 api_server.go:269] stopped: https://192.168.39.213:8443/healthz: Get "https://192.168.39.213:8443/healthz": dial tcp 192.168.39.213:8443: connect: connection refused
	I0704 00:10:29.383561   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:31.679543   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:10:31.679578   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:10:31.679594   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:31.754659   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:31.754696   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
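Right after the control-plane containers start, /healthz first refuses the connection, then answers 403 for the anonymous probe, then 500 while the post-start hooks marked [-] above finish; the poll keeps repeating until a 200 comes back. A minimal sketch of such a poll loop, assuming client is an *http.Client already trusted with the cluster CA (InsecureSkipVerify below is only to keep the sketch short, not how minikube authenticates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, tolerating connection-refused, 403 and 500 on the way.
func waitForHealthz(client *http.Client, url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %v", timeout)
}

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	if err := waitForHealthz(client, "https://192.168.39.213:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}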
	I0704 00:10:31.883935   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:31.927087   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:31.927130   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:32.383560   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:32.389095   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:32.389129   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:32.883827   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:32.890357   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:10:32.890385   62327 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:10:33.383944   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:10:33.388951   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0704 00:10:33.396092   62327 api_server.go:141] control plane version: v1.30.2
	I0704 00:10:33.396119   62327 api_server.go:131] duration metric: took 4.513054882s to wait for apiserver health ...
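The long 500 bodies above are successive polls of the apiserver's /healthz endpoint; the restart only proceeds once every poststarthook (notably rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes and apiservice-discovery-controller) reports ok and the endpoint returns 200. A minimal sketch of such a poll loop follows; it is illustrative only, not minikube's api_server.go, and the insecure TLS setup is an assumption made just to keep the sketch short.

    // Illustrative sketch only: poll a kube-apiserver /healthz endpoint until it
    // returns 200, roughly what the log above does every ~500ms.
    package sketch

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        // TLS verification is skipped purely to keep the sketch short; real code
        // would trust the cluster CA bundle instead.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz says "ok"
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }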
	I0704 00:10:33.396130   62327 cni.go:84] Creating CNI manager for ""
	I0704 00:10:33.396136   62327 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:10:33.398181   62327 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:10:33.399682   62327 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:10:33.411938   62327 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
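The bridge CNI step only writes a conflist to /etc/cni/net.d/1-k8s.conflist; the 496-byte payload itself is not shown in the log. The sketch below writes a generic bridge + host-local conflist of the same shape; the JSON contents (subnet, plugin list) are assumptions for illustration, not the exact file minikube installs.

    // Illustrative only: write a generic bridge CNI conflist of the same shape as
    // the one installed above. The JSON (subnet, plugin list) is an assumption,
    // not the exact 496-byte file minikube copies to /etc/cni/net.d/1-k8s.conflist.
    package main

    import "os"

    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }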
	I0704 00:10:33.436710   62327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:10:33.447604   62327 system_pods.go:59] 8 kube-system pods found
	I0704 00:10:33.447639   62327 system_pods.go:61] "coredns-7db6d8ff4d-2bn7d" [e6d756a8-df4e-414b-b44c-32fb728c6feb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0704 00:10:33.447649   62327 system_pods.go:61] "etcd-embed-certs-687975" [72f47af0-57ca-4529-b6e4-92f543f8ada9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0704 00:10:33.447658   62327 system_pods.go:61] "kube-apiserver-embed-certs-687975" [e58beb1b-1984-4a9f-beb1-597cca55a0c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0704 00:10:33.447663   62327 system_pods.go:61] "kube-controller-manager-embed-certs-687975" [bdae6f32-b454-4a66-a3d1-a22c2a64073d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0704 00:10:33.447668   62327 system_pods.go:61] "kube-proxy-9phtm" [6b5a4c0e-632d-4c1c-bfa7-f53448618efb] Running
	I0704 00:10:33.447673   62327 system_pods.go:61] "kube-scheduler-embed-certs-687975" [72640f73-25f5-47ec-8da0-396fc31fa653] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0704 00:10:33.447678   62327 system_pods.go:61] "metrics-server-569cc877fc-jpmsg" [e2561edc-d580-461c-acae-218e6b7a2f67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:10:33.447682   62327 system_pods.go:61] "storage-provisioner" [1ac6edec-3e4e-42bd-8848-1388594611e1] Running
	I0704 00:10:33.447688   62327 system_pods.go:74] duration metric: took 10.954745ms to wait for pod list to return data ...
	I0704 00:10:33.447696   62327 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:10:33.452408   62327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:10:33.452448   62327 node_conditions.go:123] node cpu capacity is 2
	I0704 00:10:33.452460   62327 node_conditions.go:105] duration metric: took 4.757567ms to run NodePressure ...
	I0704 00:10:33.452476   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:33.724052   62327 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0704 00:10:33.732188   62327 kubeadm.go:733] kubelet initialised
	I0704 00:10:33.732211   62327 kubeadm.go:734] duration metric: took 8.128083ms waiting for restarted kubelet to initialise ...
	I0704 00:10:33.732220   62327 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:10:33.739344   62327 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.746483   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.746509   62327 pod_ready.go:81] duration metric: took 7.141056ms for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.746519   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.746526   62327 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.755457   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "etcd-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.755489   62327 pod_ready.go:81] duration metric: took 8.954479ms for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.755502   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "etcd-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.755512   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.762439   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.762476   62327 pod_ready.go:81] duration metric: took 6.95216ms for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.762489   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.762501   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:33.842246   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.842281   62327 pod_ready.go:81] duration metric: took 79.767249ms for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:33.842294   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:33.842303   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:34.240034   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-proxy-9phtm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.240061   62327 pod_ready.go:81] duration metric: took 397.745361ms for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:34.240070   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-proxy-9phtm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.240076   62327 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:34.640781   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.640808   62327 pod_ready.go:81] duration metric: took 400.726608ms for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:34.640818   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:34.640823   62327 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:35.040614   62327 pod_ready.go:97] node "embed-certs-687975" hosting pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:35.040646   62327 pod_ready.go:81] duration metric: took 399.813017ms for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	E0704 00:10:35.040656   62327 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-687975" hosting pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:35.040662   62327 pod_ready.go:38] duration metric: took 1.308435069s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
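The pod_ready waits above check the Ready condition on each system-critical pod and skip a pod whenever its hosting node is itself not Ready, as the WaitExtra errors show. Below is a compressed client-go sketch of that Ready-condition check; the kubeconfig path and poll interval are assumptions, not minikube's actual pod_ready.go.

    // Sketch: poll a pod until its Ready condition is True, using client-go.
    package sketch

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func waitForPodReady(kubeconfig, namespace, name string, timeout time.Duration) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                return nil
            }
            time.Sleep(2 * time.Second) // poll interval is an assumption
        }
        return fmt.Errorf("pod %s/%s not Ready after %s", namespace, name, timeout)
    }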
	I0704 00:10:35.040678   62327 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:10:35.053971   62327 ops.go:34] apiserver oom_adj: -16
	I0704 00:10:35.053997   62327 kubeadm.go:591] duration metric: took 9.298343033s to restartPrimaryControlPlane
	I0704 00:10:35.054008   62327 kubeadm.go:393] duration metric: took 9.354393795s to StartCluster
	I0704 00:10:35.054028   62327 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:35.054114   62327 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:10:35.055656   62327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:35.056019   62327 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:10:35.056104   62327 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:10:35.056189   62327 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-687975"
	I0704 00:10:35.056217   62327 config.go:182] Loaded profile config "embed-certs-687975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:10:35.056226   62327 addons.go:69] Setting default-storageclass=true in profile "embed-certs-687975"
	I0704 00:10:35.056234   62327 addons.go:69] Setting metrics-server=true in profile "embed-certs-687975"
	I0704 00:10:35.056256   62327 addons.go:234] Setting addon metrics-server=true in "embed-certs-687975"
	I0704 00:10:35.056257   62327 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-687975"
	W0704 00:10:35.056268   62327 addons.go:243] addon metrics-server should already be in state true
	I0704 00:10:35.056302   62327 host.go:66] Checking if "embed-certs-687975" exists ...
	I0704 00:10:35.056229   62327 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-687975"
	W0704 00:10:35.056354   62327 addons.go:243] addon storage-provisioner should already be in state true
	I0704 00:10:35.056383   62327 host.go:66] Checking if "embed-certs-687975" exists ...
	I0704 00:10:35.056630   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.056653   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.056661   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.056689   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.056702   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.056729   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.058101   62327 out.go:177] * Verifying Kubernetes components...
	I0704 00:10:35.059927   62327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:35.072266   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43485
	I0704 00:10:35.072542   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0704 00:10:35.072699   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.072965   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.073191   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.073229   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.073455   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.073479   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.073608   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.073799   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.073838   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.074311   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.074344   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.076024   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44145
	I0704 00:10:35.076434   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.076866   62327 addons.go:234] Setting addon default-storageclass=true in "embed-certs-687975"
	W0704 00:10:35.076884   62327 addons.go:243] addon default-storageclass should already be in state true
	I0704 00:10:35.076905   62327 host.go:66] Checking if "embed-certs-687975" exists ...
	I0704 00:10:35.076965   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.076997   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.077241   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.077273   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.077376   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.077901   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.077951   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.091096   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I0704 00:10:35.091624   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.092231   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.092260   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.092643   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.092738   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0704 00:10:35.092820   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.093059   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.093555   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.093577   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.093913   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.094537   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:35.094743   62327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:35.094764   62327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:35.096976   62327 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:35.098487   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I0704 00:10:35.098597   62327 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:10:35.098614   62327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:10:35.098632   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:35.098888   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.099368   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.099386   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.099749   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.100200   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.102539   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:35.103028   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.103608   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:35.103637   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.103791   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:35.104008   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:35.104177   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:35.104316   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:35.104776   62327 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0704 00:10:35.106239   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0704 00:10:35.106260   62327 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0704 00:10:35.106313   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:35.109978   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.110458   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:35.110491   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.110684   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:35.110925   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:35.111025   62327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45051
	I0704 00:10:35.111091   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:35.111227   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
	I0704 00:10:35.111488   62327 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:35.111977   62327 main.go:141] libmachine: Using API Version  1
	I0704 00:10:35.112005   62327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:35.112295   62327 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:35.112482   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetState
	I0704 00:10:35.113980   62327 main.go:141] libmachine: (embed-certs-687975) Calling .DriverName
	I0704 00:10:35.114185   62327 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:10:35.114203   62327 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:10:35.114222   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHHostname
	I0704 00:10:35.117197   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.117777   62327 main.go:141] libmachine: (embed-certs-687975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:64:73", ip: ""} in network mk-embed-certs-687975: {Iface:virbr1 ExpiryTime:2024-07-04 01:10:09 +0000 UTC Type:0 Mac:52:54:00:ee:64:73 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-687975 Clientid:01:52:54:00:ee:64:73}
	I0704 00:10:35.117823   62327 main.go:141] libmachine: (embed-certs-687975) DBG | domain embed-certs-687975 has defined IP address 192.168.39.213 and MAC address 52:54:00:ee:64:73 in network mk-embed-certs-687975
	I0704 00:10:35.118056   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHPort
	I0704 00:10:35.118258   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHKeyPath
	I0704 00:10:35.118426   62327 main.go:141] libmachine: (embed-certs-687975) Calling .GetSSHUsername
	I0704 00:10:35.118562   62327 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/embed-certs-687975/id_rsa Username:docker}
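Each sshutil "new ssh client" line opens a key-authenticated SSH connection to the VM, over which the following ssh_runner commands execute. A rough sketch of that kind of client with golang.org/x/crypto/ssh is below; the address, user and key path are taken from the log but abbreviated, and the skipped host-key check is an assumption for the sketch, not a claim about minikube's sshutil.

    // Sketch: run one command over SSH with key auth, roughly what ssh_runner does.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func runOverSSH(addr, user, keyPath, command string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User: user,
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Host key checking skipped here only because these are throwaway test VMs.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(command)
        return string(out), err
    }

    func main() {
        // Paths below are abbreviated/illustrative.
        out, err := runOverSSH("192.168.39.213:22", "docker",
            "/home/jenkins/.minikube/machines/embed-certs-687975/id_rsa",
            "sudo systemctl is-active kubelet")
        fmt.Println(out, err)
    }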
	I0704 00:10:35.242007   62327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:10:35.267240   62327 node_ready.go:35] waiting up to 6m0s for node "embed-certs-687975" to be "Ready" ...
	I0704 00:10:35.326233   62327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:10:35.329804   62327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:10:35.431863   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0704 00:10:35.431908   62327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0704 00:10:35.490138   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0704 00:10:35.490165   62327 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0704 00:10:35.547996   62327 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:10:35.548021   62327 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0704 00:10:35.578762   62327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:10:36.321372   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321411   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.321432   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321448   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.321794   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.321808   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.321812   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.321823   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.321825   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.321834   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321833   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.321841   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.321854   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.321842   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.322111   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.322142   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.322153   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.322155   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.322182   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.322191   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.329094   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.329117   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.329531   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.329608   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.329625   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.424191   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.424216   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.424645   62327 main.go:141] libmachine: (embed-certs-687975) DBG | Closing plugin on server side
	I0704 00:10:36.424676   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.424692   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.424707   62327 main.go:141] libmachine: Making call to close driver server
	I0704 00:10:36.424719   62327 main.go:141] libmachine: (embed-certs-687975) Calling .Close
	I0704 00:10:36.424987   62327 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:10:36.425000   62327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:10:36.425012   62327 addons.go:475] Verifying addon metrics-server=true in "embed-certs-687975"
	I0704 00:10:36.427165   62327 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
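Enabling an addon above amounts to copying its manifests into /etc/kubernetes/addons/ on the VM and applying them with the bundled kubectl against the in-VM kubeconfig. A hedged local equivalent using os/exec follows; the kubeconfig and manifest paths passed in are placeholders, not the exact paths from the log.

    // Sketch: apply addon manifests with kubectl, mirroring the ssh_runner
    // "kubectl apply -f ..." invocations above.
    package sketch

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // e.g. applyAddons("/var/lib/minikube/kubeconfig",
    //                  "/etc/kubernetes/addons/storage-provisioner.yaml")
    func applyAddons(kubeconfig string, manifests ...string) error {
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command("kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        return err
    }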
	I0704 00:10:37.761464   62905 start.go:364] duration metric: took 3m35.181652384s to acquireMachinesLock for "default-k8s-diff-port-995404"
	I0704 00:10:37.761548   62905 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:10:37.761575   62905 fix.go:54] fixHost starting: 
	I0704 00:10:37.761919   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:37.761952   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:37.779708   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0704 00:10:37.780347   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:37.780870   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:10:37.780895   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:37.781249   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:37.781513   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:37.781688   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:10:37.783447   62905 fix.go:112] recreateIfNeeded on default-k8s-diff-port-995404: state=Stopped err=<nil>
	I0704 00:10:37.783495   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	W0704 00:10:37.783674   62905 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:10:37.785628   62905 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-995404" ...
	I0704 00:10:36.373099   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.373583   62670 main.go:141] libmachine: (old-k8s-version-979033) Found IP for machine: 192.168.72.59
	I0704 00:10:36.373615   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has current primary IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.373628   62670 main.go:141] libmachine: (old-k8s-version-979033) Reserving static IP address...
	I0704 00:10:36.374030   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "old-k8s-version-979033", mac: "52:54:00:af:98:c8", ip: "192.168.72.59"} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.374068   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | skip adding static IP to network mk-old-k8s-version-979033 - found existing host DHCP lease matching {name: "old-k8s-version-979033", mac: "52:54:00:af:98:c8", ip: "192.168.72.59"}
	I0704 00:10:36.374082   62670 main.go:141] libmachine: (old-k8s-version-979033) Reserved static IP address: 192.168.72.59
	I0704 00:10:36.374113   62670 main.go:141] libmachine: (old-k8s-version-979033) Waiting for SSH to be available...
	I0704 00:10:36.374130   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Getting to WaitForSSH function...
	I0704 00:10:36.376363   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.376711   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.376747   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.376945   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using SSH client type: external
	I0704 00:10:36.376975   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa (-rw-------)
	I0704 00:10:36.377011   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:10:36.377024   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | About to run SSH command:
	I0704 00:10:36.377062   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | exit 0
	I0704 00:10:36.504300   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | SSH cmd err, output: <nil>: 
	I0704 00:10:36.504681   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetConfigRaw
	I0704 00:10:36.505301   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:36.507826   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.508270   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.508297   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.508605   62670 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/config.json ...
	I0704 00:10:36.508844   62670 machine.go:94] provisionDockerMachine start ...
	I0704 00:10:36.508865   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:36.509148   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.511475   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.511792   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.511815   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.512017   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.512205   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.512360   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.512502   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.512667   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.512836   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.512846   62670 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:10:36.616643   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:10:36.616673   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.616962   62670 buildroot.go:166] provisioning hostname "old-k8s-version-979033"
	I0704 00:10:36.616992   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.617185   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.620028   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.620368   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.620387   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.620727   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.620923   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.621106   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.621240   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.621435   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.621601   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.621613   62670 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-979033 && echo "old-k8s-version-979033" | sudo tee /etc/hostname
	I0704 00:10:36.739589   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-979033
	
	I0704 00:10:36.739611   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.742386   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.742840   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.742867   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.743119   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:36.743348   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.743578   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:36.743745   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:36.743925   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:36.744142   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:36.744169   62670 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-979033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-979033/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-979033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:10:36.861561   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:10:36.861592   62670 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:10:36.861621   62670 buildroot.go:174] setting up certificates
	I0704 00:10:36.861632   62670 provision.go:84] configureAuth start
	I0704 00:10:36.861644   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetMachineName
	I0704 00:10:36.861928   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:36.864490   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.864975   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.865039   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.865137   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:36.867752   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.868268   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:36.868302   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:36.868483   62670 provision.go:143] copyHostCerts
	I0704 00:10:36.868547   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:10:36.868560   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:10:36.868613   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:10:36.868747   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:10:36.868756   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:10:36.868783   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:10:36.868840   62670 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:10:36.868846   62670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:10:36.868863   62670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:10:36.868913   62670 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-979033 san=[127.0.0.1 192.168.72.59 localhost minikube old-k8s-version-979033]
	I0704 00:10:37.072741   62670 provision.go:177] copyRemoteCerts
	I0704 00:10:37.072795   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:10:37.072821   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.075592   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.075937   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.075968   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.076159   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.076362   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.076541   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.076671   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.162730   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:10:37.194232   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0704 00:10:37.220644   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0704 00:10:37.246298   62670 provision.go:87] duration metric: took 384.653259ms to configureAuth
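configureAuth regenerates a server certificate whose SANs cover 127.0.0.1, the VM IP and the machine names, signs it with the local minikube CA, and then ships cert and key to /etc/docker on the VM. A compressed crypto/x509 sketch of issuing such a SAN-bearing certificate is below; the key size, validity period and subject are assumptions, not what provision.go actually uses.

    // Sketch: issue a server certificate with IP/DNS SANs, signed by an existing CA.
    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-979033"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs mirror the san=[...] list logged above.
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.59")},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-979033"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }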
	I0704 00:10:37.246327   62670 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:10:37.246529   62670 config.go:182] Loaded profile config "old-k8s-version-979033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0704 00:10:37.246594   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.249101   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.249491   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.249523   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.249774   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.249960   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.250140   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.250350   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.250591   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:37.250831   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:37.250856   62670 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:10:37.522551   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:10:37.522602   62670 machine.go:97] duration metric: took 1.013718943s to provisionDockerMachine
	I0704 00:10:37.522616   62670 start.go:293] postStartSetup for "old-k8s-version-979033" (driver="kvm2")
	I0704 00:10:37.522626   62670 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:10:37.522642   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.522965   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:10:37.522992   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.525421   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.525718   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.525745   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.525988   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.526250   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.526428   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.526668   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.607305   62670 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:10:37.612104   62670 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:10:37.612128   62670 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:10:37.612222   62670 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:10:37.612326   62670 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:10:37.612436   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:10:37.623597   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:37.650275   62670 start.go:296] duration metric: took 127.644599ms for postStartSetup
	I0704 00:10:37.650314   62670 fix.go:56] duration metric: took 18.50923577s for fixHost
	I0704 00:10:37.650333   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.652926   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.653270   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.653298   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.653433   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.653650   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.653836   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.653975   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.654124   62670 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:37.654344   62670 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.72.59 22 <nil> <nil>}
	I0704 00:10:37.654356   62670 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:10:37.761309   62670 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051837.729680185
	
	I0704 00:10:37.761333   62670 fix.go:216] guest clock: 1720051837.729680185
	I0704 00:10:37.761342   62670 fix.go:229] Guest: 2024-07-04 00:10:37.729680185 +0000 UTC Remote: 2024-07-04 00:10:37.650317632 +0000 UTC m=+244.428517044 (delta=79.362553ms)
	I0704 00:10:37.761363   62670 fix.go:200] guest clock delta is within tolerance: 79.362553ms
	I0704 00:10:37.761369   62670 start.go:83] releasing machines lock for "old-k8s-version-979033", held for 18.620323739s
	I0704 00:10:37.761421   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.761677   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:37.764522   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.764994   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.765019   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.765178   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.765760   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.765951   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .DriverName
	I0704 00:10:37.766036   62670 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:10:37.766085   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.766218   62670 ssh_runner.go:195] Run: cat /version.json
	I0704 00:10:37.766244   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHHostname
	I0704 00:10:37.769092   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769468   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769854   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.769900   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.769927   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:37.769944   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:37.770066   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.770286   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHPort
	I0704 00:10:37.770329   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.770443   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHKeyPath
	I0704 00:10:37.770531   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.770587   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetSSHUsername
	I0704 00:10:37.770720   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.770832   62670 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/old-k8s-version-979033/id_rsa Username:docker}
	I0704 00:10:37.873138   62670 ssh_runner.go:195] Run: systemctl --version
	I0704 00:10:37.879804   62670 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:10:38.028009   62670 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:10:38.034962   62670 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:10:38.035030   62670 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:10:38.057475   62670 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:10:38.057511   62670 start.go:494] detecting cgroup driver to use...
	I0704 00:10:38.057579   62670 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:10:38.074199   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:10:38.092880   62670 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:10:38.092932   62670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:10:38.106896   62670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:10:38.120887   62670 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:10:38.250139   62670 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:10:36.428467   62327 addons.go:510] duration metric: took 1.372366453s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0704 00:10:37.270816   62327 node_ready.go:53] node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:38.405228   62670 docker.go:233] disabling docker service ...
	I0704 00:10:38.405288   62670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:10:38.421706   62670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:10:38.438033   62670 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:10:38.586777   62670 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:10:38.721090   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:10:38.736951   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:10:38.757708   62670 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0704 00:10:38.757782   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.769723   62670 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:10:38.769796   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.783408   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.796103   62670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:10:38.809130   62670 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:10:38.822325   62670 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:10:38.837968   62670 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:10:38.838038   62670 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:10:38.854343   62670 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:10:38.866475   62670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:39.012506   62670 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:10:39.177203   62670 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:10:39.177289   62670 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:10:39.182557   62670 start.go:562] Will wait 60s for crictl version
	I0704 00:10:39.182643   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:39.187153   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:10:39.228774   62670 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:10:39.228851   62670 ssh_runner.go:195] Run: crio --version
	I0704 00:10:39.261929   62670 ssh_runner.go:195] Run: crio --version
	I0704 00:10:39.295133   62670 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0704 00:10:37.787100   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Start
	I0704 00:10:37.787281   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Ensuring networks are active...
	I0704 00:10:37.788053   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Ensuring network default is active
	I0704 00:10:37.788456   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Ensuring network mk-default-k8s-diff-port-995404 is active
	I0704 00:10:37.788965   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Getting domain xml...
	I0704 00:10:37.789842   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Creating domain...
	I0704 00:10:39.119468   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting to get IP...
	I0704 00:10:39.120490   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.121038   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.121123   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:39.121028   63853 retry.go:31] will retry after 205.838778ms: waiting for machine to come up
	I0704 00:10:39.328771   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.329372   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.329402   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:39.329310   63853 retry.go:31] will retry after 383.540497ms: waiting for machine to come up
	I0704 00:10:39.714729   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.715299   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:39.715333   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:39.715239   63853 retry.go:31] will retry after 349.888862ms: waiting for machine to come up
	I0704 00:10:40.067018   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.067629   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.067658   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:40.067518   63853 retry.go:31] will retry after 560.174181ms: waiting for machine to come up
	I0704 00:10:40.629108   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.629664   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:40.629700   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:40.629568   63853 retry.go:31] will retry after 655.876993ms: waiting for machine to come up
	I0704 00:10:41.287664   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:41.288213   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:41.288241   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:41.288163   63853 retry.go:31] will retry after 935.211949ms: waiting for machine to come up
	I0704 00:10:42.225062   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:42.225501   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:42.225530   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:42.225448   63853 retry.go:31] will retry after 1.176205334s: waiting for machine to come up
	I0704 00:10:39.296618   62670 main.go:141] libmachine: (old-k8s-version-979033) Calling .GetIP
	I0704 00:10:39.299265   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:39.299620   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:98:c8", ip: ""} in network mk-old-k8s-version-979033: {Iface:virbr4 ExpiryTime:2024-07-04 01:10:30 +0000 UTC Type:0 Mac:52:54:00:af:98:c8 Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:old-k8s-version-979033 Clientid:01:52:54:00:af:98:c8}
	I0704 00:10:39.299648   62670 main.go:141] libmachine: (old-k8s-version-979033) DBG | domain old-k8s-version-979033 has defined IP address 192.168.72.59 and MAC address 52:54:00:af:98:c8 in network mk-old-k8s-version-979033
	I0704 00:10:39.299857   62670 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0704 00:10:39.304490   62670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:39.318619   62670 kubeadm.go:877] updating cluster {Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:10:39.318749   62670 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0704 00:10:39.318796   62670 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:39.372343   62670 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0704 00:10:39.372406   62670 ssh_runner.go:195] Run: which lz4
	I0704 00:10:39.376979   62670 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0704 00:10:39.382096   62670 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:10:39.382153   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0704 00:10:41.321459   62670 crio.go:462] duration metric: took 1.944522271s to copy over tarball
	I0704 00:10:41.321541   62670 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:10:39.272051   62327 node_ready.go:53] node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:41.776436   62327 node_ready.go:53] node "embed-certs-687975" has status "Ready":"False"
	I0704 00:10:42.272096   62327 node_ready.go:49] node "embed-certs-687975" has status "Ready":"True"
	I0704 00:10:42.272126   62327 node_ready.go:38] duration metric: took 7.004853642s for node "embed-certs-687975" to be "Ready" ...
	I0704 00:10:42.272139   62327 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:10:42.278133   62327 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.284704   62327 pod_ready.go:92] pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:42.284730   62327 pod_ready.go:81] duration metric: took 6.568077ms for pod "coredns-7db6d8ff4d-2bn7d" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.284740   62327 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.292234   62327 pod_ready.go:92] pod "etcd-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:42.292263   62327 pod_ready.go:81] duration metric: took 7.515519ms for pod "etcd-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:42.292276   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:43.403633   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:43.404251   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:43.404302   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:43.404180   63853 retry.go:31] will retry after 1.24046978s: waiting for machine to come up
	I0704 00:10:44.646709   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:44.647208   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:44.647234   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:44.647165   63853 retry.go:31] will retry after 1.631352494s: waiting for machine to come up
	I0704 00:10:46.280048   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:46.280543   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:46.280574   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:46.280492   63853 retry.go:31] will retry after 1.855805317s: waiting for machine to come up
	I0704 00:10:44.545333   62670 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.223758075s)
	I0704 00:10:44.545366   62670 crio.go:469] duration metric: took 3.223876515s to extract the tarball
	I0704 00:10:44.545404   62670 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:10:44.589369   62670 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:10:44.625017   62670 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0704 00:10:44.625055   62670 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0704 00:10:44.625143   62670 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:44.625161   62670 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.625191   62670 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:44.625372   62670 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.625393   62670 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.625146   62670 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.625223   62670 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.625700   62670 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0704 00:10:44.627479   62670 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.627544   62670 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.627580   62670 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:44.627586   62670 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0704 00:10:44.627589   62670 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.627580   62670 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.627641   62670 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:44.627665   62670 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.773014   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.821672   62670 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0704 00:10:44.821726   62670 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.821788   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.826460   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0704 00:10:44.841857   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.870213   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0704 00:10:44.895356   62670 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0704 00:10:44.895414   62670 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.895466   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.897160   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0704 00:10:44.901356   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0704 00:10:44.964305   62670 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0704 00:10:44.964356   62670 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0704 00:10:44.964404   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:44.964395   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0704 00:10:44.969048   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0704 00:10:44.982913   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:44.985558   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:44.990064   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:44.993167   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.015558   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0704 00:10:45.092189   62670 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0704 00:10:45.092237   62670 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:45.092309   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.104690   62670 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0704 00:10:45.104733   62670 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:45.104795   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.130208   62670 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0704 00:10:45.130254   62670 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.130271   62670 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0704 00:10:45.130295   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.130337   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0704 00:10:45.130297   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0704 00:10:45.130298   62670 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:45.130442   62670 ssh_runner.go:195] Run: which crictl
	I0704 00:10:45.181491   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0704 00:10:45.181583   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0704 00:10:45.181598   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0704 00:10:45.181666   62670 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0704 00:10:45.234459   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0704 00:10:45.234563   62670 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0704 00:10:45.533133   62670 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:10:45.680954   62670 cache_images.go:92] duration metric: took 1.055880702s to LoadCachedImages
	W0704 00:10:45.681039   62670 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0704 00:10:45.681053   62670 kubeadm.go:928] updating node { 192.168.72.59 8443 v1.20.0 crio true true} ...
	I0704 00:10:45.681176   62670 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-979033 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:10:45.681268   62670 ssh_runner.go:195] Run: crio config
	I0704 00:10:45.734964   62670 cni.go:84] Creating CNI manager for ""
	I0704 00:10:45.734992   62670 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:10:45.735009   62670 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:10:45.735034   62670 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.59 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-979033 NodeName:old-k8s-version-979033 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0704 00:10:45.735206   62670 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-979033"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.59"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:10:45.735287   62670 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0704 00:10:45.747614   62670 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:10:45.747700   62670 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:10:45.759063   62670 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0704 00:10:45.778439   62670 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:10:45.798877   62670 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0704 00:10:45.820513   62670 ssh_runner.go:195] Run: grep 192.168.72.59	control-plane.minikube.internal$ /etc/hosts
	I0704 00:10:45.825346   62670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.59	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:10:45.839720   62670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:10:45.957373   62670 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:10:45.975621   62670 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033 for IP: 192.168.72.59
	I0704 00:10:45.975645   62670 certs.go:194] generating shared ca certs ...
	I0704 00:10:45.975671   62670 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:45.975845   62670 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:10:45.975940   62670 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:10:45.975956   62670 certs.go:256] generating profile certs ...
	I0704 00:10:45.976086   62670 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.key
	I0704 00:10:45.976184   62670 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key.03500654
	I0704 00:10:45.976236   62670 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key
	I0704 00:10:45.976376   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:10:45.976416   62670 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:10:45.976430   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:10:45.976468   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:10:45.976506   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:10:45.976541   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:10:45.976601   62670 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:45.977480   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:10:46.016391   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:10:46.062987   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:10:46.103769   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:10:46.143109   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0704 00:10:46.193832   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:10:46.223781   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:10:46.263822   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0704 00:10:46.298657   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:10:46.325454   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:10:46.351804   62670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:10:46.379279   62670 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:10:46.397706   62670 ssh_runner.go:195] Run: openssl version
	I0704 00:10:46.404638   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:10:46.416778   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.422402   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.422475   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:10:46.428803   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:10:46.441082   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:10:46.453211   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.458313   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.458383   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:10:46.464706   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:10:46.476888   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:10:46.489083   62670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.494780   62670 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.494856   62670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:10:46.501321   62670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:10:46.513595   62670 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:10:46.518722   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:10:46.525758   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:10:46.532590   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:10:46.540129   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:10:46.547113   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:10:46.553840   62670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0704 00:10:46.560502   62670 kubeadm.go:391] StartCluster: {Name:old-k8s-version-979033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-979033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:10:46.560590   62670 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:10:46.560656   62670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:46.605334   62670 cri.go:89] found id: ""
	I0704 00:10:46.605411   62670 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:10:46.619333   62670 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:10:46.619356   62670 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:10:46.619362   62670 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:10:46.619407   62670 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:10:46.631203   62670 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:10:46.632519   62670 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-979033" does not appear in /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:10:46.633417   62670 kubeconfig.go:62] /home/jenkins/minikube-integration/18998-9396/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-979033" cluster setting kubeconfig missing "old-k8s-version-979033" context setting]
	I0704 00:10:46.634783   62670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:10:46.637143   62670 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:10:46.649250   62670 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.59
	I0704 00:10:46.649285   62670 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:10:46.649297   62670 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:10:46.649351   62670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:10:46.691240   62670 cri.go:89] found id: ""
	I0704 00:10:46.691317   62670 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:10:46.710687   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:10:46.721650   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:10:46.721675   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:10:46.721728   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:10:46.731444   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:10:46.731517   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:10:46.741556   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:10:46.751544   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:10:46.751600   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:10:46.764187   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:10:46.775160   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:10:46.775224   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:10:46.785686   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:10:46.795475   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:10:46.795545   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
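
The grep-then-rm sequence above discards any kubeadm kubeconfig that does not point at the expected control-plane endpoint, so kubeadm will regenerate them. A simplified Go version of the same cleanup, run directly against the node's filesystem instead of over SSH, could be:

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfigs deletes any kubeadm-generated kubeconfig that does
// not reference the expected control-plane endpoint (a local, simplified
// version of the grep-then-rm sequence in the log).
func removeStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: remove so kubeadm regenerates it.
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Println("remove:", rmErr)
			}
		}
	}
}

func main() {
	removeStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}
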
	I0704 00:10:46.806960   62670 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:10:46.818355   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:46.984379   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:47.639953   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:47.883263   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:10:48.001200   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
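
The five commands above replay individual kubeadm init phases (certs, kubeconfigs, kubelet-start, control plane, etcd) against the restored config. A small Go sketch of that loop, assuming the kubeadm binary lives under the versioned path shown in the log, might be:

package main

import (
	"fmt"
	"os/exec"
)

// runInitPhases replays the kubeadm init phases used for a cluster restart,
// prefixing PATH with the versioned binaries directory as the log does.
func runInitPhases(version, config string) error {
	phases := []string{
		"certs all", "kubeconfig all", "kubelet-start",
		"control-plane all", "etcd local",
	}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			"sudo env PATH=\"/var/lib/minikube/binaries/%s:$PATH\" kubeadm init phase %s --config %s",
			version, phase, config)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(runInitPhases("v1.20.0", "/var/tmp/minikube/kubeadm.yaml"))
}
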
	I0704 00:10:48.116034   62670 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:10:48.116121   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
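
From here the log re-runs pgrep roughly every 500ms until a kube-apiserver process appears (the repeated "Run: sudo pgrep" lines that follow). A minimal sketch of that wait loop in Go, assuming sudo and pgrep are available, could be:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process shows up
// or the timeout expires, mirroring the ~500ms retry cadence in the log.
func waitForAPIServerProcess(timeout time.Duration) (int, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			var pid int
			fmt.Sscanf(strings.TrimSpace(string(out)), "%d", &pid)
			return pid, nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return 0, errors.New("timed out waiting for kube-apiserver process")
}

func main() {
	pid, err := waitForAPIServerProcess(2 * time.Minute)
	fmt.Println(pid, err)
}
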
	I0704 00:10:45.284973   62327 pod_ready.go:102] pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:46.800145   62327 pod_ready.go:92] pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.800170   62327 pod_ready.go:81] duration metric: took 4.507886037s for pod "kube-apiserver-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.800179   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.805577   62327 pod_ready.go:92] pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.805599   62327 pod_ready.go:81] duration metric: took 5.413826ms for pod "kube-controller-manager-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.805611   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.811066   62327 pod_ready.go:92] pod "kube-proxy-9phtm" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.811085   62327 pod_ready.go:81] duration metric: took 5.469666ms for pod "kube-proxy-9phtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.811094   62327 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.815670   62327 pod_ready.go:92] pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace has status "Ready":"True"
	I0704 00:10:46.815690   62327 pod_ready.go:81] duration metric: took 4.589606ms for pod "kube-scheduler-embed-certs-687975" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:46.815700   62327 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	I0704 00:10:48.822325   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
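
The pod_ready lines above poll each control-plane pod until its Ready condition turns True (metrics-server never does in this run). A rough client-go sketch of the same check is shown here; the kubeconfig path is a placeholder, not a value read from this run:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls the API server every two seconds until the named pod
// reports Ready or the timeout passes.
func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitForPodReady(cs, "kube-system", "metrics-server-569cc877fc-jpmsg", 6*time.Minute))
}
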
	I0704 00:10:48.137949   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:48.138359   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:48.138387   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:48.138307   63853 retry.go:31] will retry after 2.765241886s: waiting for machine to come up
	I0704 00:10:50.905039   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:50.905688   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:50.905724   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:50.905624   63853 retry.go:31] will retry after 3.145956682s: waiting for machine to come up
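
The "will retry after ..." lines come from a retry helper that waits a growing, jittered delay between attempts while the VM acquires an IP address. A generic sketch of that pattern in plain Go (the delay schedule below is an assumption for illustration, not libmachine's exact backoff) might be:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// sleeping for a jittered, growing delay between tries (the pattern behind
// the "will retry after ..." lines).
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the delay linearly and add up to 50% random jitter.
		delay := base * time.Duration(i+1)
		delay += time.Duration(rand.Int63n(int64(delay)/2 + 1))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	i := 0
	err := retryWithBackoff(5, time.Second, func() error {
		i++
		if i < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println(err)
}
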
	I0704 00:10:48.617078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:49.116898   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:49.617127   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.116442   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.617078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:51.117096   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:51.617176   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:52.116333   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:52.616675   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:53.116408   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:50.822990   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:52.823438   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:54.053147   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:54.053593   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | unable to find current IP address of domain default-k8s-diff-port-995404 in network mk-default-k8s-diff-port-995404
	I0704 00:10:54.053630   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | I0704 00:10:54.053544   63853 retry.go:31] will retry after 4.352124904s: waiting for machine to come up
	I0704 00:10:53.616873   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:54.116661   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:54.616248   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:55.116316   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:55.616460   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:56.116311   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:56.616502   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:57.116856   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:57.616948   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:58.117055   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:54.829173   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:57.322196   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:10:59.628966   62043 start.go:364] duration metric: took 56.236390336s to acquireMachinesLock for "no-preload-317739"
	I0704 00:10:59.629020   62043 start.go:96] Skipping create...Using existing machine configuration
	I0704 00:10:59.629029   62043 fix.go:54] fixHost starting: 
	I0704 00:10:59.629441   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:10:59.629483   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:10:59.649272   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0704 00:10:59.649745   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:10:59.650216   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:10:59.650245   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:10:59.650615   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:10:59.650807   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:10:59.650944   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:10:59.652724   62043 fix.go:112] recreateIfNeeded on no-preload-317739: state=Stopped err=<nil>
	I0704 00:10:59.652750   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	W0704 00:10:59.652901   62043 fix.go:138] unexpected machine state, will restart: <nil>
	I0704 00:10:59.655010   62043 out.go:177] * Restarting existing kvm2 VM for "no-preload-317739" ...
	I0704 00:10:59.656335   62043 main.go:141] libmachine: (no-preload-317739) Calling .Start
	I0704 00:10:59.656519   62043 main.go:141] libmachine: (no-preload-317739) Ensuring networks are active...
	I0704 00:10:59.657343   62043 main.go:141] libmachine: (no-preload-317739) Ensuring network default is active
	I0704 00:10:59.657714   62043 main.go:141] libmachine: (no-preload-317739) Ensuring network mk-no-preload-317739 is active
	I0704 00:10:59.658209   62043 main.go:141] libmachine: (no-preload-317739) Getting domain xml...
	I0704 00:10:59.658812   62043 main.go:141] libmachine: (no-preload-317739) Creating domain...
	I0704 00:10:58.407312   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.407865   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Found IP for machine: 192.168.50.164
	I0704 00:10:58.407924   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has current primary IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.407935   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Reserving static IP address...
	I0704 00:10:58.408356   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Reserved static IP address: 192.168.50.164
	I0704 00:10:58.408378   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Waiting for SSH to be available...
	I0704 00:10:58.408396   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-995404", mac: "52:54:00:ea:f6:7c", ip: "192.168.50.164"} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.408414   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | skip adding static IP to network mk-default-k8s-diff-port-995404 - found existing host DHCP lease matching {name: "default-k8s-diff-port-995404", mac: "52:54:00:ea:f6:7c", ip: "192.168.50.164"}
	I0704 00:10:58.408423   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Getting to WaitForSSH function...
	I0704 00:10:58.410737   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.411074   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.411103   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.411308   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Using SSH client type: external
	I0704 00:10:58.411344   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa (-rw-------)
	I0704 00:10:58.411384   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:10:58.411425   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | About to run SSH command:
	I0704 00:10:58.411445   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | exit 0
	I0704 00:10:58.532351   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | SSH cmd err, output: <nil>: 
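
WaitForSSH above shells out to the system ssh client and simply runs "exit 0" until the command succeeds. A condensed Go sketch of that probe, keeping only a few of the ssh options listed in the log, could look like:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH shells out to the system ssh client and runs "exit 0" until the
// command succeeds, which is how the "external" SSH probe in the log works.
func waitForSSH(user, addr, keyPath string, timeout time.Duration) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, addr),
		"exit 0",
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("ssh to %s not available within %v", addr, timeout)
}

func main() {
	err := waitForSSH("docker", "192.168.50.164",
		"/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa",
		2*time.Minute)
	fmt.Println(err)
}
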
	I0704 00:10:58.532719   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetConfigRaw
	I0704 00:10:58.533366   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:10:58.536176   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.536613   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.536640   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.536886   62905 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/config.json ...
	I0704 00:10:58.537129   62905 machine.go:94] provisionDockerMachine start ...
	I0704 00:10:58.537149   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:58.537389   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.539581   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.539946   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.539976   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.540099   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.540327   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.540549   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.540785   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.540976   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:58.541155   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:58.541166   62905 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:10:58.644667   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:10:58.644716   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetMachineName
	I0704 00:10:58.644986   62905 buildroot.go:166] provisioning hostname "default-k8s-diff-port-995404"
	I0704 00:10:58.645012   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetMachineName
	I0704 00:10:58.645256   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.648091   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.648519   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.648549   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.648691   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.648975   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.649174   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.649393   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.649608   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:58.649831   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:58.649857   62905 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-995404 && echo "default-k8s-diff-port-995404" | sudo tee /etc/hostname
	I0704 00:10:58.765130   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-995404
	
	I0704 00:10:58.765164   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.768571   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.768933   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.768961   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.769127   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.769343   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.769516   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.769675   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.769843   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:58.770014   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:58.770030   62905 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-995404' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-995404/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-995404' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:10:58.877852   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:10:58.877885   62905 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:10:58.877942   62905 buildroot.go:174] setting up certificates
	I0704 00:10:58.877955   62905 provision.go:84] configureAuth start
	I0704 00:10:58.877968   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetMachineName
	I0704 00:10:58.878318   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:10:58.880988   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.881321   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.881349   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.881516   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.883893   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.884213   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.884237   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.884398   62905 provision.go:143] copyHostCerts
	I0704 00:10:58.884459   62905 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:10:58.884468   62905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:10:58.884523   62905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:10:58.884628   62905 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:10:58.884639   62905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:10:58.884672   62905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:10:58.884747   62905 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:10:58.884757   62905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:10:58.884782   62905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:10:58.884838   62905 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-995404 san=[127.0.0.1 192.168.50.164 default-k8s-diff-port-995404 localhost minikube]
	I0704 00:10:58.960337   62905 provision.go:177] copyRemoteCerts
	I0704 00:10:58.960408   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:10:58.960442   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:58.962980   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.963389   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:58.963416   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:58.963585   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:58.963754   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:58.963905   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:58.964040   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.042670   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0704 00:10:59.073047   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:10:59.100579   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0704 00:10:59.127978   62905 provision.go:87] duration metric: took 250.007645ms to configureAuth
	I0704 00:10:59.128006   62905 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:10:59.128261   62905 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:10:59.128363   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.131470   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.131852   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.131906   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.132130   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.132405   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.132598   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.132771   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.132969   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:59.133176   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:59.133197   62905 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:10:59.393756   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:10:59.393791   62905 machine.go:97] duration metric: took 856.647704ms to provisionDockerMachine
	I0704 00:10:59.393808   62905 start.go:293] postStartSetup for "default-k8s-diff-port-995404" (driver="kvm2")
	I0704 00:10:59.393822   62905 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:10:59.393845   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.394143   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:10:59.394170   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.396996   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.397335   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.397366   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.397556   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.397768   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.397950   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.398094   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.479476   62905 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:10:59.484191   62905 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:10:59.484220   62905 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:10:59.484291   62905 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:10:59.484395   62905 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:10:59.484540   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:10:59.495504   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:10:59.520952   62905 start.go:296] duration metric: took 127.128284ms for postStartSetup
	I0704 00:10:59.521006   62905 fix.go:56] duration metric: took 21.75944045s for fixHost
	I0704 00:10:59.521029   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.523896   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.524210   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.524243   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.524360   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.524586   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.524777   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.524975   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.525166   62905 main.go:141] libmachine: Using SSH client type: native
	I0704 00:10:59.525322   62905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0704 00:10:59.525339   62905 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:10:59.628816   62905 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051859.612598562
	
	I0704 00:10:59.628848   62905 fix.go:216] guest clock: 1720051859.612598562
	I0704 00:10:59.628857   62905 fix.go:229] Guest: 2024-07-04 00:10:59.612598562 +0000 UTC Remote: 2024-07-04 00:10:59.52101038 +0000 UTC m=+237.085876440 (delta=91.588182ms)
	I0704 00:10:59.628881   62905 fix.go:200] guest clock delta is within tolerance: 91.588182ms
	I0704 00:10:59.628887   62905 start.go:83] releasing machines lock for "default-k8s-diff-port-995404", held for 21.867375782s
	I0704 00:10:59.628917   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.629243   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:10:59.632256   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.632628   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.632656   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.632816   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.633343   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.633561   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:10:59.633655   62905 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:10:59.633693   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.633774   62905 ssh_runner.go:195] Run: cat /version.json
	I0704 00:10:59.633792   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:10:59.636540   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.636660   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.636943   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.636972   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.637079   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:10:59.637097   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.637107   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:10:59.637292   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:10:59.637295   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.637491   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.637498   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:10:59.637650   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:10:59.637654   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.637779   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:10:59.713988   62905 ssh_runner.go:195] Run: systemctl --version
	I0704 00:10:59.743264   62905 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:10:59.895553   62905 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:10:59.902538   62905 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:10:59.902604   62905 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:10:59.919858   62905 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:10:59.919899   62905 start.go:494] detecting cgroup driver to use...
	I0704 00:10:59.919964   62905 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:10:59.940739   62905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:10:59.961053   62905 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:10:59.961114   62905 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:10:59.980549   62905 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:11:00.002843   62905 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:11:00.133319   62905 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:11:00.307416   62905 docker.go:233] disabling docker service ...
	I0704 00:11:00.307484   62905 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:11:00.325714   62905 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:11:00.342008   62905 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:11:00.469418   62905 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:11:00.594775   62905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:11:00.612900   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:11:00.636854   62905 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:11:00.636912   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.650940   62905 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:11:00.651007   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.664849   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.678200   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.691929   62905 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:11:00.708729   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.721874   62905 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:00.747189   62905 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
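
The run of sed commands above rewrites the cri-o drop-in config: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A reduced Go sketch covering just the first two substitutions, operating on the file directly rather than through sed, could be:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// patchCrioConf rewrites the pause image and cgroup manager settings in a
// cri-o drop-in config, the Go equivalent of the sed one-liners in the log.
func patchCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.9", "cgroupfs")
	fmt.Println(err)
}
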
	I0704 00:11:00.766255   62905 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:11:00.778139   62905 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:11:00.778208   62905 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:11:00.794170   62905 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:11:00.805772   62905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:00.945526   62905 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:11:01.095767   62905 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:11:01.095849   62905 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:11:01.101337   62905 start.go:562] Will wait 60s for crictl version
	I0704 00:11:01.101410   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:11:01.105792   62905 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:11:01.149911   62905 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:11:01.149983   62905 ssh_runner.go:195] Run: crio --version
	I0704 00:11:01.183494   62905 ssh_runner.go:195] Run: crio --version
	I0704 00:11:01.221773   62905 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0704 00:11:01.223142   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetIP
	I0704 00:11:01.226142   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:01.226595   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:01.226626   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:01.227009   62905 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0704 00:11:01.231704   62905 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:01.246258   62905 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:11:01.246373   62905 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:11:01.246414   62905 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:11:01.288814   62905 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:11:01.288885   62905 ssh_runner.go:195] Run: which lz4
	I0704 00:11:01.293591   62905 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0704 00:11:01.298567   62905 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0704 00:11:01.298606   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
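
Since no preloaded images were found in the runtime, the log stats /preloaded.tar.lz4 on the guest and, missing it, copies the ~395 MB preload tarball over. A tiny sketch of that existence check (done locally here for illustration; the real code stats the path over SSH before scp'ing) might be:

package main

import (
	"fmt"
	"os"
)

// needsPreloadUpload reports whether the preload tarball must be copied to the
// guest: true when the tarball path does not exist yet.
func needsPreloadUpload(guestPath string) (bool, error) {
	info, err := os.Stat(guestPath)
	if os.IsNotExist(err) {
		return true, nil
	}
	if err != nil {
		return false, err
	}
	fmt.Printf("preload already present (%d bytes)\n", info.Size())
	return false, nil
}

func main() {
	need, err := needsPreloadUpload("/preloaded.tar.lz4")
	fmt.Println(need, err)
}
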
	I0704 00:10:58.617090   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.116577   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.617087   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:00.117110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:00.617014   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:01.117093   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:01.616271   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:02.116809   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:02.617098   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:03.117166   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:10:59.323461   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:01.324078   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:03.824174   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:00.942384   62043 main.go:141] libmachine: (no-preload-317739) Waiting to get IP...
	I0704 00:11:00.943186   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:00.943675   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:00.943756   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:00.943653   64017 retry.go:31] will retry after 249.292607ms: waiting for machine to come up
	I0704 00:11:01.194377   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:01.194895   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:01.194954   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:01.194870   64017 retry.go:31] will retry after 262.613081ms: waiting for machine to come up
	I0704 00:11:01.459428   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:01.460003   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:01.460038   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:01.459944   64017 retry.go:31] will retry after 478.141622ms: waiting for machine to come up
	I0704 00:11:01.939357   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:01.939939   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:01.939974   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:01.939898   64017 retry.go:31] will retry after 536.153389ms: waiting for machine to come up
	I0704 00:11:02.477947   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:02.478481   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:02.478506   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:02.478420   64017 retry.go:31] will retry after 673.23866ms: waiting for machine to come up
	I0704 00:11:03.153142   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:03.153668   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:03.153700   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:03.153615   64017 retry.go:31] will retry after 826.785177ms: waiting for machine to come up
	I0704 00:11:03.981781   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:03.982279   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:03.982313   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:03.982215   64017 retry.go:31] will retry after 834.05017ms: waiting for machine to come up
	I0704 00:11:04.817689   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:04.818294   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:04.818323   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:04.818249   64017 retry.go:31] will retry after 1.153846982s: waiting for machine to come up
	I0704 00:11:02.979209   62905 crio.go:462] duration metric: took 1.685660087s to copy over tarball
	I0704 00:11:02.979307   62905 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0704 00:11:05.406788   62905 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.427439702s)
	I0704 00:11:05.406816   62905 crio.go:469] duration metric: took 2.427578287s to extract the tarball
	I0704 00:11:05.406823   62905 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0704 00:11:05.448710   62905 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:11:05.498336   62905 crio.go:514] all images are preloaded for cri-o runtime.
	I0704 00:11:05.498367   62905 cache_images.go:84] Images are preloaded, skipping loading
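
The 62905 process above is restoring the CRI-O image cache: it asks `crictl images --output json` whether the preloaded images are present, and when they are missing it copies the preload tarball over and unpacks it with `tar -I lz4 -C /var` before re-checking. A minimal Go sketch of that flow (the function name and arguments are illustrative, not minikube's actual API) might look like:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// ensurePreload mirrors the flow in the log: if the expected image is missing
// from the CRI-O image store, extract a preloaded tarball into /var.
// Image name and tarball path are assumptions for illustration only.
func ensurePreload(image, tarball string) error {
	// Ask the container runtime which images it already has.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return fmt.Errorf("listing images: %w", err)
	}
	if bytes.Contains(out, []byte(image)) {
		return nil // already preloaded, nothing to do
	}
	// Extract the lz4-compressed tarball the same way the log shows.
	tar := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if err := tar.Run(); err != nil {
		return fmt.Errorf("extracting %s: %w", tarball, err)
	}
	return nil
}
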
	I0704 00:11:05.498375   62905 kubeadm.go:928] updating node { 192.168.50.164 8444 v1.30.2 crio true true} ...
	I0704 00:11:05.498487   62905 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-995404 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:11:05.498549   62905 ssh_runner.go:195] Run: crio config
	I0704 00:11:05.552676   62905 cni.go:84] Creating CNI manager for ""
	I0704 00:11:05.552706   62905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:05.552717   62905 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:11:05.552738   62905 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.164 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-995404 NodeName:default-k8s-diff-port-995404 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:11:05.552895   62905 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.164
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-995404"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:11:05.552966   62905 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:11:05.564067   62905 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:11:05.564149   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:11:05.574991   62905 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0704 00:11:05.597644   62905 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:11:05.619456   62905 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0704 00:11:05.640655   62905 ssh_runner.go:195] Run: grep 192.168.50.164	control-plane.minikube.internal$ /etc/hosts
	I0704 00:11:05.644975   62905 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:05.659570   62905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:05.800862   62905 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:11:05.821044   62905 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404 for IP: 192.168.50.164
	I0704 00:11:05.821068   62905 certs.go:194] generating shared ca certs ...
	I0704 00:11:05.821087   62905 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:05.821258   62905 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:11:05.821312   62905 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:11:05.821324   62905 certs.go:256] generating profile certs ...
	I0704 00:11:05.821424   62905 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/client.key
	I0704 00:11:05.821496   62905 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/apiserver.key.4c35c707
	I0704 00:11:05.821547   62905 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/proxy-client.key
	I0704 00:11:05.821689   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:11:05.821729   62905 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:11:05.821741   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:11:05.821773   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:11:05.821800   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:11:05.821831   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:11:05.821893   62905 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:11:05.822753   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:11:05.867477   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:11:05.914405   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:11:05.952321   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:11:05.989578   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0704 00:11:06.031270   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0704 00:11:06.067171   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:11:06.096850   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0704 00:11:06.127959   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:11:06.156780   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:11:06.187472   62905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:11:06.216078   62905 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:11:06.239490   62905 ssh_runner.go:195] Run: openssl version
	I0704 00:11:06.246358   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:11:06.259420   62905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:11:06.266320   62905 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:11:06.266394   62905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:11:06.273098   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:11:06.285864   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:11:06.298505   62905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:06.303642   62905 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:06.303734   62905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:06.310459   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:11:06.325238   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:11:06.342534   62905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:11:06.349585   62905 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:11:06.349659   62905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:11:06.358043   62905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:11:06.374741   62905 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:11:06.380246   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:11:06.387593   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:11:06.394954   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:11:06.402600   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:11:06.409731   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:11:06.416688   62905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
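
The run of `openssl x509 -noout -in <cert> -checkend 86400` calls above verifies that each control-plane certificate remains valid for at least the next 86400 seconds (24 hours); openssl exits 0 when the certificate will not expire inside that window and 1 when it will. A hedged Go sketch of the same check (helper name is hypothetical):

package main

import (
	"errors"
	"os/exec"
	"strconv"
)

// certValidFor reports whether the certificate at path stays valid for at
// least `seconds` more seconds, using the same openssl invocation as the log.
func certValidFor(path string, seconds int) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", strconv.Itoa(seconds))
	err := cmd.Run()
	if err == nil {
		return true, nil // exit 0: does not expire within the window
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // exit 1: certificate expires within the window
	}
	return false, err // openssl itself failed to run
}
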
	I0704 00:11:06.423435   62905 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-995404 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.2 ClusterName:default-k8s-diff-port-995404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:11:06.423559   62905 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:11:06.423620   62905 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:06.470763   62905 cri.go:89] found id: ""
	I0704 00:11:06.470846   62905 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:11:06.482587   62905 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:11:06.482611   62905 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:11:06.482617   62905 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:11:06.482667   62905 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:11:06.497553   62905 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:11:06.498625   62905 kubeconfig.go:125] found "default-k8s-diff-port-995404" server: "https://192.168.50.164:8444"
	I0704 00:11:06.500884   62905 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:11:06.514955   62905 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.164
	I0704 00:11:06.514990   62905 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:11:06.515004   62905 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:11:06.515063   62905 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:06.560079   62905 cri.go:89] found id: ""
	I0704 00:11:06.560153   62905 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:11:06.579839   62905 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:11:06.591817   62905 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:11:06.591845   62905 kubeadm.go:156] found existing configuration files:
	
	I0704 00:11:06.591939   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0704 00:11:06.602820   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:11:06.602891   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:11:06.615114   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0704 00:11:06.626812   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:11:06.626906   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:11:06.638990   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0704 00:11:06.650344   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:11:06.650412   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:11:06.662736   62905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0704 00:11:06.673392   62905 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:11:06.673468   62905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:11:06.684908   62905 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:11:06.696008   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:06.827071   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:03.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:04.117137   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:04.616945   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:05.117085   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:05.616894   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.116767   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.616746   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:07.116615   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:07.616302   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:08.116699   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:06.324083   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:08.832523   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:05.974211   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:05.974953   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:05.974981   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:05.974853   64017 retry.go:31] will retry after 1.513213206s: waiting for machine to come up
	I0704 00:11:07.489878   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:07.490415   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:07.490447   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:07.490366   64017 retry.go:31] will retry after 1.861027199s: waiting for machine to come up
	I0704 00:11:09.353265   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:09.353877   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:09.353909   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:09.353788   64017 retry.go:31] will retry after 2.788986438s: waiting for machine to come up
	I0704 00:11:07.860520   62905 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.033413742s)
	I0704 00:11:07.860555   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:08.112931   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:08.199561   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:08.297827   62905 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:11:08.297919   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:08.798666   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.299001   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.326939   62905 api_server.go:72] duration metric: took 1.029121669s to wait for apiserver process to appear ...
	I0704 00:11:09.326980   62905 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:11:09.327006   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:09.327687   62905 api_server.go:269] stopped: https://192.168.50.164:8444/healthz: Get "https://192.168.50.164:8444/healthz": dial tcp 192.168.50.164:8444: connect: connection refused
	I0704 00:11:09.827140   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:12.356043   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:11:12.356074   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:11:12.356090   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:12.431816   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:12.431868   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:08.617011   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.116544   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:09.617105   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:10.117154   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:10.616678   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:11.117137   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:11.617077   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.116897   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.617090   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:13.116877   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:12.827129   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:12.833217   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:12.833244   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:13.327458   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:13.335182   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:13.335216   62905 api_server.go:103] status: https://192.168.50.164:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:13.827833   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:11:13.833899   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 200:
	ok
	I0704 00:11:13.845708   62905 api_server.go:141] control plane version: v1.30.2
	I0704 00:11:13.845742   62905 api_server.go:131] duration metric: took 4.518754781s to wait for apiserver health ...
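
The api_server.go lines above poll https://192.168.50.164:8444/healthz roughly every half second: first the connection is refused, then the endpoint answers 403 (anonymous user) and 500 while post-start hooks finish, and finally 200. A minimal Go polling sketch of that pattern follows; it skips TLS verification purely to stay short, whereas minikube itself uses the cluster CA and client certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls an apiserver /healthz endpoint until it returns 200 or
// the timeout elapses.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		// Connection refused, 403, or 500: back off briefly and retry,
		// as the log above does every ~500ms.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not become healthy within %v", url, timeout)
}
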
	I0704 00:11:13.845754   62905 cni.go:84] Creating CNI manager for ""
	I0704 00:11:13.845763   62905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:13.847527   62905 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:11:11.322070   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:13.325898   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:13.848990   62905 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:11:13.866061   62905 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0704 00:11:13.895651   62905 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:11:13.907155   62905 system_pods.go:59] 8 kube-system pods found
	I0704 00:11:13.907202   62905 system_pods.go:61] "coredns-7db6d8ff4d-jmq4s" [f9725f92-7635-4111-bf63-66dbef0155b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0704 00:11:13.907214   62905 system_pods.go:61] "etcd-default-k8s-diff-port-995404" [d5065c53-cda8-4c79-9d88-de10341356a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0704 00:11:13.907225   62905 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-995404" [81395c0d-d19c-4d3d-8935-c35a9507abdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0704 00:11:13.907236   62905 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-995404" [36828d21-843b-409c-a0b3-60293cb50c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0704 00:11:13.907245   62905 system_pods.go:61] "kube-proxy-pplqq" [3b74a8c2-1e91-449d-9be9-8891459dccbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0704 00:11:13.907255   62905 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-995404" [e87cc576-7a8d-43dd-9778-b13d751976be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0704 00:11:13.907267   62905 system_pods.go:61] "metrics-server-569cc877fc-v8qw2" [d6a67fb7-5004-4c93-9023-fc470f786ae9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:11:13.907278   62905 system_pods.go:61] "storage-provisioner" [3adc3ff6-282f-4f53-879f-c73d71c76b74] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0704 00:11:13.907290   62905 system_pods.go:74] duration metric: took 11.616438ms to wait for pod list to return data ...
	I0704 00:11:13.907304   62905 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:11:13.911071   62905 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:11:13.911108   62905 node_conditions.go:123] node cpu capacity is 2
	I0704 00:11:13.911121   62905 node_conditions.go:105] duration metric: took 3.808665ms to run NodePressure ...
	I0704 00:11:13.911142   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:14.227778   62905 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0704 00:11:14.232972   62905 kubeadm.go:733] kubelet initialised
	I0704 00:11:14.232999   62905 kubeadm.go:734] duration metric: took 5.196343ms waiting for restarted kubelet to initialise ...
	I0704 00:11:14.233008   62905 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:11:14.239587   62905 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.248503   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.248527   62905 pod_ready.go:81] duration metric: took 8.915991ms for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.248536   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.248546   62905 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.252808   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.252833   62905 pod_ready.go:81] duration metric: took 4.278735ms for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.252844   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.252850   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.257839   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.257865   62905 pod_ready.go:81] duration metric: took 5.008527ms for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.257874   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.257881   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.300453   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.300496   62905 pod_ready.go:81] duration metric: took 42.606835ms for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.300514   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.300532   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:14.699049   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-proxy-pplqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.699081   62905 pod_ready.go:81] duration metric: took 398.532074ms for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:14.699091   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-proxy-pplqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:14.699098   62905 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:15.099751   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.099781   62905 pod_ready.go:81] duration metric: took 400.673785ms for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:15.099794   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.099802   62905 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:15.499381   62905 pod_ready.go:97] node "default-k8s-diff-port-995404" hosting pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.499415   62905 pod_ready.go:81] duration metric: took 399.604282ms for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:15.499430   62905 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-995404" hosting pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:15.499440   62905 pod_ready.go:38] duration metric: took 1.266419771s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
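
The pod_ready.go loop above waits for each system-critical pod to report a Ready condition, short-circuiting when the hosting node itself is not Ready yet. A simplified client-go sketch of that wait for a single pod is below; it is a stand-in for illustration only and deliberately omits the node-not-Ready shortcut seen in the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout
// expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}
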
	I0704 00:11:15.499472   62905 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:11:15.512486   62905 ops.go:34] apiserver oom_adj: -16
	I0704 00:11:15.512519   62905 kubeadm.go:591] duration metric: took 9.029896614s to restartPrimaryControlPlane
	I0704 00:11:15.512530   62905 kubeadm.go:393] duration metric: took 9.089103352s to StartCluster
	I0704 00:11:15.512545   62905 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:15.512620   62905 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:11:15.514491   62905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:15.514770   62905 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.164 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:11:15.514886   62905 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:11:15.514995   62905 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-995404"
	I0704 00:11:15.515051   62905 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-995404"
	I0704 00:11:15.515054   62905 config.go:182] Loaded profile config "default-k8s-diff-port-995404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	W0704 00:11:15.515058   62905 addons.go:243] addon storage-provisioner should already be in state true
	I0704 00:11:15.515045   62905 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-995404"
	I0704 00:11:15.515098   62905 host.go:66] Checking if "default-k8s-diff-port-995404" exists ...
	I0704 00:11:15.515108   62905 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-995404"
	I0704 00:11:15.515100   62905 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-995404"
	I0704 00:11:15.515176   62905 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-995404"
	W0704 00:11:15.515196   62905 addons.go:243] addon metrics-server should already be in state true
	I0704 00:11:15.515258   62905 host.go:66] Checking if "default-k8s-diff-port-995404" exists ...
	I0704 00:11:15.515473   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.515517   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.515554   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.515521   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.515731   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.515773   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.517021   62905 out.go:177] * Verifying Kubernetes components...
	I0704 00:11:15.518682   62905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:15.532184   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0704 00:11:15.532716   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.533287   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.533318   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.533688   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.533710   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36907
	I0704 00:11:15.533894   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.534143   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.534747   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.534774   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.535162   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.535835   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.535895   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.536774   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37887
	I0704 00:11:15.537162   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.537690   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.537702   62905 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-995404"
	W0704 00:11:15.537715   62905 addons.go:243] addon default-storageclass should already be in state true
	I0704 00:11:15.537719   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.537743   62905 host.go:66] Checking if "default-k8s-diff-port-995404" exists ...
	I0704 00:11:15.538134   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.538147   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.538211   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.538756   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.538789   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.554800   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44851
	I0704 00:11:15.554820   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33429
	I0704 00:11:15.555279   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.555417   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.555988   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.556006   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.556255   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.556276   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.556445   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.556628   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.556637   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.556819   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.558057   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38911
	I0704 00:11:15.558381   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.558768   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:11:15.558842   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:11:15.558932   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.558950   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.559179   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.559587   62905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:11:15.559610   62905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:11:15.561573   62905 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:15.561578   62905 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0704 00:11:12.146246   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:12.146817   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:12.146844   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:12.146774   64017 retry.go:31] will retry after 2.705005802s: waiting for machine to come up
	I0704 00:11:14.853545   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:14.854045   62043 main.go:141] libmachine: (no-preload-317739) DBG | unable to find current IP address of domain no-preload-317739 in network mk-no-preload-317739
	I0704 00:11:14.854070   62043 main.go:141] libmachine: (no-preload-317739) DBG | I0704 00:11:14.854001   64017 retry.go:31] will retry after 3.923203683s: waiting for machine to come up
	I0704 00:11:15.563208   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0704 00:11:15.563233   62905 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0704 00:11:15.563259   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:11:15.563282   62905 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:11:15.563297   62905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:11:15.563312   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:11:15.567358   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.567365   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.567758   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:15.567789   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.567823   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:15.567841   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.568374   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:11:15.568472   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:11:15.568596   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:11:15.568652   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:11:15.568744   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:11:15.568833   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:11:15.568853   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:11:15.568955   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:11:15.578317   62905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40115
	I0704 00:11:15.578737   62905 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:11:15.579322   62905 main.go:141] libmachine: Using API Version  1
	I0704 00:11:15.579343   62905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:11:15.579673   62905 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:11:15.579864   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetState
	I0704 00:11:15.582114   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .DriverName
	I0704 00:11:15.582330   62905 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:11:15.582346   62905 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:11:15.582363   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHHostname
	I0704 00:11:15.585542   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.585917   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:f6:7c", ip: ""} in network mk-default-k8s-diff-port-995404: {Iface:virbr2 ExpiryTime:2024-07-04 01:02:53 +0000 UTC Type:0 Mac:52:54:00:ea:f6:7c Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:default-k8s-diff-port-995404 Clientid:01:52:54:00:ea:f6:7c}
	I0704 00:11:15.585964   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | domain default-k8s-diff-port-995404 has defined IP address 192.168.50.164 and MAC address 52:54:00:ea:f6:7c in network mk-default-k8s-diff-port-995404
	I0704 00:11:15.586130   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHPort
	I0704 00:11:15.586317   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHKeyPath
	I0704 00:11:15.586503   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .GetSSHUsername
	I0704 00:11:15.586677   62905 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/default-k8s-diff-port-995404/id_rsa Username:docker}
	I0704 00:11:15.713704   62905 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:11:15.734147   62905 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-995404" to be "Ready" ...
	I0704 00:11:15.837690   62905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:11:15.858615   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0704 00:11:15.858645   62905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0704 00:11:15.883792   62905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:11:15.904371   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0704 00:11:15.904394   62905 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0704 00:11:15.947164   62905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:11:15.947205   62905 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0704 00:11:15.976721   62905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:11:16.926851   62905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.089126041s)
	I0704 00:11:16.926885   62905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.043064078s)
	I0704 00:11:16.926909   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.926920   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.926909   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.926989   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.927261   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.927280   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.927290   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.927299   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.927338   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.927382   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.927406   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.927415   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.927423   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.927989   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.928013   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.928022   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.928040   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.928118   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.928187   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.935023   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.935043   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.935367   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.935387   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.963483   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.963508   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.963834   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.963857   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.963866   62905 main.go:141] libmachine: Making call to close driver server
	I0704 00:11:16.963898   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) Calling .Close
	I0704 00:11:16.964130   62905 main.go:141] libmachine: (default-k8s-diff-port-995404) DBG | Closing plugin on server side
	I0704 00:11:16.964181   62905 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:11:16.964198   62905 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:11:16.964220   62905 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-995404"
	I0704 00:11:16.966338   62905 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0704 00:11:16.967695   62905 addons.go:510] duration metric: took 1.45282727s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
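	[editor's note] The 62905 run above completes its addon phase by scp-ing the storage-provisioner, storageclass and metrics-server manifests into /etc/kubernetes/addons and applying them with the bundled kubectl over SSH (see the "kubectl apply -f ..." Run lines). The following is a minimal, hypothetical Go sketch of that apply step for readers unfamiliar with the flow; the paths are copied from the log, but the exec-based approach is illustrative only and is not minikube's actual code, and it will only succeed when run inside a minikube guest.

	// Hypothetical sketch (not minikube source): apply the addon manifests the log
	// shows under /etc/kubernetes/addons with the bundled kubectl and the in-guest
	// kubeconfig, mirroring the ssh_runner "kubectl apply -f ..." command above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}

		// Build: kubectl apply -f m1 -f m2 ..., as in the logged command.
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}

		cmd := exec.Command("/var/lib/minikube/binaries/v1.30.2/kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Fprintln(os.Stderr, "apply failed:", err)
			os.Exit(1)
		}
	}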
	I0704 00:11:13.616762   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:14.116987   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:14.616559   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.117027   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.617171   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:16.117120   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:16.616978   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:17.116571   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:17.616537   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:18.117113   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:15.822595   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:18.323016   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:18.782030   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.782543   62043 main.go:141] libmachine: (no-preload-317739) Found IP for machine: 192.168.61.109
	I0704 00:11:18.782568   62043 main.go:141] libmachine: (no-preload-317739) Reserving static IP address...
	I0704 00:11:18.782585   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has current primary IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.782953   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "no-preload-317739", mac: "52:54:00:2a:87:12", ip: "192.168.61.109"} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.782982   62043 main.go:141] libmachine: (no-preload-317739) DBG | skip adding static IP to network mk-no-preload-317739 - found existing host DHCP lease matching {name: "no-preload-317739", mac: "52:54:00:2a:87:12", ip: "192.168.61.109"}
	I0704 00:11:18.782996   62043 main.go:141] libmachine: (no-preload-317739) Reserved static IP address: 192.168.61.109
	I0704 00:11:18.783014   62043 main.go:141] libmachine: (no-preload-317739) Waiting for SSH to be available...
	I0704 00:11:18.783031   62043 main.go:141] libmachine: (no-preload-317739) DBG | Getting to WaitForSSH function...
	I0704 00:11:18.785230   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.785559   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.785593   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.785687   62043 main.go:141] libmachine: (no-preload-317739) DBG | Using SSH client type: external
	I0704 00:11:18.785742   62043 main.go:141] libmachine: (no-preload-317739) DBG | Using SSH private key: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa (-rw-------)
	I0704 00:11:18.785770   62043 main.go:141] libmachine: (no-preload-317739) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0704 00:11:18.785801   62043 main.go:141] libmachine: (no-preload-317739) DBG | About to run SSH command:
	I0704 00:11:18.785811   62043 main.go:141] libmachine: (no-preload-317739) DBG | exit 0
	I0704 00:11:18.908065   62043 main.go:141] libmachine: (no-preload-317739) DBG | SSH cmd err, output: <nil>: 
	I0704 00:11:18.908449   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetConfigRaw
	I0704 00:11:18.909142   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:18.911622   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.912075   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.912125   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.912371   62043 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/config.json ...
	I0704 00:11:18.912581   62043 machine.go:94] provisionDockerMachine start ...
	I0704 00:11:18.912599   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:18.912796   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:18.915233   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.915675   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:18.915709   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:18.915971   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:18.916175   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:18.916345   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:18.916488   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:18.916689   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:18.916853   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:18.916864   62043 main.go:141] libmachine: About to run SSH command:
	hostname
	I0704 00:11:19.024629   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0704 00:11:19.024661   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:11:19.024913   62043 buildroot.go:166] provisioning hostname "no-preload-317739"
	I0704 00:11:19.024929   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:11:19.025143   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.028262   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.028629   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.028653   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.028838   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.029042   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.029233   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.029381   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.029528   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:19.029696   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:19.029708   62043 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-317739 && echo "no-preload-317739" | sudo tee /etc/hostname
	I0704 00:11:19.148642   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-317739
	
	I0704 00:11:19.148679   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.151295   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.151766   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.151788   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.152030   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.152247   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.152438   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.152556   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.152733   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:19.152937   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:19.152953   62043 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-317739' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-317739/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-317739' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0704 00:11:19.267475   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0704 00:11:19.267510   62043 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9396/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9396/.minikube}
	I0704 00:11:19.267541   62043 buildroot.go:174] setting up certificates
	I0704 00:11:19.267553   62043 provision.go:84] configureAuth start
	I0704 00:11:19.267566   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetMachineName
	I0704 00:11:19.267936   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:19.270884   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.271381   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.271409   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.271619   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.274267   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.274641   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.274665   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.274887   62043 provision.go:143] copyHostCerts
	I0704 00:11:19.274950   62043 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem, removing ...
	I0704 00:11:19.274962   62043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem
	I0704 00:11:19.275030   62043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/key.pem (1679 bytes)
	I0704 00:11:19.275236   62043 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem, removing ...
	I0704 00:11:19.275250   62043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem
	I0704 00:11:19.275284   62043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/ca.pem (1078 bytes)
	I0704 00:11:19.275360   62043 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem, removing ...
	I0704 00:11:19.275367   62043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem
	I0704 00:11:19.275387   62043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9396/.minikube/cert.pem (1123 bytes)
	I0704 00:11:19.275440   62043 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem org=jenkins.no-preload-317739 san=[127.0.0.1 192.168.61.109 localhost minikube no-preload-317739]
	I0704 00:11:19.642077   62043 provision.go:177] copyRemoteCerts
	I0704 00:11:19.642133   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0704 00:11:19.642154   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.645168   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.645553   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.645582   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.645803   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.646005   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.646189   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.646338   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:19.731637   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0704 00:11:19.758538   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0704 00:11:19.783554   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0704 00:11:19.809538   62043 provision.go:87] duration metric: took 541.971127ms to configureAuth
	I0704 00:11:19.809571   62043 buildroot.go:189] setting minikube options for container-runtime
	I0704 00:11:19.809800   62043 config.go:182] Loaded profile config "no-preload-317739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0704 00:11:19.809877   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:19.813528   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.814000   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:19.814042   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:19.814213   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:19.814451   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.814641   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:19.814831   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:19.815078   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:19.815287   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:19.815328   62043 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0704 00:11:20.098956   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0704 00:11:20.098984   62043 machine.go:97] duration metric: took 1.186389847s to provisionDockerMachine
	I0704 00:11:20.098999   62043 start.go:293] postStartSetup for "no-preload-317739" (driver="kvm2")
	I0704 00:11:20.099011   62043 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0704 00:11:20.099037   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.099367   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0704 00:11:20.099397   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.102274   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.102624   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.102650   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.102870   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.103084   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.103254   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.103394   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:20.187063   62043 ssh_runner.go:195] Run: cat /etc/os-release
	I0704 00:11:20.192127   62043 info.go:137] Remote host: Buildroot 2023.02.9
	I0704 00:11:20.192159   62043 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/addons for local assets ...
	I0704 00:11:20.192253   62043 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9396/.minikube/files for local assets ...
	I0704 00:11:20.192344   62043 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem -> 165742.pem in /etc/ssl/certs
	I0704 00:11:20.192451   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0704 00:11:20.202990   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:11:20.231649   62043 start.go:296] duration metric: took 132.636585ms for postStartSetup
	I0704 00:11:20.231689   62043 fix.go:56] duration metric: took 20.60266165s for fixHost
	I0704 00:11:20.231708   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.234708   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.235099   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.235129   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.235376   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.235606   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.235813   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.236042   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.236254   62043 main.go:141] libmachine: Using SSH client type: native
	I0704 00:11:20.236447   62043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.61.109 22 <nil> <nil>}
	I0704 00:11:20.236460   62043 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0704 00:11:20.340846   62043 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720051880.311820466
	
	I0704 00:11:20.340874   62043 fix.go:216] guest clock: 1720051880.311820466
	I0704 00:11:20.340883   62043 fix.go:229] Guest: 2024-07-04 00:11:20.311820466 +0000 UTC Remote: 2024-07-04 00:11:20.23169294 +0000 UTC m=+359.429189168 (delta=80.127526ms)
	I0704 00:11:20.340914   62043 fix.go:200] guest clock delta is within tolerance: 80.127526ms
	I0704 00:11:20.340938   62043 start.go:83] releasing machines lock for "no-preload-317739", held for 20.711925187s
	I0704 00:11:20.340963   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.341225   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:20.343787   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.344146   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.344188   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.344360   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.344810   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.344988   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:11:20.345061   62043 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0704 00:11:20.345094   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.345221   62043 ssh_runner.go:195] Run: cat /version.json
	I0704 00:11:20.345247   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:11:20.347703   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.347924   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.348121   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.348150   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.348307   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.348396   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:20.348423   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:20.348487   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.348562   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:11:20.348645   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.348706   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:11:20.348764   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:20.348864   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:11:20.348994   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:11:20.425023   62043 ssh_runner.go:195] Run: systemctl --version
	I0704 00:11:20.456031   62043 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0704 00:11:20.601693   62043 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0704 00:11:20.609524   62043 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0704 00:11:20.609617   62043 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0704 00:11:20.628076   62043 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0704 00:11:20.628105   62043 start.go:494] detecting cgroup driver to use...
	I0704 00:11:20.628180   62043 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0704 00:11:20.646749   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0704 00:11:20.663882   62043 docker.go:217] disabling cri-docker service (if available) ...
	I0704 00:11:20.663954   62043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0704 00:11:20.679371   62043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0704 00:11:20.697131   62043 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0704 00:11:20.820892   62043 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0704 00:11:20.978815   62043 docker.go:233] disabling docker service ...
	I0704 00:11:20.978893   62043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0704 00:11:21.003649   62043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0704 00:11:21.018708   62043 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0704 00:11:21.183699   62043 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0704 00:11:21.356015   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0704 00:11:21.371775   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0704 00:11:21.397901   62043 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0704 00:11:21.397977   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.410088   62043 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0704 00:11:21.410175   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.422267   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.433879   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.446464   62043 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0704 00:11:21.459090   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.474867   62043 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.497013   62043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0704 00:11:21.508678   62043 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0704 00:11:21.520003   62043 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0704 00:11:21.520074   62043 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0704 00:11:21.535778   62043 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0704 00:11:21.546698   62043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:21.707980   62043 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0704 00:11:21.855519   62043 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0704 00:11:21.855578   62043 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0704 00:11:21.861422   62043 start.go:562] Will wait 60s for crictl version
	I0704 00:11:21.861487   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:21.865898   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0704 00:11:21.909151   62043 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0704 00:11:21.909231   62043 ssh_runner.go:195] Run: crio --version
	I0704 00:11:21.940532   62043 ssh_runner.go:195] Run: crio --version
	I0704 00:11:21.971921   62043 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
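	[editor's note] In the same window, the 62043 run reconfigures CRI-O through a series of sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, "pod" conmon cgroup, and the net.ipv4.ip_unprivileged_port_start=0 sysctl) before restarting crio. As a rough illustration only, the drop-in those edits converge on is printed by the small hypothetical Go helper below; the section headers follow standard CRI-O config layout and the values come from the log, but the exact file contents on the VM may differ.

	// Hypothetical illustration (not minikube source): the CRI-O drop-in settings
	// that the sed edits logged above converge on, printed for readability.
	package main

	import "fmt"

	func main() {
		fragment := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	`
		fmt.Print(fragment)
	}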
	I0704 00:11:17.738168   62905 node_ready.go:53] node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:19.738513   62905 node_ready.go:53] node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:22.238523   62905 node_ready.go:53] node "default-k8s-diff-port-995404" has status "Ready":"False"
	I0704 00:11:18.617104   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:19.116325   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:19.616537   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.116518   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.616709   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:21.117177   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:21.617150   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:22.116980   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:22.616530   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:23.116838   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:20.824014   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:23.322845   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:21.973345   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetIP
	I0704 00:11:21.976425   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:21.976913   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:11:21.976941   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:11:21.977325   62043 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0704 00:11:21.982313   62043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:21.996098   62043 kubeadm.go:877] updating cluster {Name:no-preload-317739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.109 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0704 00:11:21.996252   62043 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0704 00:11:21.996296   62043 ssh_runner.go:195] Run: sudo crictl images --output json
	I0704 00:11:22.032178   62043 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0704 00:11:22.032210   62043 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.2 registry.k8s.io/kube-controller-manager:v1.30.2 registry.k8s.io/kube-scheduler:v1.30.2 registry.k8s.io/kube-proxy:v1.30.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0704 00:11:22.032271   62043 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:22.032305   62043 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.032319   62043 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.032373   62043 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0704 00:11:22.032399   62043 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.032400   62043 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.032375   62043 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.032429   62043 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.033814   62043 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0704 00:11:22.033826   62043 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.033847   62043 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.033812   62043 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.033815   62043 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.033912   62043 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:22.034052   62043 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.034138   62043 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.199984   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.209671   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.236796   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.240953   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.244893   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.260957   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.277666   62043 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.2" does not exist at hash "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe" in container runtime
	I0704 00:11:22.277712   62043 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.277764   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.311908   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0704 00:11:22.314095   62043 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0704 00:11:22.314137   62043 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.314190   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.400926   62043 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0704 00:11:22.400964   62043 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.401011   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.401043   62043 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.2" does not exist at hash "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772" in container runtime
	I0704 00:11:22.401080   62043 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.401121   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.401193   62043 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.2" does not exist at hash "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974" in container runtime
	I0704 00:11:22.401219   62043 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.401255   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.423931   62043 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.2" does not exist at hash "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940" in container runtime
	I0704 00:11:22.423977   62043 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.424024   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:22.424028   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.2
	I0704 00:11:22.525952   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0704 00:11:22.525991   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.2
	I0704 00:11:22.525961   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0704 00:11:22.526054   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.2
	I0704 00:11:22.526136   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.2
	I0704 00:11:22.526195   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2
	I0704 00:11:22.526285   62043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0704 00:11:22.649104   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0704 00:11:22.649109   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2
	I0704 00:11:22.649215   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2
	I0704 00:11:22.649248   62043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.2
	I0704 00:11:22.649268   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0704 00:11:22.649283   62043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0704 00:11:22.649217   62043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0704 00:11:22.649319   62043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0704 00:11:22.649349   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.2 (exists)
	I0704 00:11:22.649362   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0704 00:11:22.649386   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0704 00:11:22.649414   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2
	I0704 00:11:22.649486   62043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0704 00:11:22.654629   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.2 (exists)
	I0704 00:11:22.661840   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.2 (exists)
	I0704 00:11:22.919526   62043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:25.779714   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2: (3.130310457s)
	I0704 00:11:25.779744   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2 from cache
	I0704 00:11:25.779765   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.2
	I0704 00:11:25.779776   62043 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: (3.130431638s)
	I0704 00:11:25.779796   62043 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.2: (3.13049417s)
	I0704 00:11:25.779816   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0704 00:11:25.779817   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2
	I0704 00:11:25.779827   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.2 (exists)
	I0704 00:11:25.779856   62043 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (3.130541061s)
	I0704 00:11:25.779869   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0704 00:11:25.779908   62043 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.860354689s)
	I0704 00:11:25.779936   62043 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0704 00:11:25.779958   62043 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:25.779991   62043 ssh_runner.go:195] Run: which crictl
	I0704 00:11:23.248630   62905 node_ready.go:49] node "default-k8s-diff-port-995404" has status "Ready":"True"
	I0704 00:11:23.248671   62905 node_ready.go:38] duration metric: took 7.514485634s for node "default-k8s-diff-port-995404" to be "Ready" ...
	I0704 00:11:23.248683   62905 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:11:23.257650   62905 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.272673   62905 pod_ready.go:92] pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:23.272706   62905 pod_ready.go:81] duration metric: took 15.025018ms for pod "coredns-7db6d8ff4d-jmq4s" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.272730   62905 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.277707   62905 pod_ready.go:92] pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:23.277738   62905 pod_ready.go:81] duration metric: took 4.999575ms for pod "etcd-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.277758   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.282447   62905 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:23.282471   62905 pod_ready.go:81] duration metric: took 4.705643ms for pod "kube-apiserver-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:23.282481   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.790312   62905 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:24.790337   62905 pod_ready.go:81] duration metric: took 1.507850095s for pod "kube-controller-manager-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.790346   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.837961   62905 pod_ready.go:92] pod "kube-proxy-pplqq" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:24.837985   62905 pod_ready.go:81] duration metric: took 47.632749ms for pod "kube-proxy-pplqq" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:24.837994   62905 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:25.238771   62905 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:25.238800   62905 pod_ready.go:81] duration metric: took 400.798382ms for pod "kube-scheduler-default-k8s-diff-port-995404" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:25.238814   62905 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:27.246820   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:23.616811   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:24.117212   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:24.616915   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.117183   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.616495   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:26.117078   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:26.617000   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:27.117057   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:27.616823   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:28.116508   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:25.326734   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:27.823765   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:27.940196   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2: (2.160353743s)
	I0704 00:11:27.940226   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2 from cache
	I0704 00:11:27.940234   62043 ssh_runner.go:235] Completed: which crictl: (2.160222414s)
	I0704 00:11:27.940320   62043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:11:27.940253   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0704 00:11:27.940393   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0704 00:11:27.979809   62043 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0704 00:11:27.979954   62043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0704 00:11:29.403572   62043 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.423593257s)
	I0704 00:11:29.403607   62043 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0704 00:11:29.403699   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2: (1.46328757s)
	I0704 00:11:29.403725   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2 from cache
	I0704 00:11:29.403761   62043 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0704 00:11:29.403822   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0704 00:11:29.247499   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:31.750339   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:28.616737   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:29.117100   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:29.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.117145   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.617110   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:31.116945   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:31.616330   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:32.117101   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:32.616616   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:33.116964   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:30.322707   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:32.323955   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:33.202513   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.798664869s)
	I0704 00:11:33.202547   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0704 00:11:33.202573   62043 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0704 00:11:33.202627   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0704 00:11:35.468074   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2: (2.26542461s)
	I0704 00:11:35.468099   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2 from cache
	I0704 00:11:35.468118   62043 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0704 00:11:35.468165   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0704 00:11:34.246217   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:36.246836   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:33.617132   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.117094   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.616914   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:35.117109   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:35.617095   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:36.117232   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:36.617221   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:37.117109   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:37.617116   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:38.116462   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:34.324255   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:36.823008   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:38.823183   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:37.443636   62043 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.975448204s)
	I0704 00:11:37.443672   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0704 00:11:37.443706   62043 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0704 00:11:37.443759   62043 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0704 00:11:38.405813   62043 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18998-9396/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0704 00:11:38.405859   62043 cache_images.go:123] Successfully loaded all cached images
	I0704 00:11:38.405868   62043 cache_images.go:92] duration metric: took 16.373643393s to LoadCachedImages
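
For context, the image-loading loop just completed (crio.go:275 followed by "sudo podman load -i <tarball>") can be approximated by the short sketch below. The glob over /var/lib/minikube/images and the strictly sequential ordering are assumptions inferred from the log; minikube actually runs podman over SSH and skips tarballs that already exist on the node.

// load_images_sketch.go - rough equivalent of the cached-image loading shown above.
// Assumptions: run as root on the node with podman installed; illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

func main() {
	tarballs, err := filepath.Glob("/var/lib/minikube/images/*")
	if err != nil {
		panic(err)
	}
	for _, t := range tarballs {
		fmt.Println("loading", t)
		out, err := exec.Command("podman", "load", "-i", t).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("load failed:", err)
		}
	}
}
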
	I0704 00:11:38.405886   62043 kubeadm.go:928] updating node { 192.168.61.109 8443 v1.30.2 crio true true} ...
	I0704 00:11:38.406011   62043 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-317739 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0704 00:11:38.406077   62043 ssh_runner.go:195] Run: crio config
	I0704 00:11:38.452523   62043 cni.go:84] Creating CNI manager for ""
	I0704 00:11:38.452552   62043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:38.452564   62043 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0704 00:11:38.452585   62043 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.109 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-317739 NodeName:no-preload-317739 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0704 00:11:38.452729   62043 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-317739"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0704 00:11:38.452788   62043 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0704 00:11:38.463737   62043 binaries.go:44] Found k8s binaries, skipping transfer
	I0704 00:11:38.463815   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0704 00:11:38.473969   62043 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0704 00:11:38.492719   62043 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0704 00:11:38.510951   62043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0704 00:11:38.530396   62043 ssh_runner.go:195] Run: grep 192.168.61.109	control-plane.minikube.internal$ /etc/hosts
	I0704 00:11:38.534736   62043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0704 00:11:38.548662   62043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:11:38.668693   62043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:11:38.686552   62043 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739 for IP: 192.168.61.109
	I0704 00:11:38.686580   62043 certs.go:194] generating shared ca certs ...
	I0704 00:11:38.686601   62043 certs.go:226] acquiring lock for ca certs: {Name:mkae2a448a2cff5bf43b72af4c17010bf290e802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:11:38.686762   62043 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key
	I0704 00:11:38.686815   62043 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key
	I0704 00:11:38.686830   62043 certs.go:256] generating profile certs ...
	I0704 00:11:38.686955   62043 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/client.key
	I0704 00:11:38.687015   62043 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/apiserver.key.fbaaa8e5
	I0704 00:11:38.687048   62043 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/proxy-client.key
	I0704 00:11:38.687185   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem (1338 bytes)
	W0704 00:11:38.687241   62043 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574_empty.pem, impossibly tiny 0 bytes
	I0704 00:11:38.687253   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca-key.pem (1675 bytes)
	I0704 00:11:38.687283   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/ca.pem (1078 bytes)
	I0704 00:11:38.687310   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/cert.pem (1123 bytes)
	I0704 00:11:38.687336   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/certs/key.pem (1679 bytes)
	I0704 00:11:38.687384   62043 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem (1708 bytes)
	I0704 00:11:38.688258   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0704 00:11:38.731211   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0704 00:11:38.769339   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0704 00:11:38.803861   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0704 00:11:38.856375   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0704 00:11:38.903970   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0704 00:11:38.933988   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0704 00:11:38.962742   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0704 00:11:38.990067   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0704 00:11:39.017654   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/certs/16574.pem --> /usr/share/ca-certificates/16574.pem (1338 bytes)
	I0704 00:11:39.044418   62043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/ssl/certs/165742.pem --> /usr/share/ca-certificates/165742.pem (1708 bytes)
	I0704 00:11:39.073061   62043 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0704 00:11:39.091979   62043 ssh_runner.go:195] Run: openssl version
	I0704 00:11:39.098299   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0704 00:11:39.110043   62043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:39.115156   62043 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:39.115229   62043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0704 00:11:39.122107   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0704 00:11:39.134113   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16574.pem && ln -fs /usr/share/ca-certificates/16574.pem /etc/ssl/certs/16574.pem"
	I0704 00:11:39.145947   62043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16574.pem
	I0704 00:11:39.151296   62043 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 23:00 /usr/share/ca-certificates/16574.pem
	I0704 00:11:39.151367   62043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16574.pem
	I0704 00:11:39.158116   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16574.pem /etc/ssl/certs/51391683.0"
	I0704 00:11:39.170555   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165742.pem && ln -fs /usr/share/ca-certificates/165742.pem /etc/ssl/certs/165742.pem"
	I0704 00:11:39.182771   62043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165742.pem
	I0704 00:11:39.187922   62043 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 23:00 /usr/share/ca-certificates/165742.pem
	I0704 00:11:39.187980   62043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165742.pem
	I0704 00:11:39.194397   62043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165742.pem /etc/ssl/certs/3ec20f2e.0"
	I0704 00:11:39.206665   62043 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0704 00:11:39.212352   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0704 00:11:39.219422   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0704 00:11:39.226488   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0704 00:11:39.233503   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0704 00:11:39.241906   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0704 00:11:39.249915   62043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
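
The run of "openssl x509 -noout -in CERT -checkend 86400" calls above only asks whether each control-plane certificate expires within the next 24 hours. A minimal Go sketch of the same check follows; the cert paths are taken from the log, and local file access is assumed (minikube performs the check over SSH on the node).

// checkend_sketch.go - rough equivalent of `openssl x509 -noout -in CERT -checkend 86400`
// for a few of the certs listed above. Requires read access to /var/lib/minikube/certs.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when the certificate's NotAfter falls inside the next d (here, 24h).
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		soon, err := expiresWithin(c, 24*time.Hour)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", c, soon)
	}
}
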
	I0704 00:11:39.256813   62043 kubeadm.go:391] StartCluster: {Name:no-preload-317739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:no-preload-317739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.109 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0704 00:11:39.256922   62043 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0704 00:11:39.256977   62043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:39.303203   62043 cri.go:89] found id: ""
	I0704 00:11:39.303281   62043 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0704 00:11:39.315407   62043 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0704 00:11:39.315446   62043 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0704 00:11:39.315454   62043 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0704 00:11:39.315508   62043 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0704 00:11:39.327630   62043 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0704 00:11:39.328741   62043 kubeconfig.go:125] found "no-preload-317739" server: "https://192.168.61.109:8443"
	I0704 00:11:39.330937   62043 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0704 00:11:39.341998   62043 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.109
	I0704 00:11:39.342043   62043 kubeadm.go:1154] stopping kube-system containers ...
	I0704 00:11:39.342054   62043 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0704 00:11:39.342111   62043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0704 00:11:39.388325   62043 cri.go:89] found id: ""
	I0704 00:11:39.388388   62043 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0704 00:11:39.408800   62043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:11:39.419600   62043 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:11:39.419627   62043 kubeadm.go:156] found existing configuration files:
	
	I0704 00:11:39.419679   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:11:39.429630   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:11:39.429685   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:11:39.440630   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:11:39.451260   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:11:39.451331   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:11:39.462847   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:11:39.473571   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:11:39.473636   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:11:39.484558   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:11:39.494914   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:11:39.494983   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:11:39.505423   62043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:11:39.517115   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:39.634364   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:40.407653   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:40.607831   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:40.692358   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:38.746247   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:41.244978   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:38.616739   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:39.117077   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:39.616185   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:40.117134   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:40.616879   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.116543   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.616267   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:42.117061   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:42.617080   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:43.117099   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.323333   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:43.823117   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:40.848560   62043 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:11:40.848652   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.349180   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.849767   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:41.870137   62043 api_server.go:72] duration metric: took 1.021586191s to wait for apiserver process to appear ...
	I0704 00:11:41.870167   62043 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:11:41.870195   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:41.870657   62043 api_server.go:269] stopped: https://192.168.61.109:8443/healthz: Get "https://192.168.61.109:8443/healthz": dial tcp 192.168.61.109:8443: connect: connection refused
	I0704 00:11:42.371347   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:44.502396   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:11:44.502439   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:11:44.502477   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:44.536593   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0704 00:11:44.536636   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0704 00:11:44.870429   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:44.877522   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:44.877559   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:45.371097   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:45.375932   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0704 00:11:45.375970   62043 api_server.go:103] status: https://192.168.61.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0704 00:11:45.870776   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:11:45.880030   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 200:
	ok
	I0704 00:11:45.895702   62043 api_server.go:141] control plane version: v1.30.2
	I0704 00:11:45.895729   62043 api_server.go:131] duration metric: took 4.025556366s to wait for apiserver health ...
	I0704 00:11:45.895737   62043 cni.go:84] Creating CNI manager for ""
	I0704 00:11:45.895743   62043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:11:45.897406   62043 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:11:43.245949   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:45.747887   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:43.616868   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:44.117083   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:44.617057   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:45.116941   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:45.617066   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:46.117210   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:46.617116   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:47.116404   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:47.616609   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:48.116518   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:48.116611   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:48.159432   62670 cri.go:89] found id: ""
	I0704 00:11:48.159464   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.159477   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:48.159486   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:48.159553   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:48.199101   62670 cri.go:89] found id: ""
	I0704 00:11:48.199136   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.199144   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:48.199152   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:48.199208   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:48.238058   62670 cri.go:89] found id: ""
	I0704 00:11:48.238079   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.238087   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:48.238092   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:48.238145   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:46.322861   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:48.824946   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:45.898725   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:11:45.923585   62043 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0704 00:11:45.943430   62043 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:11:45.958774   62043 system_pods.go:59] 8 kube-system pods found
	I0704 00:11:45.958804   62043 system_pods.go:61] "coredns-7db6d8ff4d-pvtv9" [f03f871e-3b09-4fbb-96e5-3e71712dd2fb] Running
	I0704 00:11:45.958811   62043 system_pods.go:61] "etcd-no-preload-317739" [ad364ac9-924e-4e56-90c4-12cbf42c3e83] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0704 00:11:45.958824   62043 system_pods.go:61] "kube-apiserver-no-preload-317739" [2d503950-29dc-47b3-905a-afa85655ca7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0704 00:11:45.958832   62043 system_pods.go:61] "kube-controller-manager-no-preload-317739" [a9cbe158-bf00-478c-8d70-7347e37d68a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0704 00:11:45.958837   62043 system_pods.go:61] "kube-proxy-ffmrg" [c710ce9d-c513-46b1-bcf8-1582d1974861] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0704 00:11:45.958841   62043 system_pods.go:61] "kube-scheduler-no-preload-317739" [07a488b3-7beb-4919-ad57-3f0b55a73bb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0704 00:11:45.958846   62043 system_pods.go:61] "metrics-server-569cc877fc-qn22n" [378b139e-97d6-4dfa-9b56-99dda111ab31] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:11:45.958857   62043 system_pods.go:61] "storage-provisioner" [66ecf6fc-5070-4374-a733-479b9b3cdc0d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0704 00:11:45.958866   62043 system_pods.go:74] duration metric: took 15.413948ms to wait for pod list to return data ...
	I0704 00:11:45.958881   62043 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:11:45.965318   62043 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:11:45.965346   62043 node_conditions.go:123] node cpu capacity is 2
	I0704 00:11:45.965355   62043 node_conditions.go:105] duration metric: took 6.466225ms to run NodePressure ...
	I0704 00:11:45.965371   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0704 00:11:46.324716   62043 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0704 00:11:46.329924   62043 kubeadm.go:733] kubelet initialised
	I0704 00:11:46.329951   62043 kubeadm.go:734] duration metric: took 5.207276ms waiting for restarted kubelet to initialise ...
	I0704 00:11:46.329963   62043 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:11:46.336531   62043 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.341733   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.341758   62043 pod_ready.go:81] duration metric: took 5.197122ms for pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.341769   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "coredns-7db6d8ff4d-pvtv9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.341778   62043 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.348317   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "etcd-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.348341   62043 pod_ready.go:81] duration metric: took 6.552656ms for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.348349   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "etcd-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.348355   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.353840   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "kube-apiserver-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.353864   62043 pod_ready.go:81] duration metric: took 5.503642ms for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.353873   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "kube-apiserver-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.353878   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:46.362159   62043 pod_ready.go:97] node "no-preload-317739" hosting pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.362205   62043 pod_ready.go:81] duration metric: took 8.315884ms for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	E0704 00:11:46.362218   62043 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-317739" hosting pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-317739" has status "Ready":"False"
	I0704 00:11:46.362226   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ffmrg" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:47.148496   62043 pod_ready.go:92] pod "kube-proxy-ffmrg" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:47.148533   62043 pod_ready.go:81] duration metric: took 786.291174ms for pod "kube-proxy-ffmrg" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:47.148544   62043 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:49.154946   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:48.246804   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:50.249339   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:48.279472   62670 cri.go:89] found id: ""
	I0704 00:11:48.279510   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.279521   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:48.279529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:48.279598   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:48.316814   62670 cri.go:89] found id: ""
	I0704 00:11:48.316833   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.316843   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:48.316851   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:48.316907   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:48.358196   62670 cri.go:89] found id: ""
	I0704 00:11:48.358230   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.358247   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:48.358252   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:48.358310   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:48.404992   62670 cri.go:89] found id: ""
	I0704 00:11:48.405012   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.405019   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:48.405024   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:48.405092   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:48.444358   62670 cri.go:89] found id: ""
	I0704 00:11:48.444385   62670 logs.go:276] 0 containers: []
	W0704 00:11:48.444393   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:48.444401   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:48.444414   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:48.502426   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:48.502462   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:48.517885   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:48.517915   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:48.654987   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:48.655007   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:48.655022   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:48.719857   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:48.719908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:51.265451   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:51.279847   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:51.279951   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:51.317907   62670 cri.go:89] found id: ""
	I0704 00:11:51.317942   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.317954   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:51.317963   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:51.318036   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:51.358329   62670 cri.go:89] found id: ""
	I0704 00:11:51.358361   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.358370   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:51.358375   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:51.358440   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:51.396389   62670 cri.go:89] found id: ""
	I0704 00:11:51.396418   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.396426   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:51.396433   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:51.396479   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:51.433921   62670 cri.go:89] found id: ""
	I0704 00:11:51.433954   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.433964   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:51.433972   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:51.434030   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:51.472956   62670 cri.go:89] found id: ""
	I0704 00:11:51.472986   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.472997   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:51.473003   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:51.473064   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:51.511241   62670 cri.go:89] found id: ""
	I0704 00:11:51.511269   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.511277   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:51.511283   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:51.511330   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:51.550622   62670 cri.go:89] found id: ""
	I0704 00:11:51.550647   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.550658   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:51.550665   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:51.550717   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:51.595101   62670 cri.go:89] found id: ""
	I0704 00:11:51.595129   62670 logs.go:276] 0 containers: []
	W0704 00:11:51.595141   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:51.595152   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:51.595167   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:51.662852   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:51.662893   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:51.712755   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:51.712800   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:51.774138   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:51.774181   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:51.789895   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:51.789925   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:51.866376   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:51.325312   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:53.821791   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:51.156502   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:53.158089   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:55.656131   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:52.747469   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:55.248313   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:54.367005   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:54.382875   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:54.382938   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:54.419672   62670 cri.go:89] found id: ""
	I0704 00:11:54.419702   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.419713   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:54.419720   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:54.419790   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:54.464134   62670 cri.go:89] found id: ""
	I0704 00:11:54.464161   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.464170   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:54.464175   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:54.464233   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:54.502825   62670 cri.go:89] found id: ""
	I0704 00:11:54.502848   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.502855   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:54.502861   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:54.502913   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:54.542172   62670 cri.go:89] found id: ""
	I0704 00:11:54.542199   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.542207   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:54.542212   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:54.542275   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:54.580488   62670 cri.go:89] found id: ""
	I0704 00:11:54.580517   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.580527   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:54.580534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:54.580600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:54.616925   62670 cri.go:89] found id: ""
	I0704 00:11:54.616950   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.616959   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:54.616965   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:54.617011   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:54.654388   62670 cri.go:89] found id: ""
	I0704 00:11:54.654416   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.654426   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:54.654434   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:54.654492   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:54.697867   62670 cri.go:89] found id: ""
	I0704 00:11:54.697895   62670 logs.go:276] 0 containers: []
	W0704 00:11:54.697905   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:54.697916   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:54.697948   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:54.753899   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:54.753933   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:54.768684   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:54.768708   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:54.843026   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:54.843052   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:54.843069   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:54.920335   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:54.920388   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:57.463384   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:11:57.479721   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:11:57.479809   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:11:57.521845   62670 cri.go:89] found id: ""
	I0704 00:11:57.521931   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.521944   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:11:57.521952   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:11:57.522017   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:11:57.559595   62670 cri.go:89] found id: ""
	I0704 00:11:57.559626   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.559635   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:11:57.559642   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:11:57.559704   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:11:57.600881   62670 cri.go:89] found id: ""
	I0704 00:11:57.600906   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.600917   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:11:57.600923   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:11:57.600984   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:11:57.646031   62670 cri.go:89] found id: ""
	I0704 00:11:57.646059   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.646068   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:11:57.646073   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:11:57.646141   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:11:57.692031   62670 cri.go:89] found id: ""
	I0704 00:11:57.692057   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.692065   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:11:57.692071   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:11:57.692118   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:11:57.730220   62670 cri.go:89] found id: ""
	I0704 00:11:57.730252   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.730263   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:11:57.730271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:11:57.730335   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:11:57.771323   62670 cri.go:89] found id: ""
	I0704 00:11:57.771350   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.771361   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:11:57.771369   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:11:57.771441   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:11:57.808590   62670 cri.go:89] found id: ""
	I0704 00:11:57.808617   62670 logs.go:276] 0 containers: []
	W0704 00:11:57.808625   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:11:57.808633   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:11:57.808644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:11:57.825034   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:11:57.825063   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:11:57.906713   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:11:57.906734   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:11:57.906746   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:11:57.988497   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:11:57.988533   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:11:58.056774   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:11:58.056805   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:11:55.825329   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:58.322936   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:57.657693   62043 pod_ready.go:102] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:58.655007   62043 pod_ready.go:92] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:11:58.655031   62043 pod_ready.go:81] duration metric: took 11.506481518s for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:11:58.655040   62043 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace to be "Ready" ...
	I0704 00:12:00.662830   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:11:57.749330   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:00.244482   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:02.245230   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:00.609663   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:00.623785   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:00.623851   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:00.669164   62670 cri.go:89] found id: ""
	I0704 00:12:00.669187   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.669194   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:00.669200   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:00.669253   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:00.710018   62670 cri.go:89] found id: ""
	I0704 00:12:00.710044   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.710052   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:00.710057   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:00.710107   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:00.747778   62670 cri.go:89] found id: ""
	I0704 00:12:00.747803   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.747810   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:00.747815   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:00.747900   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:00.787312   62670 cri.go:89] found id: ""
	I0704 00:12:00.787339   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.787347   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:00.787352   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:00.787399   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:00.828018   62670 cri.go:89] found id: ""
	I0704 00:12:00.828049   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.828061   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:00.828070   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:00.828135   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:00.864695   62670 cri.go:89] found id: ""
	I0704 00:12:00.864723   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.864734   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:00.864742   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:00.864800   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:00.907804   62670 cri.go:89] found id: ""
	I0704 00:12:00.907833   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.907843   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:00.907850   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:00.907928   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:00.951505   62670 cri.go:89] found id: ""
	I0704 00:12:00.951536   62670 logs.go:276] 0 containers: []
	W0704 00:12:00.951547   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:00.951557   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:00.951573   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:00.997067   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:00.997115   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:01.049321   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:01.049356   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:01.066878   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:01.066908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:01.152888   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:01.152919   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:01.152935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:00.823441   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:03.322789   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:03.161704   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:05.662715   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:04.247328   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:06.746227   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:03.737731   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:03.753151   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:03.753244   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:03.816045   62670 cri.go:89] found id: ""
	I0704 00:12:03.816076   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.816087   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:03.816095   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:03.816154   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:03.857041   62670 cri.go:89] found id: ""
	I0704 00:12:03.857070   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.857081   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:03.857088   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:03.857152   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:03.896734   62670 cri.go:89] found id: ""
	I0704 00:12:03.896763   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.896774   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:03.896781   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:03.896836   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:03.936142   62670 cri.go:89] found id: ""
	I0704 00:12:03.936168   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.936178   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:03.936183   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:03.936258   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:03.974599   62670 cri.go:89] found id: ""
	I0704 00:12:03.974623   62670 logs.go:276] 0 containers: []
	W0704 00:12:03.974631   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:03.974636   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:03.974686   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:04.012822   62670 cri.go:89] found id: ""
	I0704 00:12:04.012851   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.012859   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:04.012865   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:04.012999   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:04.051360   62670 cri.go:89] found id: ""
	I0704 00:12:04.051393   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.051411   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:04.051420   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:04.051485   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:04.090587   62670 cri.go:89] found id: ""
	I0704 00:12:04.090616   62670 logs.go:276] 0 containers: []
	W0704 00:12:04.090627   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:04.090638   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:04.090654   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:04.167427   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:04.167450   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:04.167465   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:04.250550   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:04.250594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:04.299970   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:04.300003   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:04.352960   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:04.352994   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:06.871729   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:06.884948   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:06.885027   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:06.920910   62670 cri.go:89] found id: ""
	I0704 00:12:06.920939   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.920950   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:06.920957   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:06.921024   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:06.958701   62670 cri.go:89] found id: ""
	I0704 00:12:06.958731   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.958742   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:06.958750   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:06.958808   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:06.997468   62670 cri.go:89] found id: ""
	I0704 00:12:06.997499   62670 logs.go:276] 0 containers: []
	W0704 00:12:06.997509   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:06.997515   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:06.997564   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:07.033767   62670 cri.go:89] found id: ""
	I0704 00:12:07.033795   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.033806   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:07.033814   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:07.033896   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:07.074189   62670 cri.go:89] found id: ""
	I0704 00:12:07.074218   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.074229   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:07.074241   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:07.074307   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:07.110517   62670 cri.go:89] found id: ""
	I0704 00:12:07.110544   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.110554   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:07.110562   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:07.110615   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:07.146600   62670 cri.go:89] found id: ""
	I0704 00:12:07.146627   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.146635   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:07.146641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:07.146690   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:07.180799   62670 cri.go:89] found id: ""
	I0704 00:12:07.180826   62670 logs.go:276] 0 containers: []
	W0704 00:12:07.180834   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:07.180843   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:07.180859   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:07.222473   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:07.222503   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:07.281453   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:07.281498   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:07.296335   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:07.296364   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:07.375751   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:07.375782   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:07.375805   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:05.323723   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:07.822320   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:07.663501   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:10.163774   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:09.247753   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:11.746082   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:09.954585   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:09.970379   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:09.970470   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:10.011987   62670 cri.go:89] found id: ""
	I0704 00:12:10.012017   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.012028   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:10.012035   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:10.012102   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:10.054940   62670 cri.go:89] found id: ""
	I0704 00:12:10.054971   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.054982   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:10.054989   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:10.055051   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:10.096048   62670 cri.go:89] found id: ""
	I0704 00:12:10.096079   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.096087   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:10.096093   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:10.096143   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:10.141795   62670 cri.go:89] found id: ""
	I0704 00:12:10.141818   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.141826   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:10.141831   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:10.141892   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:10.188257   62670 cri.go:89] found id: ""
	I0704 00:12:10.188283   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.188295   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:10.188302   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:10.188369   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:10.249134   62670 cri.go:89] found id: ""
	I0704 00:12:10.249157   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.249167   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:10.249174   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:10.249233   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:10.309586   62670 cri.go:89] found id: ""
	I0704 00:12:10.309611   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.309622   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:10.309632   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:10.309689   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:10.351027   62670 cri.go:89] found id: ""
	I0704 00:12:10.351054   62670 logs.go:276] 0 containers: []
	W0704 00:12:10.351065   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:10.351074   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:10.351086   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:10.404371   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:10.404411   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:10.419379   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:10.419410   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:10.502977   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:10.503001   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:10.503017   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:10.582149   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:10.582185   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:13.122828   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:13.138522   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:13.138591   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:13.181603   62670 cri.go:89] found id: ""
	I0704 00:12:13.181634   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.181645   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:13.181653   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:13.181711   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:13.219066   62670 cri.go:89] found id: ""
	I0704 00:12:13.219090   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.219098   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:13.219103   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:13.219159   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:09.822778   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:12.322555   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:12.165249   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:14.663051   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:14.248889   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:16.746104   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:13.259570   62670 cri.go:89] found id: ""
	I0704 00:12:13.259591   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.259599   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:13.259604   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:13.259658   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:13.301577   62670 cri.go:89] found id: ""
	I0704 00:12:13.301605   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.301617   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:13.301625   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:13.301689   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:13.339546   62670 cri.go:89] found id: ""
	I0704 00:12:13.339570   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.339584   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:13.339592   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:13.339649   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:13.378631   62670 cri.go:89] found id: ""
	I0704 00:12:13.378654   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.378665   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:13.378672   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:13.378733   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:13.416818   62670 cri.go:89] found id: ""
	I0704 00:12:13.416843   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.416851   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:13.416856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:13.416908   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:13.452538   62670 cri.go:89] found id: ""
	I0704 00:12:13.452562   62670 logs.go:276] 0 containers: []
	W0704 00:12:13.452570   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:13.452579   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:13.452590   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:13.505556   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:13.505594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:13.522506   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:13.522542   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:13.604513   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:13.604536   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:13.604553   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:13.681501   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:13.681536   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:16.222955   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:16.241979   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:16.242086   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:16.299662   62670 cri.go:89] found id: ""
	I0704 00:12:16.299690   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.299702   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:16.299710   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:16.299772   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:16.342898   62670 cri.go:89] found id: ""
	I0704 00:12:16.342934   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.342944   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:16.342952   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:16.343014   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:16.382387   62670 cri.go:89] found id: ""
	I0704 00:12:16.382408   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.382416   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:16.382422   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:16.382482   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:16.421830   62670 cri.go:89] found id: ""
	I0704 00:12:16.421852   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.421861   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:16.421874   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:16.421934   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:16.459248   62670 cri.go:89] found id: ""
	I0704 00:12:16.459272   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.459282   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:16.459289   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:16.459347   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:16.494675   62670 cri.go:89] found id: ""
	I0704 00:12:16.494704   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.494714   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:16.494725   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:16.494789   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:16.534319   62670 cri.go:89] found id: ""
	I0704 00:12:16.534344   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.534352   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:16.534358   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:16.534407   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:16.571422   62670 cri.go:89] found id: ""
	I0704 00:12:16.571455   62670 logs.go:276] 0 containers: []
	W0704 00:12:16.571467   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:16.571478   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:16.571493   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:16.651019   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:16.651040   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:16.651058   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:16.726538   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:16.726574   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:16.771114   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:16.771145   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:16.824495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:16.824532   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:14.323436   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:16.822647   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:18.823509   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:16.666213   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:19.162586   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:18.746307   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:20.747743   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
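The interleaved pod_ready lines from the other profiles (PIDs 62327, 62043, 62905) are each polling a metrics-server pod whose Ready condition stays False. A rough equivalent of that poll with kubectl, for illustration only (the pod name is copied from the log; the context name is a placeholder not shown in this excerpt):

	kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-jpmsg \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'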
	I0704 00:12:19.340941   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:19.355501   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:19.355580   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:19.396845   62670 cri.go:89] found id: ""
	I0704 00:12:19.396872   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.396882   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:19.396902   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:19.396962   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:19.440805   62670 cri.go:89] found id: ""
	I0704 00:12:19.440835   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.440845   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:19.440852   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:19.440913   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:19.477781   62670 cri.go:89] found id: ""
	I0704 00:12:19.477809   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.477820   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:19.477827   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:19.477890   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:19.513042   62670 cri.go:89] found id: ""
	I0704 00:12:19.513067   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.513077   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:19.513084   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:19.513142   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:19.547775   62670 cri.go:89] found id: ""
	I0704 00:12:19.547804   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.547812   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:19.547818   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:19.547867   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:19.586103   62670 cri.go:89] found id: ""
	I0704 00:12:19.586131   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.586142   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:19.586149   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:19.586219   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:19.625529   62670 cri.go:89] found id: ""
	I0704 00:12:19.625556   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.625567   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:19.625574   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:19.625644   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:19.663835   62670 cri.go:89] found id: ""
	I0704 00:12:19.663860   62670 logs.go:276] 0 containers: []
	W0704 00:12:19.663870   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:19.663903   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:19.663919   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:19.719204   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:19.719245   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:19.733871   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:19.733909   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:19.817212   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:19.817240   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:19.817260   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:19.894555   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:19.894595   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:22.438204   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:22.451438   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:22.451507   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:22.489196   62670 cri.go:89] found id: ""
	I0704 00:12:22.489219   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.489226   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:22.489232   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:22.489278   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:22.523870   62670 cri.go:89] found id: ""
	I0704 00:12:22.523917   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.523929   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:22.523936   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:22.523992   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:22.564799   62670 cri.go:89] found id: ""
	I0704 00:12:22.564827   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.564839   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:22.564846   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:22.564905   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:22.603993   62670 cri.go:89] found id: ""
	I0704 00:12:22.604019   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.604027   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:22.604033   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:22.604086   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:22.639749   62670 cri.go:89] found id: ""
	I0704 00:12:22.639780   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.639791   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:22.639799   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:22.639855   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:22.678173   62670 cri.go:89] found id: ""
	I0704 00:12:22.678206   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.678214   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:22.678227   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:22.678279   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:22.718934   62670 cri.go:89] found id: ""
	I0704 00:12:22.718962   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.718971   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:22.718977   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:22.719029   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:22.756334   62670 cri.go:89] found id: ""
	I0704 00:12:22.756362   62670 logs.go:276] 0 containers: []
	W0704 00:12:22.756373   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:22.756383   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:22.756397   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:22.835079   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:22.835113   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:22.877138   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:22.877170   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:22.930427   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:22.930466   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:22.945810   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:22.945838   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:23.021251   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:21.323951   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:23.822002   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:21.165297   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:23.661688   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:23.245394   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:25.748364   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:25.522380   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:25.536705   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:25.536776   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:25.575126   62670 cri.go:89] found id: ""
	I0704 00:12:25.575154   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.575162   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:25.575168   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:25.575223   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:25.612447   62670 cri.go:89] found id: ""
	I0704 00:12:25.612480   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.612488   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:25.612494   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:25.612542   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:25.651652   62670 cri.go:89] found id: ""
	I0704 00:12:25.651677   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.651688   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:25.651696   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:25.651751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:25.690007   62670 cri.go:89] found id: ""
	I0704 00:12:25.690034   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.690042   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:25.690049   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:25.690105   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:25.725041   62670 cri.go:89] found id: ""
	I0704 00:12:25.725093   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.725106   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:25.725114   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:25.725196   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:25.766324   62670 cri.go:89] found id: ""
	I0704 00:12:25.766350   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.766361   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:25.766369   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:25.766430   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:25.803515   62670 cri.go:89] found id: ""
	I0704 00:12:25.803540   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.803548   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:25.803553   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:25.803613   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:25.845016   62670 cri.go:89] found id: ""
	I0704 00:12:25.845046   62670 logs.go:276] 0 containers: []
	W0704 00:12:25.845057   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:25.845067   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:25.845089   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:25.898536   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:25.898570   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:25.913300   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:25.913330   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:25.987372   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:25.987390   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:25.987402   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:26.073931   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:26.073982   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:25.824395   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.324952   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:26.162199   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.162671   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:30.662302   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.246148   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:30.247149   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:28.621179   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:28.634247   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:28.634321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:28.672433   62670 cri.go:89] found id: ""
	I0704 00:12:28.672458   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.672467   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:28.672473   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:28.672522   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:28.712000   62670 cri.go:89] found id: ""
	I0704 00:12:28.712036   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.712049   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:28.712059   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:28.712126   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:28.751170   62670 cri.go:89] found id: ""
	I0704 00:12:28.751202   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.751213   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:28.751222   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:28.751283   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:28.788015   62670 cri.go:89] found id: ""
	I0704 00:12:28.788050   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.788062   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:28.788071   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:28.788141   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:28.826467   62670 cri.go:89] found id: ""
	I0704 00:12:28.826501   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.826511   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:28.826518   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:28.826578   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:28.864375   62670 cri.go:89] found id: ""
	I0704 00:12:28.864397   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.864403   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:28.864408   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:28.864461   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:28.900137   62670 cri.go:89] found id: ""
	I0704 00:12:28.900160   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.900167   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:28.900173   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:28.900220   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:28.934865   62670 cri.go:89] found id: ""
	I0704 00:12:28.934886   62670 logs.go:276] 0 containers: []
	W0704 00:12:28.934894   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:28.934902   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:28.934914   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:28.984100   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:28.984136   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:29.000311   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:29.000340   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:29.083272   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:29.083304   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:29.083318   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:29.164613   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:29.164644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:31.711402   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:31.725076   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:31.725134   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:31.763088   62670 cri.go:89] found id: ""
	I0704 00:12:31.763111   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.763120   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:31.763127   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:31.763197   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:31.800920   62670 cri.go:89] found id: ""
	I0704 00:12:31.800942   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.800952   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:31.800958   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:31.801001   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:31.840841   62670 cri.go:89] found id: ""
	I0704 00:12:31.840872   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.840889   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:31.840897   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:31.840956   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:31.883757   62670 cri.go:89] found id: ""
	I0704 00:12:31.883784   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.883792   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:31.883797   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:31.883855   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:31.922234   62670 cri.go:89] found id: ""
	I0704 00:12:31.922261   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.922270   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:31.922275   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:31.922323   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:31.959691   62670 cri.go:89] found id: ""
	I0704 00:12:31.959717   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.959725   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:31.959731   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:31.959789   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:31.997069   62670 cri.go:89] found id: ""
	I0704 00:12:31.997098   62670 logs.go:276] 0 containers: []
	W0704 00:12:31.997106   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:31.997112   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:31.997182   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:32.032437   62670 cri.go:89] found id: ""
	I0704 00:12:32.032475   62670 logs.go:276] 0 containers: []
	W0704 00:12:32.032484   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:32.032495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:32.032510   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:32.046791   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:32.046823   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:32.118482   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:32.118506   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:32.118519   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:32.206600   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:32.206638   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:32.249940   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:32.249967   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:30.823529   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:33.322802   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:33.161603   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:35.162213   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:32.746670   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:34.746760   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:37.245283   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:34.808364   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:34.822973   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:34.823039   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:34.859617   62670 cri.go:89] found id: ""
	I0704 00:12:34.859640   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.859649   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:34.859654   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:34.859703   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:34.899724   62670 cri.go:89] found id: ""
	I0704 00:12:34.899752   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.899762   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:34.899768   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:34.899830   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:34.939063   62670 cri.go:89] found id: ""
	I0704 00:12:34.939090   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.939098   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:34.939104   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:34.939185   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:34.979062   62670 cri.go:89] found id: ""
	I0704 00:12:34.979090   62670 logs.go:276] 0 containers: []
	W0704 00:12:34.979101   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:34.979108   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:34.979168   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:35.019580   62670 cri.go:89] found id: ""
	I0704 00:12:35.019613   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.019621   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:35.019626   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:35.019674   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:35.064364   62670 cri.go:89] found id: ""
	I0704 00:12:35.064391   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.064399   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:35.064404   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:35.064463   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:35.105004   62670 cri.go:89] found id: ""
	I0704 00:12:35.105032   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.105040   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:35.105046   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:35.105101   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:35.143656   62670 cri.go:89] found id: ""
	I0704 00:12:35.143681   62670 logs.go:276] 0 containers: []
	W0704 00:12:35.143689   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:35.143698   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:35.143709   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:35.203016   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:35.203050   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:35.218808   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:35.218840   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:35.298247   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:35.298269   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:35.298284   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:35.376425   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:35.376463   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:37.918592   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:37.932291   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:37.932370   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:37.967657   62670 cri.go:89] found id: ""
	I0704 00:12:37.967680   62670 logs.go:276] 0 containers: []
	W0704 00:12:37.967688   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:37.967694   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:37.967740   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:38.005522   62670 cri.go:89] found id: ""
	I0704 00:12:38.005557   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.005569   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:38.005576   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:38.005634   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:38.043475   62670 cri.go:89] found id: ""
	I0704 00:12:38.043505   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.043516   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:38.043524   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:38.043589   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:38.080520   62670 cri.go:89] found id: ""
	I0704 00:12:38.080548   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.080557   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:38.080563   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:38.080612   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:38.116292   62670 cri.go:89] found id: ""
	I0704 00:12:38.116322   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.116332   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:38.116338   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:38.116404   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:38.158430   62670 cri.go:89] found id: ""
	I0704 00:12:38.158468   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.158480   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:38.158489   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:38.158567   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:38.198119   62670 cri.go:89] found id: ""
	I0704 00:12:38.198150   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.198162   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:38.198172   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:38.198253   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:38.235757   62670 cri.go:89] found id: ""
	I0704 00:12:38.235784   62670 logs.go:276] 0 containers: []
	W0704 00:12:38.235792   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:38.235800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:38.235811   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:12:35.324339   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:37.325301   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:37.162347   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:39.162620   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:39.246064   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:41.745179   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	W0704 00:12:38.329002   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:38.329026   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:38.329041   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:38.414451   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:38.414492   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:38.461058   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:38.461089   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:38.518574   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:38.518609   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
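Every describe-nodes attempt in this section fails identically because nothing is serving localhost:8443 on the node. A quick manual confirmation, assuming curl is available in the guest (an assumption; the log itself only shows the pgrep probe):

	# same process probe the log runs before each collection cycle
	sudo pgrep -xnf kube-apiserver.*minikube.*
	# check whether anything answers on the apiserver port
	curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"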
	I0704 00:12:41.051653   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:41.066287   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:41.066364   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:41.106709   62670 cri.go:89] found id: ""
	I0704 00:12:41.106733   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.106747   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:41.106753   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:41.106815   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:41.144371   62670 cri.go:89] found id: ""
	I0704 00:12:41.144399   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.144410   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:41.144417   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:41.144491   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:41.183690   62670 cri.go:89] found id: ""
	I0704 00:12:41.183717   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.183727   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:41.183734   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:41.183818   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:41.219744   62670 cri.go:89] found id: ""
	I0704 00:12:41.219767   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.219777   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:41.219790   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:41.219850   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:41.259070   62670 cri.go:89] found id: ""
	I0704 00:12:41.259091   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.259098   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:41.259103   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:41.259162   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:41.297956   62670 cri.go:89] found id: ""
	I0704 00:12:41.297987   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.297995   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:41.298001   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:41.298061   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:41.335521   62670 cri.go:89] found id: ""
	I0704 00:12:41.335599   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.335616   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:41.335624   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:41.335688   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:41.374777   62670 cri.go:89] found id: ""
	I0704 00:12:41.374817   62670 logs.go:276] 0 containers: []
	W0704 00:12:41.374838   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:41.374848   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:41.374868   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:41.426282   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:41.426324   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:41.441309   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:41.441342   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:41.518350   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:41.518373   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:41.518395   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:41.596426   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:41.596467   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:39.824742   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:42.323920   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:41.162829   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:43.662181   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:45.662641   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:43.745586   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:45.747024   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:44.139291   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:44.152300   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:44.152370   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:44.194350   62670 cri.go:89] found id: ""
	I0704 00:12:44.194380   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.194394   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:44.194401   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:44.194470   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:44.229630   62670 cri.go:89] found id: ""
	I0704 00:12:44.229657   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.229666   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:44.229671   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:44.229724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:44.271235   62670 cri.go:89] found id: ""
	I0704 00:12:44.271260   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.271269   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:44.271276   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:44.271342   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:44.336464   62670 cri.go:89] found id: ""
	I0704 00:12:44.336499   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.336509   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:44.336523   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:44.336579   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:44.379482   62670 cri.go:89] found id: ""
	I0704 00:12:44.379513   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.379524   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:44.379530   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:44.379594   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:44.417234   62670 cri.go:89] found id: ""
	I0704 00:12:44.417267   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.417278   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:44.417285   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:44.417345   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:44.454222   62670 cri.go:89] found id: ""
	I0704 00:12:44.454249   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.454259   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:44.454266   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:44.454328   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:44.491999   62670 cri.go:89] found id: ""
	I0704 00:12:44.492028   62670 logs.go:276] 0 containers: []
	W0704 00:12:44.492039   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:44.492050   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:44.492065   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:44.543261   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:44.543298   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:44.558348   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:44.558378   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:44.640786   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:44.640805   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:44.640820   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:44.727870   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:44.727945   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:47.274461   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:47.288930   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:47.288995   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:47.329153   62670 cri.go:89] found id: ""
	I0704 00:12:47.329178   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.329189   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:47.329195   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:47.329262   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:47.366786   62670 cri.go:89] found id: ""
	I0704 00:12:47.366814   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.366825   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:47.366832   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:47.366900   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:47.404048   62670 cri.go:89] found id: ""
	I0704 00:12:47.404089   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.404098   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:47.404106   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:47.404170   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:47.440298   62670 cri.go:89] found id: ""
	I0704 00:12:47.440329   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.440341   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:47.440348   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:47.440408   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:47.478297   62670 cri.go:89] found id: ""
	I0704 00:12:47.478329   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.478340   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:47.478347   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:47.478406   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:47.514114   62670 cri.go:89] found id: ""
	I0704 00:12:47.514143   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.514152   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:47.514158   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:47.514221   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:47.558404   62670 cri.go:89] found id: ""
	I0704 00:12:47.558437   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.558449   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:47.558456   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:47.558519   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:47.602782   62670 cri.go:89] found id: ""
	I0704 00:12:47.602824   62670 logs.go:276] 0 containers: []
	W0704 00:12:47.602834   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:47.602845   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:47.602860   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:47.655514   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:47.655556   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:47.672807   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:47.672844   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:47.763562   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:47.763583   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:47.763596   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:47.852498   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:47.852542   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:44.822923   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:46.824707   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:48.162671   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:50.664606   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:48.247464   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:50.747846   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:50.400046   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:50.413559   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:50.413621   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:50.450898   62670 cri.go:89] found id: ""
	I0704 00:12:50.450927   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.450938   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:50.450948   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:50.451002   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:50.487786   62670 cri.go:89] found id: ""
	I0704 00:12:50.487822   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.487832   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:50.487838   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:50.487923   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:50.525298   62670 cri.go:89] found id: ""
	I0704 00:12:50.525324   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.525334   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:50.525343   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:50.525409   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:50.563742   62670 cri.go:89] found id: ""
	I0704 00:12:50.563767   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.563775   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:50.563782   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:50.563839   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:50.600977   62670 cri.go:89] found id: ""
	I0704 00:12:50.601011   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.601023   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:50.601031   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:50.601105   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:50.637489   62670 cri.go:89] found id: ""
	I0704 00:12:50.637517   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.637527   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:50.637534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:50.637594   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:50.684342   62670 cri.go:89] found id: ""
	I0704 00:12:50.684371   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.684381   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:50.684389   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:50.684572   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:50.743111   62670 cri.go:89] found id: ""
	I0704 00:12:50.743143   62670 logs.go:276] 0 containers: []
	W0704 00:12:50.743153   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:50.743163   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:50.743177   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:50.806436   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:50.806482   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:50.823559   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:50.823594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:50.892600   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:50.892629   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:50.892642   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:50.969817   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:50.969851   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:49.323144   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:51.822264   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.824409   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.161649   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:55.163049   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.245597   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:55.746766   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:53.512548   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:53.525835   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:53.525903   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:53.563303   62670 cri.go:89] found id: ""
	I0704 00:12:53.563335   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.563349   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:53.563356   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:53.563410   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:53.602687   62670 cri.go:89] found id: ""
	I0704 00:12:53.602720   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.602731   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:53.602739   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:53.602797   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:53.638109   62670 cri.go:89] found id: ""
	I0704 00:12:53.638141   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.638150   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:53.638158   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:53.638220   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:53.678073   62670 cri.go:89] found id: ""
	I0704 00:12:53.678096   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.678106   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:53.678114   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:53.678172   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:53.713995   62670 cri.go:89] found id: ""
	I0704 00:12:53.714028   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.714041   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:53.714049   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:53.714108   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:53.751761   62670 cri.go:89] found id: ""
	I0704 00:12:53.751783   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.751790   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:53.751796   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:53.751856   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:53.792662   62670 cri.go:89] found id: ""
	I0704 00:12:53.792692   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.792703   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:53.792710   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:53.792769   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:53.833970   62670 cri.go:89] found id: ""
	I0704 00:12:53.833999   62670 logs.go:276] 0 containers: []
	W0704 00:12:53.834010   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:53.834021   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:53.834040   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:53.918330   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:53.918363   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:53.918380   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:53.999491   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:53.999524   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:54.042415   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:54.042451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:54.096427   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:54.096466   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:56.611252   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:56.624364   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:56.624427   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:56.662953   62670 cri.go:89] found id: ""
	I0704 00:12:56.662971   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.662978   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:56.662983   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:56.663035   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:56.700093   62670 cri.go:89] found id: ""
	I0704 00:12:56.700125   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.700136   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:56.700144   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:56.700209   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:56.737358   62670 cri.go:89] found id: ""
	I0704 00:12:56.737395   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.737405   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:56.737412   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:56.737479   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:56.772625   62670 cri.go:89] found id: ""
	I0704 00:12:56.772652   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.772663   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:56.772671   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:56.772731   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:56.810693   62670 cri.go:89] found id: ""
	I0704 00:12:56.810722   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.810731   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:56.810736   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:56.810787   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:56.851646   62670 cri.go:89] found id: ""
	I0704 00:12:56.851671   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.851678   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:56.851684   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:56.851733   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:56.894196   62670 cri.go:89] found id: ""
	I0704 00:12:56.894230   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.894240   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:56.894246   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:56.894302   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:12:56.935029   62670 cri.go:89] found id: ""
	I0704 00:12:56.935054   62670 logs.go:276] 0 containers: []
	W0704 00:12:56.935062   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:12:56.935072   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:12:56.935088   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:12:57.017630   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:12:57.017658   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:12:57.017675   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:12:57.103861   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:12:57.103916   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:12:57.147466   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:12:57.147497   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:12:57.199798   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:12:57.199836   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:12:56.325738   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:58.822885   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:57.166306   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:59.663207   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:58.245373   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:00.246495   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:12:59.716709   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:12:59.731778   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:12:59.731849   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:12:59.770210   62670 cri.go:89] found id: ""
	I0704 00:12:59.770241   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.770249   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:12:59.770259   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:12:59.770319   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:12:59.816446   62670 cri.go:89] found id: ""
	I0704 00:12:59.816473   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.816483   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:12:59.816490   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:12:59.816570   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:12:59.854879   62670 cri.go:89] found id: ""
	I0704 00:12:59.854910   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.854921   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:12:59.854928   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:12:59.854978   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:12:59.891370   62670 cri.go:89] found id: ""
	I0704 00:12:59.891394   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.891401   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:12:59.891407   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:12:59.891467   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:12:59.926067   62670 cri.go:89] found id: ""
	I0704 00:12:59.926089   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.926096   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:12:59.926102   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:12:59.926158   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:12:59.961646   62670 cri.go:89] found id: ""
	I0704 00:12:59.961674   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.961685   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:12:59.961692   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:12:59.961770   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:12:59.998290   62670 cri.go:89] found id: ""
	I0704 00:12:59.998322   62670 logs.go:276] 0 containers: []
	W0704 00:12:59.998333   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:12:59.998342   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:12:59.998408   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:00.035410   62670 cri.go:89] found id: ""
	I0704 00:13:00.035438   62670 logs.go:276] 0 containers: []
	W0704 00:13:00.035446   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:00.035455   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:00.035471   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:00.090614   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:00.090655   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:00.105228   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:00.105265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:00.188082   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:00.188121   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:00.188139   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:00.275656   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:00.275702   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:02.823447   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:02.837684   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:02.837745   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:02.875275   62670 cri.go:89] found id: ""
	I0704 00:13:02.875314   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.875324   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:02.875339   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:02.875399   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:02.910681   62670 cri.go:89] found id: ""
	I0704 00:13:02.910715   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.910727   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:02.910735   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:02.910797   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:02.948937   62670 cri.go:89] found id: ""
	I0704 00:13:02.948963   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.948972   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:02.948979   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:02.949039   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:02.984232   62670 cri.go:89] found id: ""
	I0704 00:13:02.984259   62670 logs.go:276] 0 containers: []
	W0704 00:13:02.984267   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:02.984271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:02.984321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:03.021493   62670 cri.go:89] found id: ""
	I0704 00:13:03.021517   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.021525   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:03.021534   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:03.021583   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:03.058829   62670 cri.go:89] found id: ""
	I0704 00:13:03.058860   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.058870   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:03.058877   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:03.058944   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:03.104195   62670 cri.go:89] found id: ""
	I0704 00:13:03.104225   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.104234   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:03.104242   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:03.104303   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:03.140913   62670 cri.go:89] found id: ""
	I0704 00:13:03.140941   62670 logs.go:276] 0 containers: []
	W0704 00:13:03.140951   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:03.140961   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:03.140976   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:03.194901   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:03.194945   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:03.209366   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:03.209395   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:13:01.322711   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:03.323610   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:02.161800   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:04.162195   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:02.746479   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:05.245132   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:07.245877   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	W0704 00:13:03.292892   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:03.292916   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:03.292934   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:03.369764   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:03.369800   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:05.917514   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:05.931529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:05.931592   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:05.976164   62670 cri.go:89] found id: ""
	I0704 00:13:05.976186   62670 logs.go:276] 0 containers: []
	W0704 00:13:05.976193   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:05.976199   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:05.976258   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:06.013568   62670 cri.go:89] found id: ""
	I0704 00:13:06.013593   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.013602   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:06.013609   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:06.013678   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:06.050848   62670 cri.go:89] found id: ""
	I0704 00:13:06.050886   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.050894   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:06.050900   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:06.050958   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:06.090919   62670 cri.go:89] found id: ""
	I0704 00:13:06.090945   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.090956   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:06.090967   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:06.091016   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:06.129210   62670 cri.go:89] found id: ""
	I0704 00:13:06.129237   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.129246   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:06.129252   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:06.129304   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:06.166777   62670 cri.go:89] found id: ""
	I0704 00:13:06.166801   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.166809   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:06.166817   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:06.166878   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:06.204900   62670 cri.go:89] found id: ""
	I0704 00:13:06.204929   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.204940   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:06.204947   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:06.205008   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:06.244196   62670 cri.go:89] found id: ""
	I0704 00:13:06.244274   62670 logs.go:276] 0 containers: []
	W0704 00:13:06.244291   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:06.244301   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:06.244317   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:06.258834   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:06.258873   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:06.339126   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:06.339151   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:06.339165   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:06.416220   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:06.416265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:06.458188   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:06.458221   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:05.824313   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:08.323361   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:06.162328   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:08.666333   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:09.248287   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:11.746215   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:09.014816   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:09.028957   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:09.029021   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:09.072427   62670 cri.go:89] found id: ""
	I0704 00:13:09.072455   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.072465   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:09.072472   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:09.072529   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:09.109630   62670 cri.go:89] found id: ""
	I0704 00:13:09.109660   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.109669   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:09.109675   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:09.109724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:09.152873   62670 cri.go:89] found id: ""
	I0704 00:13:09.152901   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.152911   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:09.152918   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:09.152976   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:09.189390   62670 cri.go:89] found id: ""
	I0704 00:13:09.189421   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.189431   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:09.189446   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:09.189515   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:09.227335   62670 cri.go:89] found id: ""
	I0704 00:13:09.227364   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.227375   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:09.227382   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:09.227444   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:09.269157   62670 cri.go:89] found id: ""
	I0704 00:13:09.269189   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.269201   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:09.269208   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:09.269259   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:09.317222   62670 cri.go:89] found id: ""
	I0704 00:13:09.317249   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.317257   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:09.317263   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:09.317324   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:09.355578   62670 cri.go:89] found id: ""
	I0704 00:13:09.355610   62670 logs.go:276] 0 containers: []
	W0704 00:13:09.355618   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:09.355626   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:09.355637   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:09.396279   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:09.396316   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:09.451358   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:09.451398   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:09.466565   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:09.466599   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:09.545001   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:09.545043   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:09.545066   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:12.124211   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:12.139131   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:12.139229   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:12.178690   62670 cri.go:89] found id: ""
	I0704 00:13:12.178719   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.178726   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:12.178732   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:12.178783   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:12.215470   62670 cri.go:89] found id: ""
	I0704 00:13:12.215511   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.215524   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:12.215533   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:12.215620   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:12.256615   62670 cri.go:89] found id: ""
	I0704 00:13:12.256667   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.256682   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:12.256688   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:12.256740   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:12.298606   62670 cri.go:89] found id: ""
	I0704 00:13:12.298631   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.298643   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:12.298650   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:12.298730   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:12.338152   62670 cri.go:89] found id: ""
	I0704 00:13:12.338180   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.338192   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:12.338199   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:12.338260   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:12.377003   62670 cri.go:89] found id: ""
	I0704 00:13:12.377029   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.377040   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:12.377046   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:12.377095   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:12.412239   62670 cri.go:89] found id: ""
	I0704 00:13:12.412268   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.412278   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:12.412285   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:12.412361   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:12.451054   62670 cri.go:89] found id: ""
	I0704 00:13:12.451079   62670 logs.go:276] 0 containers: []
	W0704 00:13:12.451086   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:12.451094   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:12.451111   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:12.506178   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:12.506216   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:12.520563   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:12.520594   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:12.594417   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:12.594439   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:12.594455   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:12.671131   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:12.671179   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
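	The cycle above is the retry loop minikube runs while it waits for a v1.20.0 control plane to come up: it checks for a kube-apiserver process, lists CRI containers for each control-plane component, and, when none are found, gathers kubelet, dmesg, describe-nodes, CRI-O and container-status logs before trying again. To reproduce the same diagnostics by hand, the commands below (copied from the Run: entries above; running them over SSH inside the minikube VM is an assumption) should be enough:
	
	  sudo pgrep -xnf kube-apiserver.*minikube.*                     # is an apiserver process running at all?
	  sudo crictl ps -a --quiet --name=kube-apiserver                # empty output = no apiserver container found
	  sudo journalctl -u kubelet -n 400                              # kubelet logs
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	  sudo journalctl -u crio -n 400                                 # CRI-O logs
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a  # container status
	
	In this run every crictl listing returns an empty id and kubectl cannot reach localhost:8443, which is consistent with the apiserver never having started.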
	I0704 00:13:10.323629   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:12.823056   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:11.161399   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:13.162943   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:15.661962   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:13.749962   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:16.247931   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
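	The pod_ready lines interleaved here come from the other clusters in this batch: each prints the Ready condition of its metrics-server pod, which stays "False" for the whole window shown. A rough manual equivalent (the profile name passed to --context is a placeholder, not taken from this log) would be:
	
	  kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-jpmsg \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	
	The same check against metrics-server-569cc877fc-qn22n and metrics-server-569cc877fc-v8qw2 covers the other two clusters being polled above.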
	I0704 00:13:15.225840   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:15.239346   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:15.239420   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:15.276618   62670 cri.go:89] found id: ""
	I0704 00:13:15.276649   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.276661   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:15.276668   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:15.276751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:15.312585   62670 cri.go:89] found id: ""
	I0704 00:13:15.312615   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.312625   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:15.312632   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:15.312693   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:15.351354   62670 cri.go:89] found id: ""
	I0704 00:13:15.351382   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.351392   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:15.351399   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:15.351457   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:15.388660   62670 cri.go:89] found id: ""
	I0704 00:13:15.388690   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.388701   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:15.388708   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:15.388769   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:15.427524   62670 cri.go:89] found id: ""
	I0704 00:13:15.427553   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.427564   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:15.427572   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:15.427636   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:15.463703   62670 cri.go:89] found id: ""
	I0704 00:13:15.463737   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.463752   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:15.463761   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:15.463825   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:15.498640   62670 cri.go:89] found id: ""
	I0704 00:13:15.498664   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.498672   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:15.498676   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:15.498727   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:15.534655   62670 cri.go:89] found id: ""
	I0704 00:13:15.534679   62670 logs.go:276] 0 containers: []
	W0704 00:13:15.534690   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:15.534700   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:15.534715   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:15.586051   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:15.586083   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:15.600930   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:15.600958   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:15.670393   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:15.670420   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:15.670435   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:15.749644   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:15.749678   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:15.324591   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:17.822616   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:17.662630   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:20.162230   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:18.746045   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:20.746946   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:18.298689   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:18.312408   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:18.312475   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:18.353509   62670 cri.go:89] found id: ""
	I0704 00:13:18.353538   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.353549   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:18.353557   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:18.353642   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:18.394463   62670 cri.go:89] found id: ""
	I0704 00:13:18.394486   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.394493   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:18.394498   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:18.394550   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:18.433254   62670 cri.go:89] found id: ""
	I0704 00:13:18.433288   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.433297   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:18.433303   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:18.433350   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:18.473369   62670 cri.go:89] found id: ""
	I0704 00:13:18.473395   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.473404   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:18.473414   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:18.473464   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:18.513401   62670 cri.go:89] found id: ""
	I0704 00:13:18.513436   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.513444   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:18.513450   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:18.513499   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:18.552462   62670 cri.go:89] found id: ""
	I0704 00:13:18.552493   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.552502   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:18.552511   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:18.552569   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:18.591368   62670 cri.go:89] found id: ""
	I0704 00:13:18.591389   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.591398   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:18.591406   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:18.591471   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:18.630381   62670 cri.go:89] found id: ""
	I0704 00:13:18.630413   62670 logs.go:276] 0 containers: []
	W0704 00:13:18.630424   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:18.630435   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:18.630451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:18.684868   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:18.684902   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:18.700897   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:18.700921   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:18.794507   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:18.794524   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:18.794535   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:18.879415   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:18.879457   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:21.429432   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:21.443906   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:21.443978   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:21.482487   62670 cri.go:89] found id: ""
	I0704 00:13:21.482516   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.482528   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:21.482535   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:21.482583   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:21.519170   62670 cri.go:89] found id: ""
	I0704 00:13:21.519206   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.519214   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:21.519219   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:21.519265   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:21.558340   62670 cri.go:89] found id: ""
	I0704 00:13:21.558367   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.558390   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:21.558397   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:21.558465   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:21.595347   62670 cri.go:89] found id: ""
	I0704 00:13:21.595372   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.595382   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:21.595390   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:21.595464   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:21.634524   62670 cri.go:89] found id: ""
	I0704 00:13:21.634547   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.634555   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:21.634560   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:21.634622   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:21.672529   62670 cri.go:89] found id: ""
	I0704 00:13:21.672558   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.672566   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:21.672571   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:21.672617   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:21.711114   62670 cri.go:89] found id: ""
	I0704 00:13:21.711145   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.711156   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:21.711163   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:21.711248   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:21.747087   62670 cri.go:89] found id: ""
	I0704 00:13:21.747126   62670 logs.go:276] 0 containers: []
	W0704 00:13:21.747135   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:21.747145   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:21.747162   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:21.832897   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:21.832919   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:21.832935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:21.915969   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:21.916008   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:21.957922   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:21.957950   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:22.009881   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:22.009925   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:19.823109   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:22.322313   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:22.163190   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:24.664612   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:22.747918   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:25.245707   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:24.526106   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:24.548431   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:24.548493   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:24.582887   62670 cri.go:89] found id: ""
	I0704 00:13:24.582925   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.582935   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:24.582940   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:24.582992   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:24.621339   62670 cri.go:89] found id: ""
	I0704 00:13:24.621365   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.621375   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:24.621380   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:24.621433   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:24.658124   62670 cri.go:89] found id: ""
	I0704 00:13:24.658152   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.658163   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:24.658170   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:24.658239   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:24.697509   62670 cri.go:89] found id: ""
	I0704 00:13:24.697539   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.697546   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:24.697552   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:24.697599   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:24.734523   62670 cri.go:89] found id: ""
	I0704 00:13:24.734547   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.734554   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:24.734560   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:24.734608   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:24.773351   62670 cri.go:89] found id: ""
	I0704 00:13:24.773375   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.773383   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:24.773389   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:24.773439   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:24.810855   62670 cri.go:89] found id: ""
	I0704 00:13:24.810888   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.810898   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:24.810905   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:24.810962   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:24.849989   62670 cri.go:89] found id: ""
	I0704 00:13:24.850017   62670 logs.go:276] 0 containers: []
	W0704 00:13:24.850027   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:24.850039   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:24.850053   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:24.904308   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:24.904344   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:24.920143   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:24.920234   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:24.995138   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:24.995163   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:24.995177   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:25.070407   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:25.070449   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:27.611749   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:27.625292   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:27.625349   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:27.663239   62670 cri.go:89] found id: ""
	I0704 00:13:27.663263   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.663274   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:27.663281   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:27.663337   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:27.704354   62670 cri.go:89] found id: ""
	I0704 00:13:27.704378   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.704392   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:27.704399   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:27.704473   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:27.742585   62670 cri.go:89] found id: ""
	I0704 00:13:27.742619   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.742630   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:27.742637   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:27.742695   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:27.791650   62670 cri.go:89] found id: ""
	I0704 00:13:27.791678   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.791686   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:27.791691   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:27.791751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:27.832724   62670 cri.go:89] found id: ""
	I0704 00:13:27.832757   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.832770   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:27.832778   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:27.832865   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:27.875054   62670 cri.go:89] found id: ""
	I0704 00:13:27.875081   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.875089   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:27.875095   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:27.875142   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:27.909819   62670 cri.go:89] found id: ""
	I0704 00:13:27.909844   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.909851   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:27.909856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:27.909903   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:27.944882   62670 cri.go:89] found id: ""
	I0704 00:13:27.944907   62670 logs.go:276] 0 containers: []
	W0704 00:13:27.944916   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:27.944923   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:27.944936   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:28.004233   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:28.004271   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:28.020800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:28.020834   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:28.096186   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:28.096213   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:28.096231   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:28.178611   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:28.178648   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:24.322656   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:26.323972   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:28.821944   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:27.161806   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:29.661580   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:27.748284   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:30.246840   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:30.729354   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:30.744298   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:30.744361   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:30.783053   62670 cri.go:89] found id: ""
	I0704 00:13:30.783081   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.783089   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:30.783095   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:30.783151   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:30.820728   62670 cri.go:89] found id: ""
	I0704 00:13:30.820756   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.820765   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:30.820770   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:30.820834   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:30.858188   62670 cri.go:89] found id: ""
	I0704 00:13:30.858221   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.858234   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:30.858242   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:30.858307   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:30.899024   62670 cri.go:89] found id: ""
	I0704 00:13:30.899049   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.899056   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:30.899062   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:30.899109   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:30.942431   62670 cri.go:89] found id: ""
	I0704 00:13:30.942461   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.942471   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:30.942479   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:30.942534   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:30.995371   62670 cri.go:89] found id: ""
	I0704 00:13:30.995402   62670 logs.go:276] 0 containers: []
	W0704 00:13:30.995417   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:30.995425   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:30.995486   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:31.043485   62670 cri.go:89] found id: ""
	I0704 00:13:31.043516   62670 logs.go:276] 0 containers: []
	W0704 00:13:31.043524   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:31.043529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:31.043576   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:31.082408   62670 cri.go:89] found id: ""
	I0704 00:13:31.082440   62670 logs.go:276] 0 containers: []
	W0704 00:13:31.082451   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:31.082463   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:31.082477   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:31.096800   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:31.096824   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:31.169116   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:31.169142   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:31.169168   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:31.250199   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:31.250230   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:31.293706   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:31.293737   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:30.822968   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:33.322607   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:31.661811   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:33.661872   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:35.662906   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:32.746786   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:35.246989   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:33.845361   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:33.859495   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:33.859586   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:33.900578   62670 cri.go:89] found id: ""
	I0704 00:13:33.900608   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.900616   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:33.900621   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:33.900668   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:33.934659   62670 cri.go:89] found id: ""
	I0704 00:13:33.934681   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.934688   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:33.934699   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:33.934745   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:33.977141   62670 cri.go:89] found id: ""
	I0704 00:13:33.977166   62670 logs.go:276] 0 containers: []
	W0704 00:13:33.977174   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:33.977179   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:33.977230   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:34.013515   62670 cri.go:89] found id: ""
	I0704 00:13:34.013540   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.013548   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:34.013553   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:34.013600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:34.059663   62670 cri.go:89] found id: ""
	I0704 00:13:34.059690   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.059698   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:34.059703   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:34.059765   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:34.094002   62670 cri.go:89] found id: ""
	I0704 00:13:34.094030   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.094038   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:34.094044   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:34.094090   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:34.130278   62670 cri.go:89] found id: ""
	I0704 00:13:34.130310   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.130322   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:34.130330   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:34.130401   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:34.173531   62670 cri.go:89] found id: ""
	I0704 00:13:34.173557   62670 logs.go:276] 0 containers: []
	W0704 00:13:34.173563   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:34.173570   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:34.173582   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:34.229273   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:34.229334   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:34.247043   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:34.247073   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:34.322892   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:34.322920   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:34.322935   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:34.409230   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:34.409271   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:36.950627   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:36.969997   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:36.970063   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:37.027934   62670 cri.go:89] found id: ""
	I0704 00:13:37.027964   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.027975   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:37.027982   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:37.028069   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:37.067668   62670 cri.go:89] found id: ""
	I0704 00:13:37.067696   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.067706   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:37.067713   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:37.067774   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:37.104762   62670 cri.go:89] found id: ""
	I0704 00:13:37.104798   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.104806   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:37.104812   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:37.104882   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:37.143887   62670 cri.go:89] found id: ""
	I0704 00:13:37.143913   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.143921   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:37.143936   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:37.143999   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:37.182605   62670 cri.go:89] found id: ""
	I0704 00:13:37.182629   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.182636   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:37.182641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:37.182697   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:37.219884   62670 cri.go:89] found id: ""
	I0704 00:13:37.219914   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.219924   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:37.219931   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:37.219996   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:37.259122   62670 cri.go:89] found id: ""
	I0704 00:13:37.259146   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.259154   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:37.259159   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:37.259205   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:37.296218   62670 cri.go:89] found id: ""
	I0704 00:13:37.296255   62670 logs.go:276] 0 containers: []
	W0704 00:13:37.296262   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:37.296270   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:37.296282   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:37.349495   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:37.349540   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:37.364224   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:37.364255   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:37.437604   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:37.437627   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:37.437644   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:37.524096   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:37.524150   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:35.823323   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:38.323653   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:38.164076   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:40.662318   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:37.745470   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:39.746119   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:41.747887   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:40.067394   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:40.081728   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:40.081787   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:40.119102   62670 cri.go:89] found id: ""
	I0704 00:13:40.119129   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.119137   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:40.119142   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:40.119195   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:40.161432   62670 cri.go:89] found id: ""
	I0704 00:13:40.161468   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.161477   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:40.161483   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:40.161542   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:40.196487   62670 cri.go:89] found id: ""
	I0704 00:13:40.196526   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.196534   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:40.196540   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:40.196591   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:40.232218   62670 cri.go:89] found id: ""
	I0704 00:13:40.232245   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.232253   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:40.232259   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:40.232306   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:40.272962   62670 cri.go:89] found id: ""
	I0704 00:13:40.272995   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.273007   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:40.273016   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:40.273079   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:40.311622   62670 cri.go:89] found id: ""
	I0704 00:13:40.311651   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.311662   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:40.311671   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:40.311737   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:40.353486   62670 cri.go:89] found id: ""
	I0704 00:13:40.353516   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.353524   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:40.353529   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:40.353576   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:40.391269   62670 cri.go:89] found id: ""
	I0704 00:13:40.391299   62670 logs.go:276] 0 containers: []
	W0704 00:13:40.391308   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:40.391318   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:40.391330   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:40.445011   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:40.445048   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:40.458982   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:40.459010   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:40.533102   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:40.533127   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:40.533140   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:40.618189   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:40.618228   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:43.162352   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:43.177336   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:43.177419   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:43.221099   62670 cri.go:89] found id: ""
	I0704 00:13:43.221127   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.221137   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:43.221144   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:43.221211   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:40.324554   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:42.822608   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:42.662723   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:45.162037   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:44.245991   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:46.746635   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:43.268528   62670 cri.go:89] found id: ""
	I0704 00:13:43.268557   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.268568   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:43.268575   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:43.268638   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:43.304343   62670 cri.go:89] found id: ""
	I0704 00:13:43.304373   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.304384   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:43.304391   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:43.304459   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:43.346128   62670 cri.go:89] found id: ""
	I0704 00:13:43.346163   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.346179   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:43.346187   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:43.346251   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:43.392622   62670 cri.go:89] found id: ""
	I0704 00:13:43.392652   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.392662   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:43.392673   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:43.392764   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:43.438725   62670 cri.go:89] found id: ""
	I0704 00:13:43.438751   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.438760   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:43.438766   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:43.438812   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:43.480356   62670 cri.go:89] found id: ""
	I0704 00:13:43.480378   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.480386   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:43.480391   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:43.480441   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:43.516551   62670 cri.go:89] found id: ""
	I0704 00:13:43.516576   62670 logs.go:276] 0 containers: []
	W0704 00:13:43.516583   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:43.516591   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:43.516606   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:43.567568   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:43.567604   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:43.583140   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:43.583173   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:43.658841   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:43.658870   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:43.658885   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:43.737379   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:43.737419   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:46.281048   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:46.295088   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:46.295158   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:46.333107   62670 cri.go:89] found id: ""
	I0704 00:13:46.333135   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.333168   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:46.333177   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:46.333263   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:46.376375   62670 cri.go:89] found id: ""
	I0704 00:13:46.376405   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.376415   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:46.376423   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:46.376486   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:46.410809   62670 cri.go:89] found id: ""
	I0704 00:13:46.410838   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.410848   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:46.410855   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:46.410911   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:46.453114   62670 cri.go:89] found id: ""
	I0704 00:13:46.453143   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.453156   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:46.453164   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:46.453229   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:46.491218   62670 cri.go:89] found id: ""
	I0704 00:13:46.491246   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.491255   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:46.491261   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:46.491320   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:46.528669   62670 cri.go:89] found id: ""
	I0704 00:13:46.528695   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.528706   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:46.528713   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:46.528777   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:46.564289   62670 cri.go:89] found id: ""
	I0704 00:13:46.564317   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.564327   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:46.564333   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:46.564384   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:46.600821   62670 cri.go:89] found id: ""
	I0704 00:13:46.600854   62670 logs.go:276] 0 containers: []
	W0704 00:13:46.600864   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:46.600875   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:46.600888   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:46.653816   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:46.653850   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:46.668899   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:46.668927   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:46.751414   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:46.751434   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:46.751455   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:46.831455   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:46.831489   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:44.823478   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:47.323726   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:47.663375   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:50.162358   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:49.245272   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:51.745945   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:49.378856   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:49.393930   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:49.393988   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:49.435332   62670 cri.go:89] found id: ""
	I0704 00:13:49.435355   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.435362   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:49.435368   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:49.435415   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:49.476780   62670 cri.go:89] found id: ""
	I0704 00:13:49.476807   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.476815   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:49.476820   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:49.476868   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:49.519347   62670 cri.go:89] found id: ""
	I0704 00:13:49.519379   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.519389   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:49.519396   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:49.519522   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:49.557125   62670 cri.go:89] found id: ""
	I0704 00:13:49.557150   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.557159   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:49.557166   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:49.557225   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:49.592843   62670 cri.go:89] found id: ""
	I0704 00:13:49.592883   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.592894   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:49.592901   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:49.592966   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:49.629542   62670 cri.go:89] found id: ""
	I0704 00:13:49.629565   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.629572   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:49.629578   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:49.629630   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:49.667805   62670 cri.go:89] found id: ""
	I0704 00:13:49.667833   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.667844   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:49.667851   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:49.667928   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:49.704446   62670 cri.go:89] found id: ""
	I0704 00:13:49.704472   62670 logs.go:276] 0 containers: []
	W0704 00:13:49.704480   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:49.704494   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:49.704506   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:49.718379   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:49.718403   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:49.791293   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:49.791314   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:49.791329   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:49.870370   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:49.870408   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:49.910508   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:49.910545   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:52.463614   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:52.478642   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:52.478714   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:52.519490   62670 cri.go:89] found id: ""
	I0704 00:13:52.519519   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.519529   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:52.519535   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:52.519686   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:52.561591   62670 cri.go:89] found id: ""
	I0704 00:13:52.561622   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.561632   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:52.561639   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:52.561713   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:52.599169   62670 cri.go:89] found id: ""
	I0704 00:13:52.599196   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.599206   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:52.599212   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:52.599270   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:52.636778   62670 cri.go:89] found id: ""
	I0704 00:13:52.636811   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.636821   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:52.636828   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:52.636893   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:52.675929   62670 cri.go:89] found id: ""
	I0704 00:13:52.675965   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.675977   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:52.675985   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:52.676081   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:52.713425   62670 cri.go:89] found id: ""
	I0704 00:13:52.713455   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.713466   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:52.713474   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:52.713541   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:52.750242   62670 cri.go:89] found id: ""
	I0704 00:13:52.750267   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.750278   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:52.750286   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:52.750342   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:52.793247   62670 cri.go:89] found id: ""
	I0704 00:13:52.793277   62670 logs.go:276] 0 containers: []
	W0704 00:13:52.793288   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:52.793298   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:52.793315   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:52.807818   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:52.807970   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:52.886856   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:52.886883   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:52.886903   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:52.973510   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:52.973551   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:53.021185   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:53.021213   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:49.825304   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:52.322850   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:52.662484   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:54.662645   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:54.246942   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:56.745800   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:55.576364   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:55.590796   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:55.590858   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:55.628753   62670 cri.go:89] found id: ""
	I0704 00:13:55.628783   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.628793   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:55.628809   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:55.628870   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:55.667344   62670 cri.go:89] found id: ""
	I0704 00:13:55.667398   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.667411   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:55.667426   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:55.667496   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:55.705826   62670 cri.go:89] found id: ""
	I0704 00:13:55.705859   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.705870   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:55.705878   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:55.705942   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:55.743204   62670 cri.go:89] found id: ""
	I0704 00:13:55.743231   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.743238   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:55.743244   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:55.743304   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:55.784945   62670 cri.go:89] found id: ""
	I0704 00:13:55.784978   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.784987   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:55.784993   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:55.785044   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:55.825266   62670 cri.go:89] found id: ""
	I0704 00:13:55.825293   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.825304   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:55.825322   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:55.825385   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:55.862235   62670 cri.go:89] found id: ""
	I0704 00:13:55.862269   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.862276   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:55.862282   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:55.862337   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:55.901698   62670 cri.go:89] found id: ""
	I0704 00:13:55.901726   62670 logs.go:276] 0 containers: []
	W0704 00:13:55.901736   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:55.901747   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:55.901762   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:13:55.955322   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:55.955361   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:55.973650   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:55.973689   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:56.049600   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:56.049624   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:56.049640   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:56.133690   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:56.133731   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:54.323716   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:56.324427   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:58.823837   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:56.663246   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:59.161652   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:59.249339   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:01.747759   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:13:58.678014   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:13:58.692780   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:13:58.692846   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:13:58.730628   62670 cri.go:89] found id: ""
	I0704 00:13:58.730654   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.730664   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:13:58.730671   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:13:58.730732   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:13:58.772761   62670 cri.go:89] found id: ""
	I0704 00:13:58.772789   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.772800   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:13:58.772806   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:13:58.772871   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:13:58.809591   62670 cri.go:89] found id: ""
	I0704 00:13:58.809623   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.809637   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:13:58.809644   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:13:58.809708   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:13:58.848596   62670 cri.go:89] found id: ""
	I0704 00:13:58.848627   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.848638   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:13:58.848646   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:13:58.848705   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:13:58.888285   62670 cri.go:89] found id: ""
	I0704 00:13:58.888311   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.888318   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:13:58.888323   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:13:58.888373   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:13:58.924042   62670 cri.go:89] found id: ""
	I0704 00:13:58.924065   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.924073   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:13:58.924079   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:13:58.924132   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:13:58.963473   62670 cri.go:89] found id: ""
	I0704 00:13:58.963500   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.963510   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:13:58.963516   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:13:58.963581   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:13:58.998757   62670 cri.go:89] found id: ""
	I0704 00:13:58.998788   62670 logs.go:276] 0 containers: []
	W0704 00:13:58.998798   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:13:58.998808   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:13:58.998822   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:13:59.013844   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:13:59.013871   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:13:59.085847   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:13:59.085869   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:13:59.085882   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:13:59.174056   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:13:59.174087   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:13:59.219984   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:13:59.220011   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:01.774436   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:01.790044   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:01.790103   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:01.830337   62670 cri.go:89] found id: ""
	I0704 00:14:01.830366   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.830376   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:01.830383   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:01.830452   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:01.866704   62670 cri.go:89] found id: ""
	I0704 00:14:01.866731   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.866740   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:01.866746   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:01.866796   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:01.906702   62670 cri.go:89] found id: ""
	I0704 00:14:01.906737   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.906748   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:01.906756   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:01.906812   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:01.943348   62670 cri.go:89] found id: ""
	I0704 00:14:01.943381   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.943392   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:01.943400   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:01.943461   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:01.984096   62670 cri.go:89] found id: ""
	I0704 00:14:01.984123   62670 logs.go:276] 0 containers: []
	W0704 00:14:01.984131   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:01.984136   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:01.984182   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:02.021618   62670 cri.go:89] found id: ""
	I0704 00:14:02.021649   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.021659   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:02.021666   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:02.021726   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:02.058976   62670 cri.go:89] found id: ""
	I0704 00:14:02.059000   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.059008   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:02.059013   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:02.059064   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:02.097222   62670 cri.go:89] found id: ""
	I0704 00:14:02.097251   62670 logs.go:276] 0 containers: []
	W0704 00:14:02.097258   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:02.097278   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:02.097302   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:02.183349   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:02.183391   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:02.226898   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:02.226928   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:02.286978   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:02.287016   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:02.301361   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:02.301393   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:02.375663   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:01.322516   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:03.822514   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:01.662003   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:03.665021   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:04.245713   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:06.246308   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:04.876515   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:04.891254   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:04.891324   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:04.931465   62670 cri.go:89] found id: ""
	I0704 00:14:04.931488   62670 logs.go:276] 0 containers: []
	W0704 00:14:04.931496   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:04.931501   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:04.931549   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:04.969027   62670 cri.go:89] found id: ""
	I0704 00:14:04.969055   62670 logs.go:276] 0 containers: []
	W0704 00:14:04.969063   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:04.969068   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:04.969122   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:05.006380   62670 cri.go:89] found id: ""
	I0704 00:14:05.006407   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.006423   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:05.006430   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:05.006494   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:05.043050   62670 cri.go:89] found id: ""
	I0704 00:14:05.043090   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.043105   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:05.043113   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:05.043195   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:05.081549   62670 cri.go:89] found id: ""
	I0704 00:14:05.081575   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.081583   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:05.081588   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:05.081664   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:05.126673   62670 cri.go:89] found id: ""
	I0704 00:14:05.126693   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.126700   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:05.126706   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:05.126751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:05.166832   62670 cri.go:89] found id: ""
	I0704 00:14:05.166856   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.166864   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:05.166872   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:05.166920   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:05.205906   62670 cri.go:89] found id: ""
	I0704 00:14:05.205934   62670 logs.go:276] 0 containers: []
	W0704 00:14:05.205946   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:05.205957   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:05.205973   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:05.260955   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:05.260998   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:05.295937   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:05.295965   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:05.383161   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:05.383188   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:05.383202   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:05.465055   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:05.465100   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:08.007745   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:08.021065   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:08.021134   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:08.061808   62670 cri.go:89] found id: ""
	I0704 00:14:08.061838   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.061848   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:08.061854   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:08.061914   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:08.100542   62670 cri.go:89] found id: ""
	I0704 00:14:08.100573   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.100584   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:08.100592   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:08.100657   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:08.137335   62670 cri.go:89] found id: ""
	I0704 00:14:08.137369   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.137379   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:08.137385   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:08.137455   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:08.177087   62670 cri.go:89] found id: ""
	I0704 00:14:08.177116   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.177124   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:08.177129   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:08.177191   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:08.212652   62670 cri.go:89] found id: ""
	I0704 00:14:08.212686   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.212695   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:08.212701   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:08.212751   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:08.247717   62670 cri.go:89] found id: ""
	I0704 00:14:08.247737   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.247745   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:08.247750   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:08.247805   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:05.824730   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.323006   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:06.160967   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.162407   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:10.163649   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.247565   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:10.745585   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:08.285525   62670 cri.go:89] found id: ""
	I0704 00:14:08.285556   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.285568   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:08.285576   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:08.285637   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:08.325978   62670 cri.go:89] found id: ""
	I0704 00:14:08.326007   62670 logs.go:276] 0 containers: []
	W0704 00:14:08.326017   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:08.326027   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:08.326042   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:08.382407   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:08.382440   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:08.397945   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:08.397979   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:08.468650   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:08.468676   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:08.468691   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:08.543581   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:08.543615   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:11.085683   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:11.102003   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:11.102093   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:11.142561   62670 cri.go:89] found id: ""
	I0704 00:14:11.142589   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.142597   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:11.142602   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:11.142671   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:11.180087   62670 cri.go:89] found id: ""
	I0704 00:14:11.180110   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.180118   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:11.180123   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:11.180202   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:11.220123   62670 cri.go:89] found id: ""
	I0704 00:14:11.220147   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.220173   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:11.220182   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:11.220239   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:11.260418   62670 cri.go:89] found id: ""
	I0704 00:14:11.260445   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.260455   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:11.260462   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:11.260521   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:11.297923   62670 cri.go:89] found id: ""
	I0704 00:14:11.297976   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.297989   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:11.297999   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:11.298083   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:11.335903   62670 cri.go:89] found id: ""
	I0704 00:14:11.335934   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.335945   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:11.335954   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:11.336020   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:11.371965   62670 cri.go:89] found id: ""
	I0704 00:14:11.371997   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.372007   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:11.372013   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:11.372075   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:11.409129   62670 cri.go:89] found id: ""
	I0704 00:14:11.409159   62670 logs.go:276] 0 containers: []
	W0704 00:14:11.409170   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:11.409181   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:11.409194   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:11.464994   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:11.465032   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:11.480084   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:11.480112   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:11.564533   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:11.564560   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:11.564574   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:11.645033   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:11.645068   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:10.323124   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:12.323251   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:12.663774   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:15.161542   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:12.746307   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:15.246158   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:14.195211   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:14.209606   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:14.209660   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:14.252041   62670 cri.go:89] found id: ""
	I0704 00:14:14.252066   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.252081   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:14.252089   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:14.252149   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:14.290619   62670 cri.go:89] found id: ""
	I0704 00:14:14.290647   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.290655   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:14.290660   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:14.290717   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:14.328731   62670 cri.go:89] found id: ""
	I0704 00:14:14.328762   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.328773   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:14.328780   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:14.328842   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:14.370794   62670 cri.go:89] found id: ""
	I0704 00:14:14.370825   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.370835   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:14.370842   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:14.370904   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:14.406474   62670 cri.go:89] found id: ""
	I0704 00:14:14.406505   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.406516   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:14.406523   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:14.406582   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:14.442515   62670 cri.go:89] found id: ""
	I0704 00:14:14.442547   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.442558   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:14.442566   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:14.442624   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:14.480798   62670 cri.go:89] found id: ""
	I0704 00:14:14.480827   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.480838   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:14.480844   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:14.480904   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:14.518187   62670 cri.go:89] found id: ""
	I0704 00:14:14.518210   62670 logs.go:276] 0 containers: []
	W0704 00:14:14.518217   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:14.518225   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:14.518236   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:14.572028   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:14.572060   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:14.586614   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:14.586641   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:14.659339   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:14.659362   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:14.659375   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:14.743802   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:14.743839   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:17.288666   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:17.304531   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:17.304600   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:17.348705   62670 cri.go:89] found id: ""
	I0704 00:14:17.348730   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.348738   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:17.348749   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:17.348798   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:17.387821   62670 cri.go:89] found id: ""
	I0704 00:14:17.387844   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.387852   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:17.387858   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:17.387934   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:17.425442   62670 cri.go:89] found id: ""
	I0704 00:14:17.425470   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.425480   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:17.425487   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:17.425545   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:17.471216   62670 cri.go:89] found id: ""
	I0704 00:14:17.471243   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.471255   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:17.471262   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:17.471321   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:17.520905   62670 cri.go:89] found id: ""
	I0704 00:14:17.520935   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.520942   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:17.520947   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:17.520997   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:17.577627   62670 cri.go:89] found id: ""
	I0704 00:14:17.577648   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.577655   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:17.577661   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:17.577715   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:17.619018   62670 cri.go:89] found id: ""
	I0704 00:14:17.619046   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.619054   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:17.619061   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:17.619124   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:17.664993   62670 cri.go:89] found id: ""
	I0704 00:14:17.665020   62670 logs.go:276] 0 containers: []
	W0704 00:14:17.665029   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:17.665037   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:17.665049   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:17.743823   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:17.743845   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:17.743857   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:17.821339   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:17.821371   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:17.866189   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:17.866226   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:17.919854   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:17.919903   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:14.823677   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:16.825187   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:17.662772   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:20.161988   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:17.748067   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:20.245022   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:22.246620   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:20.435448   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:20.450556   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:20.450617   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:20.491980   62670 cri.go:89] found id: ""
	I0704 00:14:20.492010   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.492018   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:20.492023   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:20.492071   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:20.532791   62670 cri.go:89] found id: ""
	I0704 00:14:20.532820   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.532829   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:20.532836   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:20.532892   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:20.569604   62670 cri.go:89] found id: ""
	I0704 00:14:20.569628   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.569635   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:20.569641   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:20.569688   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:20.610852   62670 cri.go:89] found id: ""
	I0704 00:14:20.610879   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.610887   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:20.610893   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:20.610950   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:20.648891   62670 cri.go:89] found id: ""
	I0704 00:14:20.648912   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.648920   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:20.648925   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:20.648984   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:20.690273   62670 cri.go:89] found id: ""
	I0704 00:14:20.690304   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.690315   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:20.690323   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:20.690381   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:20.725365   62670 cri.go:89] found id: ""
	I0704 00:14:20.725390   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.725398   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:20.725403   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:20.725478   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:20.768530   62670 cri.go:89] found id: ""
	I0704 00:14:20.768559   62670 logs.go:276] 0 containers: []
	W0704 00:14:20.768569   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:20.768579   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:20.768593   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:20.822896   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:20.822932   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:20.838881   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:20.838912   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:20.921516   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:20.921546   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:20.921560   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:20.999517   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:20.999553   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:19.324790   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:21.822737   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:23.823039   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:22.162348   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:24.162631   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:24.745842   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:27.245280   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:23.545947   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:23.560315   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:23.560397   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:23.602540   62670 cri.go:89] found id: ""
	I0704 00:14:23.602583   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.602596   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:23.602604   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:23.602664   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:23.639529   62670 cri.go:89] found id: ""
	I0704 00:14:23.639560   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.639571   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:23.639579   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:23.639644   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:23.687334   62670 cri.go:89] found id: ""
	I0704 00:14:23.687363   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.687374   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:23.687381   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:23.687450   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:23.728388   62670 cri.go:89] found id: ""
	I0704 00:14:23.728419   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.728427   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:23.728434   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:23.728484   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:23.769903   62670 cri.go:89] found id: ""
	I0704 00:14:23.769933   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.769944   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:23.769956   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:23.770016   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:23.810485   62670 cri.go:89] found id: ""
	I0704 00:14:23.810518   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.810529   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:23.810536   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:23.810621   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:23.854534   62670 cri.go:89] found id: ""
	I0704 00:14:23.854571   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.854582   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:23.854589   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:23.854647   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:23.892229   62670 cri.go:89] found id: ""
	I0704 00:14:23.892257   62670 logs.go:276] 0 containers: []
	W0704 00:14:23.892266   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:23.892278   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:23.892291   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:23.944758   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:23.944793   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:23.959115   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:23.959152   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:24.035480   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:24.035501   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:24.035513   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:24.113401   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:24.113447   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:26.655506   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:26.669883   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:26.669964   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:26.705899   62670 cri.go:89] found id: ""
	I0704 00:14:26.705926   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.705934   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:26.705940   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:26.705997   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:26.742991   62670 cri.go:89] found id: ""
	I0704 00:14:26.743016   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.743025   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:26.743031   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:26.743090   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:26.781650   62670 cri.go:89] found id: ""
	I0704 00:14:26.781678   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.781693   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:26.781700   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:26.781760   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:26.816879   62670 cri.go:89] found id: ""
	I0704 00:14:26.816902   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.816909   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:26.816914   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:26.816957   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:26.854271   62670 cri.go:89] found id: ""
	I0704 00:14:26.854301   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.854316   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:26.854324   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:26.854384   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:26.892771   62670 cri.go:89] found id: ""
	I0704 00:14:26.892802   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.892813   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:26.892821   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:26.892880   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:26.931820   62670 cri.go:89] found id: ""
	I0704 00:14:26.931849   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.931859   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:26.931865   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:26.931947   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:26.967633   62670 cri.go:89] found id: ""
	I0704 00:14:26.967659   62670 logs.go:276] 0 containers: []
	W0704 00:14:26.967669   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:26.967679   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:26.967700   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:26.983916   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:26.983951   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:27.063412   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:27.063436   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:27.063451   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:27.147005   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:27.147044   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:27.189732   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:27.189759   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:25.824267   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:27.826810   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:26.662688   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:28.663384   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:29.248447   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:31.745919   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:29.747294   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:29.762194   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:29.762272   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:29.799103   62670 cri.go:89] found id: ""
	I0704 00:14:29.799132   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.799142   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:29.799149   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:29.799215   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:29.843373   62670 cri.go:89] found id: ""
	I0704 00:14:29.843399   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.843407   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:29.843412   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:29.843474   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:29.880622   62670 cri.go:89] found id: ""
	I0704 00:14:29.880650   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.880660   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:29.880667   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:29.880724   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:29.917560   62670 cri.go:89] found id: ""
	I0704 00:14:29.917590   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.917599   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:29.917605   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:29.917656   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:29.954983   62670 cri.go:89] found id: ""
	I0704 00:14:29.955006   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.955013   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:29.955018   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:29.955068   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:29.991784   62670 cri.go:89] found id: ""
	I0704 00:14:29.991811   62670 logs.go:276] 0 containers: []
	W0704 00:14:29.991819   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:29.991824   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:29.991870   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:30.031174   62670 cri.go:89] found id: ""
	I0704 00:14:30.031203   62670 logs.go:276] 0 containers: []
	W0704 00:14:30.031210   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:30.031218   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:30.031268   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:30.069502   62670 cri.go:89] found id: ""
	I0704 00:14:30.069533   62670 logs.go:276] 0 containers: []
	W0704 00:14:30.069542   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:30.069552   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:30.069567   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:30.111185   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:30.111213   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:30.167419   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:30.167456   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:30.181876   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:30.181908   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:30.255378   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:30.255407   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:30.255426   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:32.837655   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:32.853085   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:32.853150   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:32.898490   62670 cri.go:89] found id: ""
	I0704 00:14:32.898520   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.898531   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:32.898540   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:32.898626   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:32.946293   62670 cri.go:89] found id: ""
	I0704 00:14:32.946326   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.946336   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:32.946343   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:32.946402   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:32.983499   62670 cri.go:89] found id: ""
	I0704 00:14:32.983529   62670 logs.go:276] 0 containers: []
	W0704 00:14:32.983540   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:32.983548   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:32.983610   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:33.022340   62670 cri.go:89] found id: ""
	I0704 00:14:33.022362   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.022370   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:33.022375   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:33.022420   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:33.066921   62670 cri.go:89] found id: ""
	I0704 00:14:33.066946   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.066956   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:33.066963   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:33.067024   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:33.116317   62670 cri.go:89] found id: ""
	I0704 00:14:33.116340   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.116348   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:33.116354   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:33.116416   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:33.153301   62670 cri.go:89] found id: ""
	I0704 00:14:33.153332   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.153343   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:33.153350   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:33.153411   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:33.190851   62670 cri.go:89] found id: ""
	I0704 00:14:33.190884   62670 logs.go:276] 0 containers: []
	W0704 00:14:33.190896   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:33.190905   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:33.190917   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:33.248253   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:33.248288   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:30.323119   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:32.823348   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:31.161811   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:33.662270   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:34.246812   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:36.246992   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:33.263593   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:33.263620   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:33.339975   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:33.340000   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:33.340018   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:33.423768   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:33.423814   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:35.969547   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:35.984139   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:35.984219   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:36.028221   62670 cri.go:89] found id: ""
	I0704 00:14:36.028251   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.028263   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:36.028270   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:36.028330   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:36.067331   62670 cri.go:89] found id: ""
	I0704 00:14:36.067362   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.067370   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:36.067375   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:36.067437   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:36.105498   62670 cri.go:89] found id: ""
	I0704 00:14:36.105531   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.105543   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:36.105552   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:36.105618   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:36.144536   62670 cri.go:89] found id: ""
	I0704 00:14:36.144565   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.144576   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:36.144584   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:36.144652   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:36.184010   62670 cri.go:89] found id: ""
	I0704 00:14:36.184035   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.184048   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:36.184053   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:36.184099   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:36.221730   62670 cri.go:89] found id: ""
	I0704 00:14:36.221781   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.221790   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:36.221795   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:36.221843   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:36.261907   62670 cri.go:89] found id: ""
	I0704 00:14:36.261940   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.261952   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:36.261959   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:36.262009   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:36.296878   62670 cri.go:89] found id: ""
	I0704 00:14:36.296899   62670 logs.go:276] 0 containers: []
	W0704 00:14:36.296906   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:36.296915   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:36.296926   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:36.350226   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:36.350265   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:36.364632   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:36.364663   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:36.446351   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:36.446382   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:36.446400   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:36.535752   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:36.535802   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:35.322895   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:37.323357   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:36.166275   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:38.662345   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:38.745454   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:41.247351   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:39.079686   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:39.094225   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:39.094291   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:39.139521   62670 cri.go:89] found id: ""
	I0704 00:14:39.139551   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.139563   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:39.139572   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:39.139637   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:39.182411   62670 cri.go:89] found id: ""
	I0704 00:14:39.182439   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.182447   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:39.182453   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:39.182505   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:39.224135   62670 cri.go:89] found id: ""
	I0704 00:14:39.224158   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.224170   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:39.224175   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:39.224237   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:39.264800   62670 cri.go:89] found id: ""
	I0704 00:14:39.264829   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.264839   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:39.264847   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:39.264910   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:39.309072   62670 cri.go:89] found id: ""
	I0704 00:14:39.309102   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.309113   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:39.309121   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:39.309181   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:39.349790   62670 cri.go:89] found id: ""
	I0704 00:14:39.349818   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.349828   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:39.349835   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:39.349895   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:39.387062   62670 cri.go:89] found id: ""
	I0704 00:14:39.387093   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.387105   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:39.387112   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:39.387164   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:39.427503   62670 cri.go:89] found id: ""
	I0704 00:14:39.427530   62670 logs.go:276] 0 containers: []
	W0704 00:14:39.427538   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:39.427546   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:39.427558   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:39.442049   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:39.442076   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:39.525799   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:39.525824   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:39.525840   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:39.602646   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:39.602679   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:39.645739   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:39.645772   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:42.201986   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:42.216166   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:42.216236   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:42.253124   62670 cri.go:89] found id: ""
	I0704 00:14:42.253152   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.253167   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:42.253174   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:42.253231   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:42.293398   62670 cri.go:89] found id: ""
	I0704 00:14:42.293422   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.293430   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:42.293436   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:42.293488   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:42.334382   62670 cri.go:89] found id: ""
	I0704 00:14:42.334412   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.334423   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:42.334430   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:42.334488   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:42.374792   62670 cri.go:89] found id: ""
	I0704 00:14:42.374820   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.374832   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:42.374838   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:42.374889   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:42.416220   62670 cri.go:89] found id: ""
	I0704 00:14:42.416251   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.416263   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:42.416271   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:42.416331   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:42.462923   62670 cri.go:89] found id: ""
	I0704 00:14:42.462955   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.462966   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:42.462974   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:42.463043   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:42.503410   62670 cri.go:89] found id: ""
	I0704 00:14:42.503442   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.503452   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:42.503460   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:42.503528   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:42.542599   62670 cri.go:89] found id: ""
	I0704 00:14:42.542623   62670 logs.go:276] 0 containers: []
	W0704 00:14:42.542632   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:42.542639   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:42.542652   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:42.622303   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:42.622328   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:42.622347   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:42.703629   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:42.703666   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:42.747762   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:42.747793   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:42.803506   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:42.803549   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:39.826275   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:42.323764   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:41.163336   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:43.662061   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:45.664452   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:43.745575   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:46.250310   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:45.320238   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:45.334630   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:45.334692   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:45.376760   62670 cri.go:89] found id: ""
	I0704 00:14:45.376785   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.376793   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:14:45.376797   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:45.376882   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:45.414165   62670 cri.go:89] found id: ""
	I0704 00:14:45.414197   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.414208   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:14:45.414216   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:45.414278   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:45.451469   62670 cri.go:89] found id: ""
	I0704 00:14:45.451496   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.451504   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:14:45.451509   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:45.451558   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:45.487994   62670 cri.go:89] found id: ""
	I0704 00:14:45.488025   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.488037   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:14:45.488051   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:45.488110   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:45.529430   62670 cri.go:89] found id: ""
	I0704 00:14:45.529455   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.529463   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:14:45.529469   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:45.529520   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:45.571848   62670 cri.go:89] found id: ""
	I0704 00:14:45.571897   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.571909   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:14:45.571921   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:45.571994   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:45.607804   62670 cri.go:89] found id: ""
	I0704 00:14:45.607828   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.607835   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:45.607840   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:14:45.607908   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:14:45.644183   62670 cri.go:89] found id: ""
	I0704 00:14:45.644211   62670 logs.go:276] 0 containers: []
	W0704 00:14:45.644219   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:14:45.644227   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:45.644240   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:45.727677   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:14:45.727717   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:45.767528   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:45.767554   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:45.835243   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:45.835285   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:45.849921   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:45.849957   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:14:45.928404   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0704 00:14:44.823177   62327 pod_ready.go:102] pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:46.821947   62327 pod_ready.go:81] duration metric: took 4m0.006234793s for pod "metrics-server-569cc877fc-jpmsg" in "kube-system" namespace to be "Ready" ...
	E0704 00:14:46.821973   62327 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0704 00:14:46.821981   62327 pod_ready.go:38] duration metric: took 4m4.549820824s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:14:46.821996   62327 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:14:46.822029   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:46.822072   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:46.884166   62327 cri.go:89] found id: "2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:46.884208   62327 cri.go:89] found id: ""
	I0704 00:14:46.884217   62327 logs.go:276] 1 containers: [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d]
	I0704 00:14:46.884293   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:46.889964   62327 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:46.890048   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:46.929569   62327 cri.go:89] found id: "e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:46.929601   62327 cri.go:89] found id: ""
	I0704 00:14:46.929609   62327 logs.go:276] 1 containers: [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864]
	I0704 00:14:46.929653   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:46.934896   62327 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:46.934969   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:46.975093   62327 cri.go:89] found id: "ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:46.975116   62327 cri.go:89] found id: ""
	I0704 00:14:46.975125   62327 logs.go:276] 1 containers: [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3]
	I0704 00:14:46.975180   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:46.979604   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:46.979663   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:47.018423   62327 cri.go:89] found id: "bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:47.018442   62327 cri.go:89] found id: ""
	I0704 00:14:47.018449   62327 logs.go:276] 1 containers: [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0]
	I0704 00:14:47.018514   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.022963   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:47.023028   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:47.067573   62327 cri.go:89] found id: "0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:47.067599   62327 cri.go:89] found id: ""
	I0704 00:14:47.067608   62327 logs.go:276] 1 containers: [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78]
	I0704 00:14:47.067657   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.072342   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:47.072426   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:47.111485   62327 cri.go:89] found id: "49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:47.111514   62327 cri.go:89] found id: ""
	I0704 00:14:47.111524   62327 logs.go:276] 1 containers: [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d]
	I0704 00:14:47.111581   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.116173   62327 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:47.116256   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:47.166673   62327 cri.go:89] found id: ""
	I0704 00:14:47.166703   62327 logs.go:276] 0 containers: []
	W0704 00:14:47.166711   62327 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:47.166717   62327 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:14:47.166771   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:14:47.209591   62327 cri.go:89] found id: "5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:47.209626   62327 cri.go:89] found id: "0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:47.209632   62327 cri.go:89] found id: ""
	I0704 00:14:47.209642   62327 logs.go:276] 2 containers: [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085]
	I0704 00:14:47.209699   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.214409   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:47.218745   62327 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:47.218768   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:47.762248   62327 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:47.762293   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:47.819035   62327 logs.go:123] Gathering logs for kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] ...
	I0704 00:14:47.819077   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:47.874456   62327 logs.go:123] Gathering logs for coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] ...
	I0704 00:14:47.874499   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:47.931685   62327 logs.go:123] Gathering logs for kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] ...
	I0704 00:14:47.931714   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:47.969812   62327 logs.go:123] Gathering logs for kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] ...
	I0704 00:14:47.969842   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:48.023510   62327 logs.go:123] Gathering logs for storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] ...
	I0704 00:14:48.023547   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:48.067970   62327 logs.go:123] Gathering logs for container status ...
	I0704 00:14:48.068001   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:48.121578   62327 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:48.121609   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:48.139510   62327 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:48.139535   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:14:48.264544   62327 logs.go:123] Gathering logs for etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] ...
	I0704 00:14:48.264570   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:48.329270   62327 logs.go:123] Gathering logs for kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] ...
	I0704 00:14:48.329311   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:48.371067   62327 logs.go:123] Gathering logs for storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] ...
	I0704 00:14:48.371097   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:48.162755   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:50.661630   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:48.428750   62670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:48.442617   62670 kubeadm.go:591] duration metric: took 4m1.823242959s to restartPrimaryControlPlane
	W0704 00:14:48.442701   62670 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0704 00:14:48.442735   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:14:51.574916   62670 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.132142314s)
	I0704 00:14:51.575001   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:14:51.593744   62670 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:14:51.607429   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:14:51.620071   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:14:51.620097   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:14:51.620151   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:14:51.633472   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:14:51.633547   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:14:51.647551   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:14:51.658795   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:14:51.658871   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:14:51.671580   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:14:51.682217   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:14:51.682291   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:14:51.693874   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:14:51.705614   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:14:51.705697   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:14:51.720386   62670 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:14:51.810530   62670 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0704 00:14:51.810597   62670 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:14:51.968629   62670 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:14:51.968735   62670 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:14:51.968851   62670 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:14:52.188159   62670 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:14:48.745609   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:50.746068   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:52.190231   62670 out.go:204]   - Generating certificates and keys ...
	I0704 00:14:52.192011   62670 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:14:52.192101   62670 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:14:52.192206   62670 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:14:52.192311   62670 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:14:52.192412   62670 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:14:52.192488   62670 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:14:52.192573   62670 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:14:52.192648   62670 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:14:52.192747   62670 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:14:52.193086   62670 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:14:52.193249   62670 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:14:52.193335   62670 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:14:52.325727   62670 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:14:52.485153   62670 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:14:52.676389   62670 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:14:52.990595   62670 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:14:53.007051   62670 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:14:53.008346   62670 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:14:53.008434   62670 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:14:53.160272   62670 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:14:53.162449   62670 out.go:204]   - Booting up control plane ...
	I0704 00:14:53.162586   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:14:53.177983   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:14:53.179996   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:14:53.180911   62670 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:14:53.183085   62670 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0704 00:14:50.909242   62327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:14:50.926516   62327 api_server.go:72] duration metric: took 4m15.870455521s to wait for apiserver process to appear ...
	I0704 00:14:50.926548   62327 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:14:50.926594   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:50.926650   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:50.969608   62327 cri.go:89] found id: "2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:50.969636   62327 cri.go:89] found id: ""
	I0704 00:14:50.969646   62327 logs.go:276] 1 containers: [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d]
	I0704 00:14:50.969711   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:50.974011   62327 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:50.974081   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:51.016808   62327 cri.go:89] found id: "e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:51.016842   62327 cri.go:89] found id: ""
	I0704 00:14:51.016858   62327 logs.go:276] 1 containers: [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864]
	I0704 00:14:51.016916   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.021297   62327 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:51.021371   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:51.061674   62327 cri.go:89] found id: "ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:51.061699   62327 cri.go:89] found id: ""
	I0704 00:14:51.061707   62327 logs.go:276] 1 containers: [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3]
	I0704 00:14:51.061761   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.066197   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:51.066256   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:51.108727   62327 cri.go:89] found id: "bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:51.108750   62327 cri.go:89] found id: ""
	I0704 00:14:51.108759   62327 logs.go:276] 1 containers: [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0]
	I0704 00:14:51.108805   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.113366   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:51.113425   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:51.156701   62327 cri.go:89] found id: "0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:51.156728   62327 cri.go:89] found id: ""
	I0704 00:14:51.156738   62327 logs.go:276] 1 containers: [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78]
	I0704 00:14:51.156803   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.162817   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:51.162891   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:51.208586   62327 cri.go:89] found id: "49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:51.208609   62327 cri.go:89] found id: ""
	I0704 00:14:51.208618   62327 logs.go:276] 1 containers: [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d]
	I0704 00:14:51.208678   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.213344   62327 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:51.213418   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:51.258697   62327 cri.go:89] found id: ""
	I0704 00:14:51.258721   62327 logs.go:276] 0 containers: []
	W0704 00:14:51.258728   62327 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:51.258733   62327 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:14:51.258783   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:14:51.301317   62327 cri.go:89] found id: "5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:51.301341   62327 cri.go:89] found id: "0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:51.301347   62327 cri.go:89] found id: ""
	I0704 00:14:51.301355   62327 logs.go:276] 2 containers: [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085]
	I0704 00:14:51.301460   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.306678   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:51.310993   62327 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:51.311014   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:14:51.433280   62327 logs.go:123] Gathering logs for kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] ...
	I0704 00:14:51.433313   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:51.498289   62327 logs.go:123] Gathering logs for coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] ...
	I0704 00:14:51.498325   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:51.538414   62327 logs.go:123] Gathering logs for kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] ...
	I0704 00:14:51.538449   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:51.580194   62327 logs.go:123] Gathering logs for kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] ...
	I0704 00:14:51.580232   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:51.650010   62327 logs.go:123] Gathering logs for container status ...
	I0704 00:14:51.650055   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:51.710727   62327 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:51.710768   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:51.785923   62327 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:51.785963   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:51.803951   62327 logs.go:123] Gathering logs for storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] ...
	I0704 00:14:51.803982   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:51.873020   62327 logs.go:123] Gathering logs for storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] ...
	I0704 00:14:51.873058   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:51.916694   62327 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:51.916725   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:52.378056   62327 logs.go:123] Gathering logs for etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] ...
	I0704 00:14:52.378103   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:52.436795   62327 logs.go:123] Gathering logs for kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] ...
	I0704 00:14:52.436835   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:52.662586   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:55.162992   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:52.746973   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:55.248126   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:54.977972   62327 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0704 00:14:54.982697   62327 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0704 00:14:54.983848   62327 api_server.go:141] control plane version: v1.30.2
	I0704 00:14:54.983868   62327 api_server.go:131] duration metric: took 4.057311938s to wait for apiserver health ...
	I0704 00:14:54.983887   62327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:14:54.983920   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:14:54.983972   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:14:55.022812   62327 cri.go:89] found id: "2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:55.022839   62327 cri.go:89] found id: ""
	I0704 00:14:55.022849   62327 logs.go:276] 1 containers: [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d]
	I0704 00:14:55.022906   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.027419   62327 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:14:55.027508   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:14:55.070889   62327 cri.go:89] found id: "e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:55.070914   62327 cri.go:89] found id: ""
	I0704 00:14:55.070924   62327 logs.go:276] 1 containers: [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864]
	I0704 00:14:55.070979   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.075970   62327 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:14:55.076036   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:14:55.121555   62327 cri.go:89] found id: "ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:55.121575   62327 cri.go:89] found id: ""
	I0704 00:14:55.121583   62327 logs.go:276] 1 containers: [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3]
	I0704 00:14:55.121627   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.126320   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:14:55.126378   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:14:55.168032   62327 cri.go:89] found id: "bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:55.168062   62327 cri.go:89] found id: ""
	I0704 00:14:55.168070   62327 logs.go:276] 1 containers: [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0]
	I0704 00:14:55.168134   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.172992   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:14:55.173069   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:14:55.215593   62327 cri.go:89] found id: "0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:55.215614   62327 cri.go:89] found id: ""
	I0704 00:14:55.215621   62327 logs.go:276] 1 containers: [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78]
	I0704 00:14:55.215668   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.220129   62327 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:14:55.220203   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:14:55.266429   62327 cri.go:89] found id: "49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:55.266458   62327 cri.go:89] found id: ""
	I0704 00:14:55.266467   62327 logs.go:276] 1 containers: [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d]
	I0704 00:14:55.266525   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.275640   62327 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:14:55.275706   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:14:55.316569   62327 cri.go:89] found id: ""
	I0704 00:14:55.316603   62327 logs.go:276] 0 containers: []
	W0704 00:14:55.316615   62327 logs.go:278] No container was found matching "kindnet"
	I0704 00:14:55.316622   62327 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:14:55.316682   62327 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:14:55.354222   62327 cri.go:89] found id: "5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:55.354248   62327 cri.go:89] found id: "0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:55.354252   62327 cri.go:89] found id: ""
	I0704 00:14:55.354259   62327 logs.go:276] 2 containers: [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085]
	I0704 00:14:55.354305   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.359060   62327 ssh_runner.go:195] Run: which crictl
	I0704 00:14:55.363522   62327 logs.go:123] Gathering logs for storage-provisioner [0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085] ...
	I0704 00:14:55.363545   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a20f1a805446bca7f58caf6dbf928e5f0f42daef90878a5a2e0e4a0d1187085"
	I0704 00:14:55.402950   62327 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:14:55.402975   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:14:55.826071   62327 logs.go:123] Gathering logs for kubelet ...
	I0704 00:14:55.826108   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:14:55.882804   62327 logs.go:123] Gathering logs for storage-provisioner [5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f] ...
	I0704 00:14:55.882836   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5718f2328eaa9db72a54df21230a2a05a5eecaf0e0a16ebf190ae8117ecc822f"
	I0704 00:14:55.924690   62327 logs.go:123] Gathering logs for kube-apiserver [2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d] ...
	I0704 00:14:55.924726   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c26905e9827162401941ba8dbf1c28733e1fbb59ab27d44b021c42f4682b16d"
	I0704 00:14:55.981466   62327 logs.go:123] Gathering logs for etcd [e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864] ...
	I0704 00:14:55.981500   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2490c1548394186fc58f17dd02a60d82313354065fe4757e9925c34f73ae864"
	I0704 00:14:56.043846   62327 logs.go:123] Gathering logs for coredns [ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3] ...
	I0704 00:14:56.043914   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbd6757ef6ec67213d6202406f275d1b97ba48c359aa271938d5cf400387ee3"
	I0704 00:14:56.085096   62327 logs.go:123] Gathering logs for kube-scheduler [bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0] ...
	I0704 00:14:56.085122   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bac9db9686284f4af06995ae21bcca4182659cf9235c99d57819583f0640f5f0"
	I0704 00:14:56.127568   62327 logs.go:123] Gathering logs for kube-proxy [0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78] ...
	I0704 00:14:56.127601   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0758cc11c578a1895e1a46789440b80e9039b305d7fd53478133c43224be3e78"
	I0704 00:14:56.169457   62327 logs.go:123] Gathering logs for kube-controller-manager [49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d] ...
	I0704 00:14:56.169492   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49302273be8ed500b6bbd657a9087632abe0b21f3f9ff0ca4c38cfea015b088d"
	I0704 00:14:56.224005   62327 logs.go:123] Gathering logs for dmesg ...
	I0704 00:14:56.224039   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:14:56.240031   62327 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:14:56.240059   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:14:56.366718   62327 logs.go:123] Gathering logs for container status ...
	I0704 00:14:56.366759   62327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:14:58.924300   62327 system_pods.go:59] 8 kube-system pods found
	I0704 00:14:58.924332   62327 system_pods.go:61] "coredns-7db6d8ff4d-2bn7d" [e6d756a8-df4e-414b-b44c-32fb728c6feb] Running
	I0704 00:14:58.924339   62327 system_pods.go:61] "etcd-embed-certs-687975" [72f47af0-57ca-4529-b6e4-92f543f8ada9] Running
	I0704 00:14:58.924344   62327 system_pods.go:61] "kube-apiserver-embed-certs-687975" [e58beb1b-1984-4a9f-beb1-597cca55a0c0] Running
	I0704 00:14:58.924351   62327 system_pods.go:61] "kube-controller-manager-embed-certs-687975" [bdae6f32-b454-4a66-a3d1-a22c2a64073d] Running
	I0704 00:14:58.924355   62327 system_pods.go:61] "kube-proxy-9phtm" [6b5a4c0e-632d-4c1c-bfa7-f53448618efb] Running
	I0704 00:14:58.924360   62327 system_pods.go:61] "kube-scheduler-embed-certs-687975" [72640f73-25f5-47ec-8da0-396fc31fa653] Running
	I0704 00:14:58.924369   62327 system_pods.go:61] "metrics-server-569cc877fc-jpmsg" [e2561edc-d580-461c-acae-218e6b7a2f67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:14:58.924376   62327 system_pods.go:61] "storage-provisioner" [1ac6edec-3e4e-42bd-8848-1388594611e1] Running
	I0704 00:14:58.924384   62327 system_pods.go:74] duration metric: took 3.940490235s to wait for pod list to return data ...
	I0704 00:14:58.924392   62327 default_sa.go:34] waiting for default service account to be created ...
	I0704 00:14:58.926911   62327 default_sa.go:45] found service account: "default"
	I0704 00:14:58.926930   62327 default_sa.go:55] duration metric: took 2.52887ms for default service account to be created ...
	I0704 00:14:58.926938   62327 system_pods.go:116] waiting for k8s-apps to be running ...
	I0704 00:14:58.933142   62327 system_pods.go:86] 8 kube-system pods found
	I0704 00:14:58.933173   62327 system_pods.go:89] "coredns-7db6d8ff4d-2bn7d" [e6d756a8-df4e-414b-b44c-32fb728c6feb] Running
	I0704 00:14:58.933181   62327 system_pods.go:89] "etcd-embed-certs-687975" [72f47af0-57ca-4529-b6e4-92f543f8ada9] Running
	I0704 00:14:58.933188   62327 system_pods.go:89] "kube-apiserver-embed-certs-687975" [e58beb1b-1984-4a9f-beb1-597cca55a0c0] Running
	I0704 00:14:58.933200   62327 system_pods.go:89] "kube-controller-manager-embed-certs-687975" [bdae6f32-b454-4a66-a3d1-a22c2a64073d] Running
	I0704 00:14:58.933207   62327 system_pods.go:89] "kube-proxy-9phtm" [6b5a4c0e-632d-4c1c-bfa7-f53448618efb] Running
	I0704 00:14:58.933213   62327 system_pods.go:89] "kube-scheduler-embed-certs-687975" [72640f73-25f5-47ec-8da0-396fc31fa653] Running
	I0704 00:14:58.933225   62327 system_pods.go:89] "metrics-server-569cc877fc-jpmsg" [e2561edc-d580-461c-acae-218e6b7a2f67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:14:58.933234   62327 system_pods.go:89] "storage-provisioner" [1ac6edec-3e4e-42bd-8848-1388594611e1] Running
	I0704 00:14:58.933245   62327 system_pods.go:126] duration metric: took 6.300951ms to wait for k8s-apps to be running ...
	I0704 00:14:58.933257   62327 system_svc.go:44] waiting for kubelet service to be running ....
	I0704 00:14:58.933302   62327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:14:58.948861   62327 system_svc.go:56] duration metric: took 15.596446ms WaitForService to wait for kubelet
	I0704 00:14:58.948885   62327 kubeadm.go:576] duration metric: took 4m23.892830394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:14:58.948905   62327 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:14:58.951958   62327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:14:58.951981   62327 node_conditions.go:123] node cpu capacity is 2
	I0704 00:14:58.951991   62327 node_conditions.go:105] duration metric: took 3.081821ms to run NodePressure ...
	I0704 00:14:58.952003   62327 start.go:240] waiting for startup goroutines ...
	I0704 00:14:58.952012   62327 start.go:245] waiting for cluster config update ...
	I0704 00:14:58.952026   62327 start.go:254] writing updated cluster config ...
	I0704 00:14:58.952305   62327 ssh_runner.go:195] Run: rm -f paused
	I0704 00:14:59.001106   62327 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0704 00:14:59.003224   62327 out.go:177] * Done! kubectl is now configured to use "embed-certs-687975" cluster and "default" namespace by default
	I0704 00:14:57.163117   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:59.662680   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:14:57.746248   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:00.247122   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:02.161384   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:04.162095   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:02.745649   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:04.745980   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:07.245583   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:06.662618   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:08.665863   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:09.246591   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:11.745135   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:11.162596   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:13.163740   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:15.662576   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:13.745872   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:15.746141   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:18.161591   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:20.162965   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:18.245285   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:20.247546   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:22.662152   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:24.662781   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:22.745066   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:24.746068   62905 pod_ready.go:102] pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:25.247225   62905 pod_ready.go:81] duration metric: took 4m0.008398676s for pod "metrics-server-569cc877fc-v8qw2" in "kube-system" namespace to be "Ready" ...
	E0704 00:15:25.247253   62905 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0704 00:15:25.247263   62905 pod_ready.go:38] duration metric: took 4m1.998567833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:15:25.247295   62905 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:15:25.247337   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:15:25.247393   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:15:25.305703   62905 cri.go:89] found id: "f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:25.305731   62905 cri.go:89] found id: ""
	I0704 00:15:25.305741   62905 logs.go:276] 1 containers: [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658]
	I0704 00:15:25.305811   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.311662   62905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:15:25.311740   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:15:25.359066   62905 cri.go:89] found id: "5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:25.359091   62905 cri.go:89] found id: ""
	I0704 00:15:25.359100   62905 logs.go:276] 1 containers: [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9]
	I0704 00:15:25.359157   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.364430   62905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:15:25.364512   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:15:25.411897   62905 cri.go:89] found id: "7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:25.411923   62905 cri.go:89] found id: ""
	I0704 00:15:25.411935   62905 logs.go:276] 1 containers: [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a]
	I0704 00:15:25.411991   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.416560   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:15:25.416629   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:15:25.457817   62905 cri.go:89] found id: "06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:25.457844   62905 cri.go:89] found id: ""
	I0704 00:15:25.457853   62905 logs.go:276] 1 containers: [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8]
	I0704 00:15:25.457904   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.462323   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:15:25.462392   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:15:25.502180   62905 cri.go:89] found id: "54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:25.502204   62905 cri.go:89] found id: ""
	I0704 00:15:25.502212   62905 logs.go:276] 1 containers: [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d]
	I0704 00:15:25.502256   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.506759   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:15:25.506817   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:15:25.546268   62905 cri.go:89] found id: "13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:25.546292   62905 cri.go:89] found id: ""
	I0704 00:15:25.546306   62905 logs.go:276] 1 containers: [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e]
	I0704 00:15:25.546365   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.550998   62905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:15:25.551076   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:15:25.588722   62905 cri.go:89] found id: ""
	I0704 00:15:25.588752   62905 logs.go:276] 0 containers: []
	W0704 00:15:25.588762   62905 logs.go:278] No container was found matching "kindnet"
	I0704 00:15:25.588771   62905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:15:25.588832   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:15:25.628294   62905 cri.go:89] found id: "916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:25.628328   62905 cri.go:89] found id: "ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:25.628333   62905 cri.go:89] found id: ""
	I0704 00:15:25.628339   62905 logs.go:276] 2 containers: [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2]
	I0704 00:15:25.628406   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.633517   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:25.639383   62905 logs.go:123] Gathering logs for kubelet ...
	I0704 00:15:25.639409   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:15:25.701468   62905 logs.go:123] Gathering logs for dmesg ...
	I0704 00:15:25.701507   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:15:25.717059   62905 logs.go:123] Gathering logs for coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] ...
	I0704 00:15:25.717089   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:25.757597   62905 logs.go:123] Gathering logs for kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] ...
	I0704 00:15:25.757624   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:25.798648   62905 logs.go:123] Gathering logs for storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] ...
	I0704 00:15:25.798679   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:25.843607   62905 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:15:25.843644   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:15:26.352356   62905 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:15:26.352403   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:15:26.510039   62905 logs.go:123] Gathering logs for kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] ...
	I0704 00:15:26.510073   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:26.563036   62905 logs.go:123] Gathering logs for etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] ...
	I0704 00:15:26.563102   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:26.606221   62905 logs.go:123] Gathering logs for kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] ...
	I0704 00:15:26.606251   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:26.650488   62905 logs.go:123] Gathering logs for kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] ...
	I0704 00:15:26.650531   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:26.704905   62905 logs.go:123] Gathering logs for storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] ...
	I0704 00:15:26.704937   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:26.743843   62905 logs.go:123] Gathering logs for container status ...
	I0704 00:15:26.743907   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
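The block above is minikube's per-component log collection: for each control-plane component it resolves the container ID with crictl and then tails that container's logs. A minimal shell sketch of the same pattern, assuming crictl is on the PATH and CRI-O is the runtime (component names and flags taken from the log above):

	# Sketch of the collection loop seen above; not part of the test run.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
	  for id in $(sudo crictl ps -a --quiet --name="$name"); do
	    echo "== $name ($id) =="
	    sudo crictl logs --tail 400 "$id"
	  done
	done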
	I0704 00:15:26.664421   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:29.160718   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:29.289651   62905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:15:29.313028   62905 api_server.go:72] duration metric: took 4m13.798223752s to wait for apiserver process to appear ...
	I0704 00:15:29.313062   62905 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:15:29.313101   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:15:29.313178   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:15:29.359867   62905 cri.go:89] found id: "f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:29.359900   62905 cri.go:89] found id: ""
	I0704 00:15:29.359910   62905 logs.go:276] 1 containers: [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658]
	I0704 00:15:29.359965   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.364602   62905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:15:29.364661   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:15:29.406662   62905 cri.go:89] found id: "5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:29.406690   62905 cri.go:89] found id: ""
	I0704 00:15:29.406697   62905 logs.go:276] 1 containers: [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9]
	I0704 00:15:29.406744   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.413217   62905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:15:29.413305   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:15:29.450066   62905 cri.go:89] found id: "7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:29.450093   62905 cri.go:89] found id: ""
	I0704 00:15:29.450102   62905 logs.go:276] 1 containers: [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a]
	I0704 00:15:29.450163   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.454966   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:15:29.455025   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:15:29.496445   62905 cri.go:89] found id: "06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:29.496465   62905 cri.go:89] found id: ""
	I0704 00:15:29.496471   62905 logs.go:276] 1 containers: [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8]
	I0704 00:15:29.496515   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.501125   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:15:29.501198   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:15:29.543841   62905 cri.go:89] found id: "54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:29.543864   62905 cri.go:89] found id: ""
	I0704 00:15:29.543884   62905 logs.go:276] 1 containers: [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d]
	I0704 00:15:29.543940   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.548613   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:15:29.548673   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:15:29.588709   62905 cri.go:89] found id: "13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:29.588729   62905 cri.go:89] found id: ""
	I0704 00:15:29.588735   62905 logs.go:276] 1 containers: [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e]
	I0704 00:15:29.588780   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.593039   62905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:15:29.593098   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:15:29.631751   62905 cri.go:89] found id: ""
	I0704 00:15:29.631775   62905 logs.go:276] 0 containers: []
	W0704 00:15:29.631782   62905 logs.go:278] No container was found matching "kindnet"
	I0704 00:15:29.631787   62905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:15:29.631841   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:15:29.674894   62905 cri.go:89] found id: "916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:29.674918   62905 cri.go:89] found id: "ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:29.674922   62905 cri.go:89] found id: ""
	I0704 00:15:29.674929   62905 logs.go:276] 2 containers: [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2]
	I0704 00:15:29.674983   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.679600   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:29.683770   62905 logs.go:123] Gathering logs for kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] ...
	I0704 00:15:29.683788   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:29.731148   62905 logs.go:123] Gathering logs for etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] ...
	I0704 00:15:29.731182   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:29.772172   62905 logs.go:123] Gathering logs for storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] ...
	I0704 00:15:29.772204   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:29.816299   62905 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:15:29.816332   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:15:30.222578   62905 logs.go:123] Gathering logs for kubelet ...
	I0704 00:15:30.222622   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:15:30.284120   62905 logs.go:123] Gathering logs for dmesg ...
	I0704 00:15:30.284169   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:15:30.300219   62905 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:15:30.300260   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:15:30.423779   62905 logs.go:123] Gathering logs for kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] ...
	I0704 00:15:30.423851   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:30.480952   62905 logs.go:123] Gathering logs for storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] ...
	I0704 00:15:30.480993   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:30.526318   62905 logs.go:123] Gathering logs for container status ...
	I0704 00:15:30.526352   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:15:30.574984   62905 logs.go:123] Gathering logs for coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] ...
	I0704 00:15:30.575012   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:30.618244   62905 logs.go:123] Gathering logs for kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] ...
	I0704 00:15:30.618275   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:30.657625   62905 logs.go:123] Gathering logs for kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] ...
	I0704 00:15:30.657649   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:33.184160   62670 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0704 00:15:33.184894   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:33.185105   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:15:31.162060   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:33.162393   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:35.164111   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:33.197007   62905 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8444/healthz ...
	I0704 00:15:33.201786   62905 api_server.go:279] https://192.168.50.164:8444/healthz returned 200:
	ok
	I0704 00:15:33.202719   62905 api_server.go:141] control plane version: v1.30.2
	I0704 00:15:33.202738   62905 api_server.go:131] duration metric: took 3.889668496s to wait for apiserver health ...
	I0704 00:15:33.202745   62905 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:15:33.202772   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:15:33.202825   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:15:33.246224   62905 cri.go:89] found id: "f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:33.246259   62905 cri.go:89] found id: ""
	I0704 00:15:33.246272   62905 logs.go:276] 1 containers: [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658]
	I0704 00:15:33.246343   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.256081   62905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:15:33.256160   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:15:33.296808   62905 cri.go:89] found id: "5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:33.296835   62905 cri.go:89] found id: ""
	I0704 00:15:33.296845   62905 logs.go:276] 1 containers: [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9]
	I0704 00:15:33.296902   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.301658   62905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:15:33.301729   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:15:33.353348   62905 cri.go:89] found id: "7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:33.353370   62905 cri.go:89] found id: ""
	I0704 00:15:33.353377   62905 logs.go:276] 1 containers: [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a]
	I0704 00:15:33.353428   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.358334   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:15:33.358413   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:15:33.402593   62905 cri.go:89] found id: "06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:33.402621   62905 cri.go:89] found id: ""
	I0704 00:15:33.402630   62905 logs.go:276] 1 containers: [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8]
	I0704 00:15:33.402696   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.407413   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:15:33.407482   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:15:33.461567   62905 cri.go:89] found id: "54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:33.461591   62905 cri.go:89] found id: ""
	I0704 00:15:33.461599   62905 logs.go:276] 1 containers: [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d]
	I0704 00:15:33.461663   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.467039   62905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:15:33.467115   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:15:33.510115   62905 cri.go:89] found id: "13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:33.510146   62905 cri.go:89] found id: ""
	I0704 00:15:33.510155   62905 logs.go:276] 1 containers: [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e]
	I0704 00:15:33.510215   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.515217   62905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:15:33.515281   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:15:33.554690   62905 cri.go:89] found id: ""
	I0704 00:15:33.554719   62905 logs.go:276] 0 containers: []
	W0704 00:15:33.554729   62905 logs.go:278] No container was found matching "kindnet"
	I0704 00:15:33.554737   62905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0704 00:15:33.554790   62905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0704 00:15:33.601911   62905 cri.go:89] found id: "916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:33.601937   62905 cri.go:89] found id: "ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:33.601944   62905 cri.go:89] found id: ""
	I0704 00:15:33.601952   62905 logs.go:276] 2 containers: [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2]
	I0704 00:15:33.602016   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.606884   62905 ssh_runner.go:195] Run: which crictl
	I0704 00:15:33.611328   62905 logs.go:123] Gathering logs for etcd [5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9] ...
	I0704 00:15:33.611356   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5629c8085daeb30fa0ef40336dbb648e23c8677f68a1d490ec19eee9b1b73ab9"
	I0704 00:15:33.657445   62905 logs.go:123] Gathering logs for coredns [7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a] ...
	I0704 00:15:33.657484   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dc19c0e5a3a027ab55d690315d77a185fc8183ea2157ac9418525773852449a"
	I0704 00:15:33.698153   62905 logs.go:123] Gathering logs for kube-scheduler [06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8] ...
	I0704 00:15:33.698185   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06f36aa92a09f5af030e8fbfd0a73dda73ca5763bc91a99477a158f8791886d8"
	I0704 00:15:33.740393   62905 logs.go:123] Gathering logs for kube-proxy [54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d] ...
	I0704 00:15:33.740425   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecbdc0a4753f88bc8db5d2c66bb6a778d3e3bd53917d3f31075a625e7bc15d"
	I0704 00:15:33.781017   62905 logs.go:123] Gathering logs for kube-controller-manager [13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e] ...
	I0704 00:15:33.781048   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a8615c20433d0a31a5c894c8027a4dc323dbaef5b1f47f735f86d44ca2054e"
	I0704 00:15:33.844822   62905 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:15:33.844857   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0704 00:15:33.966652   62905 logs.go:123] Gathering logs for kube-apiserver [f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658] ...
	I0704 00:15:33.966689   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f69caa2d9d0a4e466a84568cc8ea85740a7e3b689a3ecbb88d6f162892100658"
	I0704 00:15:34.022085   62905 logs.go:123] Gathering logs for storage-provisioner [916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a] ...
	I0704 00:15:34.022123   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 916f2ecfce3c5afcba6a7fe354707c9e6aeeabc8d49e456411cbc05bc162da0a"
	I0704 00:15:34.063492   62905 logs.go:123] Gathering logs for storage-provisioner [ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2] ...
	I0704 00:15:34.063515   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee9747ce58de5146320f42df0bb5a19976097703e87af4e5c217cbfadd3912a2"
	I0704 00:15:34.102349   62905 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:15:34.102379   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:15:34.472244   62905 logs.go:123] Gathering logs for container status ...
	I0704 00:15:34.472282   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:15:34.525394   62905 logs.go:123] Gathering logs for kubelet ...
	I0704 00:15:34.525427   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:15:34.581994   62905 logs.go:123] Gathering logs for dmesg ...
	I0704 00:15:34.582040   62905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:15:37.108663   62905 system_pods.go:59] 8 kube-system pods found
	I0704 00:15:37.108698   62905 system_pods.go:61] "coredns-7db6d8ff4d-jmq4s" [f9725f92-7635-4111-bf63-66dbef0155b2] Running
	I0704 00:15:37.108705   62905 system_pods.go:61] "etcd-default-k8s-diff-port-995404" [d5065c53-cda8-4c79-9d88-de10341356a8] Running
	I0704 00:15:37.108710   62905 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-995404" [81395c0d-d19c-4d3d-8935-c35a9507abdd] Running
	I0704 00:15:37.108716   62905 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-995404" [36828d21-843b-409c-a0b3-60293cb50c27] Running
	I0704 00:15:37.108723   62905 system_pods.go:61] "kube-proxy-pplqq" [3b74a8c2-1e91-449d-9be9-8891459dccbc] Running
	I0704 00:15:37.108728   62905 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-995404" [e87cc576-7a8d-43dd-9778-b13d751976be] Running
	I0704 00:15:37.108734   62905 system_pods.go:61] "metrics-server-569cc877fc-v8qw2" [d6a67fb7-5004-4c93-9023-fc470f786ae9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:15:37.108739   62905 system_pods.go:61] "storage-provisioner" [3adc3ff6-282f-4f53-879f-c73d71c76b74] Running
	I0704 00:15:37.108746   62905 system_pods.go:74] duration metric: took 3.905995932s to wait for pod list to return data ...
	I0704 00:15:37.108756   62905 default_sa.go:34] waiting for default service account to be created ...
	I0704 00:15:37.112853   62905 default_sa.go:45] found service account: "default"
	I0704 00:15:37.112885   62905 default_sa.go:55] duration metric: took 4.115587ms for default service account to be created ...
	I0704 00:15:37.112897   62905 system_pods.go:116] waiting for k8s-apps to be running ...
	I0704 00:15:37.119709   62905 system_pods.go:86] 8 kube-system pods found
	I0704 00:15:37.119743   62905 system_pods.go:89] "coredns-7db6d8ff4d-jmq4s" [f9725f92-7635-4111-bf63-66dbef0155b2] Running
	I0704 00:15:37.119749   62905 system_pods.go:89] "etcd-default-k8s-diff-port-995404" [d5065c53-cda8-4c79-9d88-de10341356a8] Running
	I0704 00:15:37.119754   62905 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-995404" [81395c0d-d19c-4d3d-8935-c35a9507abdd] Running
	I0704 00:15:37.119759   62905 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-995404" [36828d21-843b-409c-a0b3-60293cb50c27] Running
	I0704 00:15:37.119765   62905 system_pods.go:89] "kube-proxy-pplqq" [3b74a8c2-1e91-449d-9be9-8891459dccbc] Running
	I0704 00:15:37.119769   62905 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-995404" [e87cc576-7a8d-43dd-9778-b13d751976be] Running
	I0704 00:15:37.119776   62905 system_pods.go:89] "metrics-server-569cc877fc-v8qw2" [d6a67fb7-5004-4c93-9023-fc470f786ae9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:15:37.119782   62905 system_pods.go:89] "storage-provisioner" [3adc3ff6-282f-4f53-879f-c73d71c76b74] Running
	I0704 00:15:37.119791   62905 system_pods.go:126] duration metric: took 6.888276ms to wait for k8s-apps to be running ...
	I0704 00:15:37.119798   62905 system_svc.go:44] waiting for kubelet service to be running ....
	I0704 00:15:37.119855   62905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:15:37.138387   62905 system_svc.go:56] duration metric: took 18.578212ms WaitForService to wait for kubelet
	I0704 00:15:37.138430   62905 kubeadm.go:576] duration metric: took 4m21.623631424s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:15:37.138450   62905 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:15:37.141610   62905 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:15:37.141632   62905 node_conditions.go:123] node cpu capacity is 2
	I0704 00:15:37.141642   62905 node_conditions.go:105] duration metric: took 3.187777ms to run NodePressure ...
	I0704 00:15:37.141654   62905 start.go:240] waiting for startup goroutines ...
	I0704 00:15:37.141662   62905 start.go:245] waiting for cluster config update ...
	I0704 00:15:37.141675   62905 start.go:254] writing updated cluster config ...
	I0704 00:15:37.141954   62905 ssh_runner.go:195] Run: rm -f paused
	I0704 00:15:37.193685   62905 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0704 00:15:37.196118   62905 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-995404" cluster and "default" namespace by default
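At this point the default-k8s-diff-port-995404 profile is up, with metrics-server-569cc877fc-v8qw2 still Pending because its container never becomes ready. A quick way to confirm the same state outside the test harness (hypothetical commands, assuming the addon's usual k8s-app=metrics-server label):

	# Hypothetical verification; the context name comes from the "Done!" line above.
	kubectl --context default-k8s-diff-port-995404 -n kube-system get pods
	kubectl --context default-k8s-diff-port-995404 -n kube-system get pod -l k8s-app=metrics-server \
	  -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].status}'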
	I0704 00:15:38.185821   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:38.186070   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:15:37.662971   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:40.161724   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:42.162761   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:44.661578   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:48.186610   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:15:48.186866   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:15:46.661793   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:48.662395   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:51.161671   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:53.161831   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:55.162342   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:57.162917   62043 pod_ready.go:102] pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace has status "Ready":"False"
	I0704 00:15:58.655566   62043 pod_ready.go:81] duration metric: took 4m0.000513164s for pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace to be "Ready" ...
	E0704 00:15:58.655607   62043 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-qn22n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0704 00:15:58.655629   62043 pod_ready.go:38] duration metric: took 4m12.325655973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:15:58.655653   62043 kubeadm.go:591] duration metric: took 4m19.340193897s to restartPrimaryControlPlane
	W0704 00:15:58.655707   62043 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0704 00:15:58.655731   62043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:16:08.187652   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:16:08.187954   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:16:30.729510   62043 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.073753748s)
	I0704 00:16:30.729594   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:16:30.747332   62043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0704 00:16:30.758903   62043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:16:30.769754   62043 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:16:30.769782   62043 kubeadm.go:156] found existing configuration files:
	
	I0704 00:16:30.769834   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:16:30.783216   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:16:30.783292   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:16:30.794254   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:16:30.804395   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:16:30.804456   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:16:30.816148   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:16:30.826591   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:16:30.826658   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:16:30.837473   62043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:16:30.847334   62043 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:16:30.847423   62043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
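The sequence above is minikube's stale-kubeconfig cleanup after kubeadm reset: each of the four kubeconfigs under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails (here the files simply no longer exist after the reset). Condensed into a sketch, with the endpoint taken from the log:

	# Sketch of the cleanup loop seen above; not part of the test run.
	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done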
	I0704 00:16:30.859291   62043 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:16:31.068598   62043 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:16:39.927189   62043 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0704 00:16:39.927297   62043 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:16:39.927381   62043 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:16:39.927496   62043 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:16:39.927641   62043 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:16:39.927747   62043 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:16:39.929258   62043 out.go:204]   - Generating certificates and keys ...
	I0704 00:16:39.929332   62043 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:16:39.929422   62043 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:16:39.929546   62043 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:16:39.929631   62043 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:16:39.929715   62043 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:16:39.929781   62043 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:16:39.929883   62043 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:16:39.929983   62043 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:16:39.930088   62043 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:16:39.930191   62043 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:16:39.930258   62043 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:16:39.930346   62043 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:16:39.930428   62043 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:16:39.930521   62043 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0704 00:16:39.930604   62043 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:16:39.930691   62043 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:16:39.930784   62043 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:16:39.930889   62043 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:16:39.930980   62043 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:16:39.933368   62043 out.go:204]   - Booting up control plane ...
	I0704 00:16:39.933482   62043 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:16:39.933577   62043 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:16:39.933657   62043 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:16:39.933769   62043 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:16:39.933857   62043 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:16:39.933920   62043 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:16:39.934046   62043 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0704 00:16:39.934156   62043 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0704 00:16:39.934219   62043 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.004952327s
	I0704 00:16:39.934310   62043 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0704 00:16:39.934393   62043 kubeadm.go:309] [api-check] The API server is healthy after 5.002935516s
	I0704 00:16:39.934509   62043 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0704 00:16:39.934646   62043 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0704 00:16:39.934725   62043 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0704 00:16:39.934894   62043 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-317739 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0704 00:16:39.934979   62043 kubeadm.go:309] [bootstrap-token] Using token: 6e60zb.ppocm8st59m5ngyp
	I0704 00:16:39.936353   62043 out.go:204]   - Configuring RBAC rules ...
	I0704 00:16:39.936457   62043 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0704 00:16:39.936546   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0704 00:16:39.936726   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0704 00:16:39.936866   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0704 00:16:39.936999   62043 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0704 00:16:39.937127   62043 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0704 00:16:39.937268   62043 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0704 00:16:39.937339   62043 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0704 00:16:39.937398   62043 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0704 00:16:39.937407   62043 kubeadm.go:309] 
	I0704 00:16:39.937486   62043 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0704 00:16:39.937500   62043 kubeadm.go:309] 
	I0704 00:16:39.937589   62043 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0704 00:16:39.937598   62043 kubeadm.go:309] 
	I0704 00:16:39.937628   62043 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0704 00:16:39.937704   62043 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0704 00:16:39.937770   62043 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0704 00:16:39.937779   62043 kubeadm.go:309] 
	I0704 00:16:39.937870   62043 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0704 00:16:39.937884   62043 kubeadm.go:309] 
	I0704 00:16:39.937953   62043 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0704 00:16:39.937966   62043 kubeadm.go:309] 
	I0704 00:16:39.938045   62043 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0704 00:16:39.938151   62043 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0704 00:16:39.938248   62043 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0704 00:16:39.938257   62043 kubeadm.go:309] 
	I0704 00:16:39.938373   62043 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0704 00:16:39.938469   62043 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0704 00:16:39.938483   62043 kubeadm.go:309] 
	I0704 00:16:39.938602   62043 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6e60zb.ppocm8st59m5ngyp \
	I0704 00:16:39.938721   62043 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 \
	I0704 00:16:39.938740   62043 kubeadm.go:309] 	--control-plane 
	I0704 00:16:39.938746   62043 kubeadm.go:309] 
	I0704 00:16:39.938820   62043 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0704 00:16:39.938829   62043 kubeadm.go:309] 
	I0704 00:16:39.938898   62043 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6e60zb.ppocm8st59m5ngyp \
	I0704 00:16:39.939042   62043 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8e13ff45eb63999bc003e7ef64ea1f8908b9cef161c98e0fba45cdbcfa40a34 
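The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. If it ever needs to be recomputed on this node, the standard kubeadm recipe can be run against the CA in the certificate directory reported earlier (/var/lib/minikube/certs); this is a generic recipe, not something the test executes, and it assumes an RSA CA key (kubeadm's default):

	# Recompute the discovery-token CA cert hash (generic recipe, not run by the test).
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'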
	I0704 00:16:39.939066   62043 cni.go:84] Creating CNI manager for ""
	I0704 00:16:39.939074   62043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0704 00:16:39.940769   62043 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0704 00:16:39.941987   62043 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0704 00:16:39.956586   62043 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
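The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration; its contents are not captured in the log. For orientation only, a typical bridge-plus-portmap conflist looks roughly like the following (illustrative values written to a scratch path, not the actual file):

	# Illustrative example only; the real 496-byte conflist is not shown in the log.
	cat <<'EOF' > /tmp/1-k8s.conflist.example
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF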
	I0704 00:16:39.980480   62043 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0704 00:16:39.980534   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:39.980553   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-317739 minikube.k8s.io/updated_at=2024_07_04T00_16_39_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e minikube.k8s.io/name=no-preload-317739 minikube.k8s.io/primary=true
	I0704 00:16:40.010512   62043 ops.go:34] apiserver oom_adj: -16
	I0704 00:16:40.194381   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:40.695349   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:41.195310   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:41.695082   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:42.194751   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:42.694568   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:43.195382   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:43.695072   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:44.195353   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:44.695020   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:45.195396   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:45.695273   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:48.189618   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:16:48.189879   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:16:48.189893   62670 kubeadm.go:309] 
	I0704 00:16:48.189956   62670 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0704 00:16:48.190000   62670 kubeadm.go:309] 		timed out waiting for the condition
	I0704 00:16:48.190006   62670 kubeadm.go:309] 
	I0704 00:16:48.190074   62670 kubeadm.go:309] 	This error is likely caused by:
	I0704 00:16:48.190142   62670 kubeadm.go:309] 		- The kubelet is not running
	I0704 00:16:48.190322   62670 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0704 00:16:48.190356   62670 kubeadm.go:309] 
	I0704 00:16:48.190487   62670 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0704 00:16:48.190540   62670 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0704 00:16:48.190594   62670 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0704 00:16:48.190603   62670 kubeadm.go:309] 
	I0704 00:16:48.190729   62670 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0704 00:16:48.190826   62670 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0704 00:16:48.190837   62670 kubeadm.go:309] 
	I0704 00:16:48.190930   62670 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0704 00:16:48.191004   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0704 00:16:48.191088   62670 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0704 00:16:48.191183   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0704 00:16:48.191195   62670 kubeadm.go:309] 
	I0704 00:16:48.192106   62670 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:16:48.192225   62670 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0704 00:16:48.192330   62670 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0704 00:16:48.192450   62670 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0704 00:16:48.192496   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0704 00:16:48.668935   62670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:16:48.685425   62670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0704 00:16:48.697089   62670 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0704 00:16:48.697111   62670 kubeadm.go:156] found existing configuration files:
	
	I0704 00:16:48.697182   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0704 00:16:48.708605   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0704 00:16:48.708681   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0704 00:16:48.720739   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0704 00:16:48.733032   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0704 00:16:48.733106   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0704 00:16:48.745632   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0704 00:16:48.756211   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0704 00:16:48.756285   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0704 00:16:48.768006   62670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0704 00:16:48.779384   62670 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0704 00:16:48.779455   62670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0704 00:16:48.791913   62670 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0704 00:16:48.873701   62670 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0704 00:16:48.873789   62670 kubeadm.go:309] [preflight] Running pre-flight checks
	I0704 00:16:49.029961   62670 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0704 00:16:49.030077   62670 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0704 00:16:49.030191   62670 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0704 00:16:49.228954   62670 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0704 00:16:49.231477   62670 out.go:204]   - Generating certificates and keys ...
	I0704 00:16:49.231594   62670 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0704 00:16:49.231678   62670 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0704 00:16:49.231783   62670 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0704 00:16:49.231855   62670 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0704 00:16:49.231990   62670 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0704 00:16:49.232082   62670 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0704 00:16:49.232167   62670 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0704 00:16:49.232930   62670 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0704 00:16:49.234476   62670 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0704 00:16:49.235558   62670 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0704 00:16:49.235951   62670 kubeadm.go:309] [certs] Using the existing "sa" key
	I0704 00:16:49.236048   62670 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0704 00:16:49.418256   62670 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0704 00:16:49.476591   62670 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0704 00:16:49.586596   62670 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0704 00:16:49.856731   62670 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0704 00:16:49.878852   62670 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0704 00:16:49.885877   62670 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0704 00:16:49.885948   62670 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0704 00:16:50.048252   62670 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0704 00:16:46.194714   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:46.695192   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:47.195476   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:47.694768   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:48.194497   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:48.695370   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:49.194707   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:49.695417   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:50.194404   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:50.694941   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:50.050273   62670 out.go:204]   - Booting up control plane ...
	I0704 00:16:50.050428   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0704 00:16:50.055514   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0704 00:16:50.056609   62670 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0704 00:16:50.057448   62670 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0704 00:16:50.060021   62670 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0704 00:16:51.194471   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:51.695481   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:52.194406   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:52.695193   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:53.194613   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:53.695053   62043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0704 00:16:53.812778   62043 kubeadm.go:1107] duration metric: took 13.832294794s to wait for elevateKubeSystemPrivileges
	W0704 00:16:53.812817   62043 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0704 00:16:53.812828   62043 kubeadm.go:393] duration metric: took 5m14.556024253s to StartCluster
	I0704 00:16:53.812849   62043 settings.go:142] acquiring lock: {Name:mkf1def63ccbbf980d681727f990f4b5f478bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:16:53.812944   62043 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0704 00:16:53.815420   62043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/kubeconfig: {Name:mkaf87debe8a4649b5774d57e368017c11eaa4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0704 00:16:53.815750   62043 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.109 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0704 00:16:53.815862   62043 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0704 00:16:53.815956   62043 addons.go:69] Setting storage-provisioner=true in profile "no-preload-317739"
	I0704 00:16:53.815987   62043 addons.go:234] Setting addon storage-provisioner=true in "no-preload-317739"
	I0704 00:16:53.815990   62043 config.go:182] Loaded profile config "no-preload-317739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	W0704 00:16:53.815998   62043 addons.go:243] addon storage-provisioner should already be in state true
	I0704 00:16:53.816029   62043 host.go:66] Checking if "no-preload-317739" exists ...
	I0704 00:16:53.816023   62043 addons.go:69] Setting default-storageclass=true in profile "no-preload-317739"
	I0704 00:16:53.816052   62043 addons.go:69] Setting metrics-server=true in profile "no-preload-317739"
	I0704 00:16:53.816063   62043 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-317739"
	I0704 00:16:53.816091   62043 addons.go:234] Setting addon metrics-server=true in "no-preload-317739"
	W0704 00:16:53.816104   62043 addons.go:243] addon metrics-server should already be in state true
	I0704 00:16:53.816139   62043 host.go:66] Checking if "no-preload-317739" exists ...
	I0704 00:16:53.816491   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.816512   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.816491   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.816561   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.816590   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.816605   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.817558   62043 out.go:177] * Verifying Kubernetes components...
	I0704 00:16:53.818908   62043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0704 00:16:53.836028   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I0704 00:16:53.836591   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.837131   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.837162   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.837199   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37289
	I0704 00:16:53.837270   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39901
	I0704 00:16:53.837613   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.837621   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.837980   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.838004   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.838066   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.838265   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.838302   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.838330   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.838533   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.838555   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.838612   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.838911   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.839349   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.839374   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.842221   62043 addons.go:234] Setting addon default-storageclass=true in "no-preload-317739"
	W0704 00:16:53.842240   62043 addons.go:243] addon default-storageclass should already be in state true
	I0704 00:16:53.842267   62043 host.go:66] Checking if "no-preload-317739" exists ...
	I0704 00:16:53.842587   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.842606   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.854293   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36813
	I0704 00:16:53.855044   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.855658   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.855675   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.856226   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.856425   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.858286   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42143
	I0704 00:16:53.858484   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:16:53.858667   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.859270   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.859293   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.859815   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.860358   62043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0704 00:16:53.860380   62043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0704 00:16:53.860383   62043 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0704 00:16:53.861890   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0704 00:16:53.861914   62043 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0704 00:16:53.861937   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:16:53.864121   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36239
	I0704 00:16:53.864570   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.865343   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.865366   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.865859   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.866064   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.866282   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.866379   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:16:53.866407   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.866572   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:16:53.866780   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:16:53.866996   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:16:53.867166   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:16:53.868067   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:16:53.869898   62043 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0704 00:16:53.871321   62043 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:16:53.871339   62043 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0704 00:16:53.871355   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:16:53.874930   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.875361   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:16:53.875392   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.875623   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:16:53.875841   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:16:53.876024   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:16:53.876184   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:16:53.880965   62043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45769
	I0704 00:16:53.881655   62043 main.go:141] libmachine: () Calling .GetVersion
	I0704 00:16:53.882115   62043 main.go:141] libmachine: Using API Version  1
	I0704 00:16:53.882130   62043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0704 00:16:53.882471   62043 main.go:141] libmachine: () Calling .GetMachineName
	I0704 00:16:53.882659   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetState
	I0704 00:16:53.884596   62043 main.go:141] libmachine: (no-preload-317739) Calling .DriverName
	I0704 00:16:53.884855   62043 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0704 00:16:53.884866   62043 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0704 00:16:53.884879   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHHostname
	I0704 00:16:53.887764   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.888336   62043 main.go:141] libmachine: (no-preload-317739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:87:12", ip: ""} in network mk-no-preload-317739: {Iface:virbr3 ExpiryTime:2024-07-04 01:00:59 +0000 UTC Type:0 Mac:52:54:00:2a:87:12 Iaid: IPaddr:192.168.61.109 Prefix:24 Hostname:no-preload-317739 Clientid:01:52:54:00:2a:87:12}
	I0704 00:16:53.888371   62043 main.go:141] libmachine: (no-preload-317739) DBG | domain no-preload-317739 has defined IP address 192.168.61.109 and MAC address 52:54:00:2a:87:12 in network mk-no-preload-317739
	I0704 00:16:53.888411   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHPort
	I0704 00:16:53.888619   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHKeyPath
	I0704 00:16:53.888749   62043 main.go:141] libmachine: (no-preload-317739) Calling .GetSSHUsername
	I0704 00:16:53.888849   62043 sshutil.go:53] new ssh client: &{IP:192.168.61.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/no-preload-317739/id_rsa Username:docker}
	I0704 00:16:54.097387   62043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0704 00:16:54.122578   62043 node_ready.go:35] waiting up to 6m0s for node "no-preload-317739" to be "Ready" ...
	I0704 00:16:54.136010   62043 node_ready.go:49] node "no-preload-317739" has status "Ready":"True"
	I0704 00:16:54.136036   62043 node_ready.go:38] duration metric: took 13.422954ms for node "no-preload-317739" to be "Ready" ...
	I0704 00:16:54.136048   62043 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:16:54.141532   62043 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:54.200381   62043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0704 00:16:54.234350   62043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0704 00:16:54.284641   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0704 00:16:54.284664   62043 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0704 00:16:54.346056   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0704 00:16:54.346081   62043 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0704 00:16:54.424564   62043 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:16:54.424593   62043 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0704 00:16:54.496088   62043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0704 00:16:54.977271   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977304   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977308   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977327   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977603   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:54.977640   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977640   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977647   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:54.977654   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:54.977657   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977663   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:54.977665   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977710   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:54.977756   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:54.977935   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977947   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:54.977959   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:54.977991   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:54.977999   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.037104   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:55.037130   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:55.037591   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:55.037626   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:55.037639   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.331464   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:55.331492   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:55.331859   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:55.331895   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.331903   62043 main.go:141] libmachine: Making call to close driver server
	I0704 00:16:55.331911   62043 main.go:141] libmachine: (no-preload-317739) Calling .Close
	I0704 00:16:55.331926   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:55.332178   62043 main.go:141] libmachine: (no-preload-317739) DBG | Closing plugin on server side
	I0704 00:16:55.332245   62043 main.go:141] libmachine: Successfully made call to close driver server
	I0704 00:16:55.332262   62043 main.go:141] libmachine: Making call to close connection to plugin binary
	I0704 00:16:55.332280   62043 addons.go:475] Verifying addon metrics-server=true in "no-preload-317739"
	I0704 00:16:55.334057   62043 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0704 00:16:55.335756   62043 addons.go:510] duration metric: took 1.519891021s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0704 00:16:56.152756   62043 pod_ready.go:102] pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace has status "Ready":"False"
	I0704 00:16:56.650840   62043 pod_ready.go:92] pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.650866   62043 pod_ready.go:81] duration metric: took 2.509305019s for pod "coredns-7db6d8ff4d-cxq59" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.650876   62043 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qnrtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.656253   62043 pod_ready.go:92] pod "coredns-7db6d8ff4d-qnrtm" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.656276   62043 pod_ready.go:81] duration metric: took 5.391742ms for pod "coredns-7db6d8ff4d-qnrtm" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.656285   62043 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.661076   62043 pod_ready.go:92] pod "etcd-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.661097   62043 pod_ready.go:81] duration metric: took 4.806155ms for pod "etcd-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.661105   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.666895   62043 pod_ready.go:92] pod "kube-apiserver-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.666923   62043 pod_ready.go:81] duration metric: took 5.809974ms for pod "kube-apiserver-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.666936   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.671252   62043 pod_ready.go:92] pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:56.671277   62043 pod_ready.go:81] duration metric: took 4.332286ms for pod "kube-controller-manager-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:56.671289   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xxfrd" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.046037   62043 pod_ready.go:92] pod "kube-proxy-xxfrd" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:57.046062   62043 pod_ready.go:81] duration metric: took 374.766496ms for pod "kube-proxy-xxfrd" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.046072   62043 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.446038   62043 pod_ready.go:92] pod "kube-scheduler-no-preload-317739" in "kube-system" namespace has status "Ready":"True"
	I0704 00:16:57.446063   62043 pod_ready.go:81] duration metric: took 399.983632ms for pod "kube-scheduler-no-preload-317739" in "kube-system" namespace to be "Ready" ...
	I0704 00:16:57.446071   62043 pod_ready.go:38] duration metric: took 3.310013568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0704 00:16:57.446085   62043 api_server.go:52] waiting for apiserver process to appear ...
	I0704 00:16:57.446131   62043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0704 00:16:57.461033   62043 api_server.go:72] duration metric: took 3.645241569s to wait for apiserver process to appear ...
	I0704 00:16:57.461057   62043 api_server.go:88] waiting for apiserver healthz status ...
	I0704 00:16:57.461075   62043 api_server.go:253] Checking apiserver healthz at https://192.168.61.109:8443/healthz ...
	I0704 00:16:57.465509   62043 api_server.go:279] https://192.168.61.109:8443/healthz returned 200:
	ok
	I0704 00:16:57.466733   62043 api_server.go:141] control plane version: v1.30.2
	I0704 00:16:57.466755   62043 api_server.go:131] duration metric: took 5.690997ms to wait for apiserver health ...
	I0704 00:16:57.466764   62043 system_pods.go:43] waiting for kube-system pods to appear ...
	I0704 00:16:57.651488   62043 system_pods.go:59] 9 kube-system pods found
	I0704 00:16:57.651514   62043 system_pods.go:61] "coredns-7db6d8ff4d-cxq59" [a7d3a64b-8d7d-455c-990c-0e496f8cf461] Running
	I0704 00:16:57.651519   62043 system_pods.go:61] "coredns-7db6d8ff4d-qnrtm" [3ab51a52-571c-4533-86d2-7293368ac2ee] Running
	I0704 00:16:57.651522   62043 system_pods.go:61] "etcd-no-preload-317739" [d1cdc7d3-b6b9-4fbf-a9a4-94309df12aec] Running
	I0704 00:16:57.651525   62043 system_pods.go:61] "kube-apiserver-no-preload-317739" [6caeae1f-258b-4d8f-8922-73eb94eb92cb] Running
	I0704 00:16:57.651528   62043 system_pods.go:61] "kube-controller-manager-no-preload-317739" [28b1a875-8c1d-41a3-8b0b-6e1b7a584a03] Running
	I0704 00:16:57.651531   62043 system_pods.go:61] "kube-proxy-xxfrd" [29b1b3ed-9c18-4fae-bf43-5da22cf90f6b] Running
	I0704 00:16:57.651533   62043 system_pods.go:61] "kube-scheduler-no-preload-317739" [bdb0442c-9e42-4e09-93cc-0e8dc067eaff] Running
	I0704 00:16:57.651541   62043 system_pods.go:61] "metrics-server-569cc877fc-t28ff" [942f97bf-57cf-46fe-9a10-4a4171357239] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:16:57.651549   62043 system_pods.go:61] "storage-provisioner" [d2ab9324-5df0-4232-aef4-be29bfc4c082] Running
	I0704 00:16:57.651559   62043 system_pods.go:74] duration metric: took 184.788668ms to wait for pod list to return data ...
	I0704 00:16:57.651573   62043 default_sa.go:34] waiting for default service account to be created ...
	I0704 00:16:57.845632   62043 default_sa.go:45] found service account: "default"
	I0704 00:16:57.845665   62043 default_sa.go:55] duration metric: took 194.081328ms for default service account to be created ...
	I0704 00:16:57.845678   62043 system_pods.go:116] waiting for k8s-apps to be running ...
	I0704 00:16:58.050844   62043 system_pods.go:86] 9 kube-system pods found
	I0704 00:16:58.050873   62043 system_pods.go:89] "coredns-7db6d8ff4d-cxq59" [a7d3a64b-8d7d-455c-990c-0e496f8cf461] Running
	I0704 00:16:58.050878   62043 system_pods.go:89] "coredns-7db6d8ff4d-qnrtm" [3ab51a52-571c-4533-86d2-7293368ac2ee] Running
	I0704 00:16:58.050882   62043 system_pods.go:89] "etcd-no-preload-317739" [d1cdc7d3-b6b9-4fbf-a9a4-94309df12aec] Running
	I0704 00:16:58.050887   62043 system_pods.go:89] "kube-apiserver-no-preload-317739" [6caeae1f-258b-4d8f-8922-73eb94eb92cb] Running
	I0704 00:16:58.050891   62043 system_pods.go:89] "kube-controller-manager-no-preload-317739" [28b1a875-8c1d-41a3-8b0b-6e1b7a584a03] Running
	I0704 00:16:58.050896   62043 system_pods.go:89] "kube-proxy-xxfrd" [29b1b3ed-9c18-4fae-bf43-5da22cf90f6b] Running
	I0704 00:16:58.050900   62043 system_pods.go:89] "kube-scheduler-no-preload-317739" [bdb0442c-9e42-4e09-93cc-0e8dc067eaff] Running
	I0704 00:16:58.050906   62043 system_pods.go:89] "metrics-server-569cc877fc-t28ff" [942f97bf-57cf-46fe-9a10-4a4171357239] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0704 00:16:58.050911   62043 system_pods.go:89] "storage-provisioner" [d2ab9324-5df0-4232-aef4-be29bfc4c082] Running
	I0704 00:16:58.050918   62043 system_pods.go:126] duration metric: took 205.235998ms to wait for k8s-apps to be running ...
	I0704 00:16:58.050925   62043 system_svc.go:44] waiting for kubelet service to be running ....
	I0704 00:16:58.050969   62043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0704 00:16:58.066005   62043 system_svc.go:56] duration metric: took 15.072089ms WaitForService to wait for kubelet
	I0704 00:16:58.066036   62043 kubeadm.go:576] duration metric: took 4.250246725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0704 00:16:58.066060   62043 node_conditions.go:102] verifying NodePressure condition ...
	I0704 00:16:58.245974   62043 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0704 00:16:58.245998   62043 node_conditions.go:123] node cpu capacity is 2
	I0704 00:16:58.246009   62043 node_conditions.go:105] duration metric: took 179.943846ms to run NodePressure ...
	I0704 00:16:58.246020   62043 start.go:240] waiting for startup goroutines ...
	I0704 00:16:58.246026   62043 start.go:245] waiting for cluster config update ...
	I0704 00:16:58.246036   62043 start.go:254] writing updated cluster config ...
	I0704 00:16:58.246307   62043 ssh_runner.go:195] Run: rm -f paused
	I0704 00:16:58.298998   62043 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0704 00:16:58.301199   62043 out.go:177] * Done! kubectl is now configured to use "no-preload-317739" cluster and "default" namespace by default
	I0704 00:17:30.062515   62670 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0704 00:17:30.062908   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:30.063105   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:17:35.063408   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:35.063668   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:17:45.064118   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:17:45.064391   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:05.065047   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:18:05.065263   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:45.064458   62670 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0704 00:18:45.064676   62670 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0704 00:18:45.064703   62670 kubeadm.go:309] 
	I0704 00:18:45.064756   62670 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0704 00:18:45.064825   62670 kubeadm.go:309] 		timed out waiting for the condition
	I0704 00:18:45.064842   62670 kubeadm.go:309] 
	I0704 00:18:45.064918   62670 kubeadm.go:309] 	This error is likely caused by:
	I0704 00:18:45.064954   62670 kubeadm.go:309] 		- The kubelet is not running
	I0704 00:18:45.065086   62670 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0704 00:18:45.065110   62670 kubeadm.go:309] 
	I0704 00:18:45.065271   62670 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0704 00:18:45.065326   62670 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0704 00:18:45.065392   62670 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0704 00:18:45.065401   62670 kubeadm.go:309] 
	I0704 00:18:45.065550   62670 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0704 00:18:45.065631   62670 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0704 00:18:45.065638   62670 kubeadm.go:309] 
	I0704 00:18:45.065734   62670 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0704 00:18:45.065807   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0704 00:18:45.065871   62670 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0704 00:18:45.065939   62670 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0704 00:18:45.065947   62670 kubeadm.go:309] 
	I0704 00:18:45.066520   62670 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0704 00:18:45.066601   62670 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0704 00:18:45.066689   62670 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0704 00:18:45.066780   62670 kubeadm.go:393] duration metric: took 7m58.506286251s to StartCluster
	I0704 00:18:45.066839   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0704 00:18:45.066927   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0704 00:18:45.120297   62670 cri.go:89] found id: ""
	I0704 00:18:45.120326   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.120334   62670 logs.go:278] No container was found matching "kube-apiserver"
	I0704 00:18:45.120339   62670 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0704 00:18:45.120402   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0704 00:18:45.158038   62670 cri.go:89] found id: ""
	I0704 00:18:45.158064   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.158074   62670 logs.go:278] No container was found matching "etcd"
	I0704 00:18:45.158082   62670 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0704 00:18:45.158138   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0704 00:18:45.195937   62670 cri.go:89] found id: ""
	I0704 00:18:45.195967   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.195978   62670 logs.go:278] No container was found matching "coredns"
	I0704 00:18:45.195985   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0704 00:18:45.196043   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0704 00:18:45.236822   62670 cri.go:89] found id: ""
	I0704 00:18:45.236842   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.236850   62670 logs.go:278] No container was found matching "kube-scheduler"
	I0704 00:18:45.236856   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0704 00:18:45.236901   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0704 00:18:45.277811   62670 cri.go:89] found id: ""
	I0704 00:18:45.277840   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.277848   62670 logs.go:278] No container was found matching "kube-proxy"
	I0704 00:18:45.277854   62670 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0704 00:18:45.277915   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0704 00:18:45.318942   62670 cri.go:89] found id: ""
	I0704 00:18:45.318972   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.318984   62670 logs.go:278] No container was found matching "kube-controller-manager"
	I0704 00:18:45.318994   62670 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0704 00:18:45.319058   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0704 00:18:45.360745   62670 cri.go:89] found id: ""
	I0704 00:18:45.360772   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.360780   62670 logs.go:278] No container was found matching "kindnet"
	I0704 00:18:45.360785   62670 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0704 00:18:45.360843   62670 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0704 00:18:45.405336   62670 cri.go:89] found id: ""
	I0704 00:18:45.405359   62670 logs.go:276] 0 containers: []
	W0704 00:18:45.405370   62670 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0704 00:18:45.405381   62670 logs.go:123] Gathering logs for CRI-O ...
	I0704 00:18:45.405400   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0704 00:18:45.514196   62670 logs.go:123] Gathering logs for container status ...
	I0704 00:18:45.514237   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0704 00:18:45.560207   62670 logs.go:123] Gathering logs for kubelet ...
	I0704 00:18:45.560235   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0704 00:18:45.615066   62670 logs.go:123] Gathering logs for dmesg ...
	I0704 00:18:45.615113   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0704 00:18:45.630701   62670 logs.go:123] Gathering logs for describe nodes ...
	I0704 00:18:45.630731   62670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0704 00:18:45.725249   62670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0704 00:18:45.725281   62670 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0704 00:18:45.725315   62670 out.go:239] * 
	W0704 00:18:45.725360   62670 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0704 00:18:45.725383   62670 out.go:239] * 
	W0704 00:18:45.726603   62670 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0704 00:18:45.729981   62670 out.go:177] 
	W0704 00:18:45.731124   62670 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0704 00:18:45.731169   62670 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0704 00:18:45.731186   62670 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0704 00:18:45.732514   62670 out.go:177] 
	
	
	==> CRI-O <==
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.312318844Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052988312291177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5aee3045-ad3f-4f6a-b226-ba437463fa96 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.313170267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3fd3723-9317-41cf-acf8-049485f78ae8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.313253196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3fd3723-9317-41cf-acf8-049485f78ae8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.313310032Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c3fd3723-9317-41cf-acf8-049485f78ae8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.352135508Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8293b7f-d9dc-4a00-83c1-1f669e81986f name=/runtime.v1.RuntimeService/Version
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.352279074Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8293b7f-d9dc-4a00-83c1-1f669e81986f name=/runtime.v1.RuntimeService/Version
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.353723852Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ceda94f4-182e-4a03-a62b-062d3fbbda01 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.354303816Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052988354269571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ceda94f4-182e-4a03-a62b-062d3fbbda01 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.354952556Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d4d8945-92da-4a1f-96f5-306ddbcf5090 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.355073575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d4d8945-92da-4a1f-96f5-306ddbcf5090 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.355121461Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5d4d8945-92da-4a1f-96f5-306ddbcf5090 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.392961074Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10214492-bd97-4cc1-ac0c-cb3a64f330b1 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.393082670Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10214492-bd97-4cc1-ac0c-cb3a64f330b1 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.394433565Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98ea0ca7-6a6f-4a5f-8b16-a6859f55d8c3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.395018111Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052988394981948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98ea0ca7-6a6f-4a5f-8b16-a6859f55d8c3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.395761343Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=30f6300f-3c84-4466-9886-250ab9bacfb2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.395863597Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30f6300f-3c84-4466-9886-250ab9bacfb2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.395915364Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=30f6300f-3c84-4466-9886-250ab9bacfb2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.440711293Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=68499c69-4e6d-45ba-b71d-c5c1aad3f2f3 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.440811658Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68499c69-4e6d-45ba-b71d-c5c1aad3f2f3 name=/runtime.v1.RuntimeService/Version
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.442138467Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7762c29a-6e39-4a6a-925c-f120f13207fa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.442613516Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720052988442583529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7762c29a-6e39-4a6a-925c-f120f13207fa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.443225108Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c887968-f779-4a9c-b89a-51275a9dcb99 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.443294625Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c887968-f779-4a9c-b89a-51275a9dcb99 name=/runtime.v1.RuntimeService/ListContainers
	Jul 04 00:29:48 old-k8s-version-979033 crio[644]: time="2024-07-04 00:29:48.443371607Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7c887968-f779-4a9c-b89a-51275a9dcb99 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul 4 00:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054432] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041342] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.731817] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.437901] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.394657] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.740177] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.073688] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074920] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.184099] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.154476] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.272154] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.964143] systemd-fstab-generator[836]: Ignoring "noauto" option for root device
	[  +0.063078] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.822817] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[Jul 4 00:11] kauditd_printk_skb: 46 callbacks suppressed
	[Jul 4 00:14] systemd-fstab-generator[4947]: Ignoring "noauto" option for root device
	[Jul 4 00:16] systemd-fstab-generator[5229]: Ignoring "noauto" option for root device
	[  +0.072411] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 00:29:48 up 19 min,  0 users,  load average: 0.04, 0.04, 0.00
	Linux old-k8s-version-979033 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 04 00:29:47 old-k8s-version-979033 kubelet[6693]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000169cb0)
	Jul 04 00:29:47 old-k8s-version-979033 kubelet[6693]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jul 04 00:29:47 old-k8s-version-979033 kubelet[6693]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jul 04 00:29:47 old-k8s-version-979033 kubelet[6693]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jul 04 00:29:47 old-k8s-version-979033 kubelet[6693]: goroutine 154 [select]:
	Jul 04 00:29:47 old-k8s-version-979033 kubelet[6693]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000983ef0, 0x4f0ac20, 0xc0007218b0, 0x1, 0xc00009e0c0)
	Jul 04 00:29:47 old-k8s-version-979033 kubelet[6693]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jul 04 00:29:47 old-k8s-version-979033 kubelet[6693]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00073a2a0, 0xc00009e0c0)
	Jul 04 00:29:47 old-k8s-version-979033 kubelet[6693]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 04 00:29:47 old-k8s-version-979033 kubelet[6693]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 04 00:29:47 old-k8s-version-979033 kubelet[6693]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 04 00:29:47 old-k8s-version-979033 kubelet[6693]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc00021ef80, 0xc0007829a0)
	Jul 04 00:29:47 old-k8s-version-979033 kubelet[6693]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 04 00:29:47 old-k8s-version-979033 kubelet[6693]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 04 00:29:47 old-k8s-version-979033 kubelet[6693]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 04 00:29:47 old-k8s-version-979033 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 04 00:29:47 old-k8s-version-979033 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 04 00:29:48 old-k8s-version-979033 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 135.
	Jul 04 00:29:48 old-k8s-version-979033 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 04 00:29:48 old-k8s-version-979033 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 04 00:29:48 old-k8s-version-979033 kubelet[6760]: I0704 00:29:48.538031    6760 server.go:416] Version: v1.20.0
	Jul 04 00:29:48 old-k8s-version-979033 kubelet[6760]: I0704 00:29:48.538493    6760 server.go:837] Client rotation is on, will bootstrap in background
	Jul 04 00:29:48 old-k8s-version-979033 kubelet[6760]: I0704 00:29:48.540650    6760 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 04 00:29:48 old-k8s-version-979033 kubelet[6760]: W0704 00:29:48.541676    6760 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 04 00:29:48 old-k8s-version-979033 kubelet[6760]: I0704 00:29:48.542099    6760 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-979033 -n old-k8s-version-979033
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-979033 -n old-k8s-version-979033: exit status 2 (232.312399ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-979033" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (117.03s)
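The failure above is the kubelet boot-loop visible throughout the old-k8s-version (v1.20.0) profile: kubeadm's wait-control-plane phase times out because the kubelet on old-k8s-version-979033 keeps exiting (systemd shows the restart counter at 135 and the kubelet logs "Cannot detect current cgroup on cgroup v2"). As a sketch only, using the commands the kubeadm and minikube output above already suggests (the node name and flags are taken from that output and are not re-verified here):

	# inside the node, e.g. via: minikube ssh -p old-k8s-version-979033
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 50
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# retry with the cgroup driver the suggestion above recommends
	minikube start -p old-k8s-version-979033 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

Whether the systemd cgroup-driver setting actually resolves this run is not established by the log; the related issue linked in the output (kubernetes/minikube#4172) tracks the same symptom.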

                                                
                                    

Test pass (244/312)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 42.45
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.2/json-events 13.18
13 TestDownloadOnly/v1.30.2/preload-exists 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.06
18 TestDownloadOnly/v1.30.2/DeleteAll 0.13
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.56
22 TestOffline 122.26
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 212.84
29 TestAddons/parallel/Registry 21
31 TestAddons/parallel/InspektorGadget 12.04
33 TestAddons/parallel/HelmTiller 12
35 TestAddons/parallel/CSI 61.76
36 TestAddons/parallel/Headlamp 13.97
37 TestAddons/parallel/CloudSpanner 6.6
38 TestAddons/parallel/LocalPath 55.51
39 TestAddons/parallel/NvidiaDevicePlugin 6.58
40 TestAddons/parallel/Yakd 6.01
44 TestAddons/serial/GCPAuth/Namespaces 0.12
46 TestCertOptions 74.89
47 TestCertExpiration 244.37
49 TestForceSystemdFlag 56.41
50 TestForceSystemdEnv 50.67
52 TestKVMDriverInstallOrUpdate 4.9
56 TestErrorSpam/setup 44.83
57 TestErrorSpam/start 0.35
58 TestErrorSpam/status 0.76
59 TestErrorSpam/pause 1.62
60 TestErrorSpam/unpause 1.67
61 TestErrorSpam/stop 4.6
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 93.57
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 41.23
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.21
73 TestFunctional/serial/CacheCmd/cache/add_local 2.31
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.62
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 33.51
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.44
84 TestFunctional/serial/LogsFileCmd 1.48
85 TestFunctional/serial/InvalidService 4.41
87 TestFunctional/parallel/ConfigCmd 0.31
88 TestFunctional/parallel/DashboardCmd 35.35
89 TestFunctional/parallel/DryRun 0.29
90 TestFunctional/parallel/InternationalLanguage 0.16
91 TestFunctional/parallel/StatusCmd 1.32
95 TestFunctional/parallel/ServiceCmdConnect 11.6
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 50.62
99 TestFunctional/parallel/SSHCmd 0.39
100 TestFunctional/parallel/CpCmd 1.34
101 TestFunctional/parallel/MySQL 24.67
102 TestFunctional/parallel/FileSync 0.27
103 TestFunctional/parallel/CertSync 1.81
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
111 TestFunctional/parallel/License 0.68
121 TestFunctional/parallel/ServiceCmd/DeployApp 11.2
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
123 TestFunctional/parallel/ProfileCmd/profile_list 0.44
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
125 TestFunctional/parallel/MountCmd/any-port 8.53
126 TestFunctional/parallel/MountCmd/specific-port 1.81
127 TestFunctional/parallel/ServiceCmd/List 0.37
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.39
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
130 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
131 TestFunctional/parallel/ServiceCmd/Format 0.39
132 TestFunctional/parallel/ServiceCmd/URL 0.4
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.29
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.37
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
140 TestFunctional/parallel/ImageCommands/ImageBuild 5.92
141 TestFunctional/parallel/ImageCommands/Setup 2.47
142 TestFunctional/parallel/Version/short 0.05
143 TestFunctional/parallel/Version/components 0.74
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.78
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.37
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.33
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.76
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.11
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.04
151 TestFunctional/delete_addon-resizer_images 0.07
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 214.41
158 TestMultiControlPlane/serial/DeployApp 7.69
159 TestMultiControlPlane/serial/PingHostFromPods 1.23
160 TestMultiControlPlane/serial/AddWorkerNode 49.26
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
163 TestMultiControlPlane/serial/CopyFile 12.81
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.06
169 TestMultiControlPlane/serial/DeleteSecondaryNode 17.23
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
172 TestMultiControlPlane/serial/RestartCluster 349.92
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
174 TestMultiControlPlane/serial/AddSecondaryNode 72.13
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
179 TestJSONOutput/start/Command 58.58
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.76
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.69
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.38
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.19
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 98.61
211 TestMountStart/serial/StartWithMountFirst 24.44
212 TestMountStart/serial/VerifyMountFirst 0.37
213 TestMountStart/serial/StartWithMountSecond 28.75
214 TestMountStart/serial/VerifyMountSecond 0.37
215 TestMountStart/serial/DeleteFirst 0.69
216 TestMountStart/serial/VerifyMountPostDelete 0.37
217 TestMountStart/serial/Stop 1.28
218 TestMountStart/serial/RestartStopped 23.06
219 TestMountStart/serial/VerifyMountPostStop 0.36
222 TestMultiNode/serial/FreshStart2Nodes 97.15
223 TestMultiNode/serial/DeployApp2Nodes 5.57
224 TestMultiNode/serial/PingHostFrom2Pods 0.81
225 TestMultiNode/serial/AddNode 42.36
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.58
228 TestMultiNode/serial/CopyFile 7.18
229 TestMultiNode/serial/StopNode 2.28
230 TestMultiNode/serial/StartAfterStop 28.62
232 TestMultiNode/serial/DeleteNode 2.13
234 TestMultiNode/serial/RestartMultiNode 171.97
235 TestMultiNode/serial/ValidateNameConflict 47.86
242 TestScheduledStopUnix 114.36
246 TestRunningBinaryUpgrade 221.36
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
252 TestNoKubernetes/serial/StartWithK8s 96.63
253 TestNoKubernetes/serial/StartWithStopK8s 9.85
254 TestNoKubernetes/serial/Start 27.12
255 TestStoppedBinaryUpgrade/Setup 2.59
256 TestStoppedBinaryUpgrade/Upgrade 122.61
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
258 TestNoKubernetes/serial/ProfileList 1.39
259 TestNoKubernetes/serial/Stop 1.34
260 TestNoKubernetes/serial/StartNoArgs 43.53
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
270 TestPause/serial/Start 57.09
272 TestStoppedBinaryUpgrade/MinikubeLogs 0.91
280 TestNetworkPlugins/group/false 5.35
287 TestStartStop/group/no-preload/serial/FirstStart 123.58
289 TestStartStop/group/embed-certs/serial/FirstStart 95.43
290 TestStartStop/group/no-preload/serial/DeployApp 10.38
292 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 100.54
293 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
295 TestStartStop/group/embed-certs/serial/DeployApp 13.37
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.12
298 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.29
299 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1
304 TestStartStop/group/no-preload/serial/SecondStart 697.78
306 TestStartStop/group/embed-certs/serial/SecondStart 545.46
307 TestStartStop/group/old-k8s-version/serial/Stop 2.34
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
311 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 515.04
321 TestStartStop/group/newest-cni/serial/FirstStart 59.99
322 TestNetworkPlugins/group/auto/Start 95.58
323 TestStartStop/group/newest-cni/serial/DeployApp 0
324 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.93
325 TestStartStop/group/newest-cni/serial/Stop 7.33
326 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
327 TestStartStop/group/newest-cni/serial/SecondStart 38.95
328 TestNetworkPlugins/group/kindnet/Start 65.69
329 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
332 TestStartStop/group/newest-cni/serial/Pause 2.66
333 TestNetworkPlugins/group/calico/Start 110.77
334 TestNetworkPlugins/group/auto/KubeletFlags 0.24
335 TestNetworkPlugins/group/auto/NetCatPod 12.26
336 TestNetworkPlugins/group/auto/DNS 0.18
337 TestNetworkPlugins/group/auto/Localhost 0.15
338 TestNetworkPlugins/group/auto/HairPin 0.15
339 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
340 TestNetworkPlugins/group/custom-flannel/Start 90.65
341 TestNetworkPlugins/group/flannel/Start 114.71
342 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
343 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
344 TestNetworkPlugins/group/kindnet/DNS 0.17
345 TestNetworkPlugins/group/kindnet/Localhost 0.14
346 TestNetworkPlugins/group/kindnet/HairPin 0.15
347 TestNetworkPlugins/group/bridge/Start 127.29
348 TestNetworkPlugins/group/calico/ControllerPod 6.01
349 TestNetworkPlugins/group/calico/KubeletFlags 0.21
350 TestNetworkPlugins/group/calico/NetCatPod 12.24
351 TestNetworkPlugins/group/calico/DNS 0.19
352 TestNetworkPlugins/group/calico/Localhost 0.17
353 TestNetworkPlugins/group/calico/HairPin 0.16
354 TestNetworkPlugins/group/enable-default-cni/Start 101.43
355 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
356 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.58
357 TestNetworkPlugins/group/custom-flannel/DNS 0.22
358 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
359 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
360 TestNetworkPlugins/group/flannel/ControllerPod 6.01
361 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
362 TestNetworkPlugins/group/flannel/NetCatPod 11.25
363 TestNetworkPlugins/group/flannel/DNS 0.21
364 TestNetworkPlugins/group/flannel/Localhost 0.16
365 TestNetworkPlugins/group/flannel/HairPin 0.13
366 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
367 TestNetworkPlugins/group/bridge/NetCatPod 10.22
368 TestNetworkPlugins/group/bridge/DNS 0.16
369 TestNetworkPlugins/group/bridge/Localhost 0.13
370 TestNetworkPlugins/group/bridge/HairPin 0.13
371 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
372 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.24
373 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
374 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
375 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
x
+
TestDownloadOnly/v1.20.0/json-events (42.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-666511 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-666511 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (42.447145302s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (42.45s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0703 22:47:29.548148   16574 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0703 22:47:29.548224   16574 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-666511
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-666511: exit status 85 (57.589451ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-666511 | jenkins | v1.33.1 | 03 Jul 24 22:46 UTC |          |
	|         | -p download-only-666511        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 22:46:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 22:46:47.138652   16587 out.go:291] Setting OutFile to fd 1 ...
	I0703 22:46:47.138904   16587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:46:47.138914   16587 out.go:304] Setting ErrFile to fd 2...
	I0703 22:46:47.138920   16587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:46:47.139111   16587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	W0703 22:46:47.139254   16587 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18998-9396/.minikube/config/config.json: open /home/jenkins/minikube-integration/18998-9396/.minikube/config/config.json: no such file or directory
	I0703 22:46:47.139843   16587 out.go:298] Setting JSON to true
	I0703 22:46:47.140760   16587 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1747,"bootTime":1720045060,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 22:46:47.140825   16587 start.go:139] virtualization: kvm guest
	I0703 22:46:47.143174   16587 out.go:97] [download-only-666511] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0703 22:46:47.143268   16587 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball: no such file or directory
	I0703 22:46:47.143310   16587 notify.go:220] Checking for updates...
	I0703 22:46:47.144790   16587 out.go:169] MINIKUBE_LOCATION=18998
	I0703 22:46:47.146158   16587 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 22:46:47.147518   16587 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 22:46:47.148909   16587 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 22:46:47.150107   16587 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0703 22:46:47.152462   16587 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0703 22:46:47.152690   16587 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 22:46:47.253920   16587 out.go:97] Using the kvm2 driver based on user configuration
	I0703 22:46:47.253951   16587 start.go:297] selected driver: kvm2
	I0703 22:46:47.253959   16587 start.go:901] validating driver "kvm2" against <nil>
	I0703 22:46:47.254299   16587 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 22:46:47.254454   16587 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 22:46:47.269241   16587 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 22:46:47.269321   16587 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0703 22:46:47.269848   16587 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0703 22:46:47.269999   16587 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0703 22:46:47.270067   16587 cni.go:84] Creating CNI manager for ""
	I0703 22:46:47.270082   16587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 22:46:47.270090   16587 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0703 22:46:47.270145   16587 start.go:340] cluster config:
	{Name:download-only-666511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-666511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 22:46:47.270347   16587 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 22:46:47.272138   16587 out.go:97] Downloading VM boot image ...
	I0703 22:46:47.272206   16587 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18998-9396/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0703 22:46:57.403714   16587 out.go:97] Starting "download-only-666511" primary control-plane node in "download-only-666511" cluster
	I0703 22:46:57.403748   16587 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0703 22:46:57.514468   16587 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0703 22:46:57.514505   16587 cache.go:56] Caching tarball of preloaded images
	I0703 22:46:57.514648   16587 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0703 22:46:57.516868   16587 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0703 22:46:57.516899   16587 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0703 22:46:57.634936   16587 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0703 22:47:12.148411   16587 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0703 22:47:12.148505   16587 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0703 22:47:13.184944   16587 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0703 22:47:13.185297   16587 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/download-only-666511/config.json ...
	I0703 22:47:13.185330   16587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/download-only-666511/config.json: {Name:mk637dbd28a2bf08fb3f375161f829c9d2060b35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:47:13.185498   16587 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0703 22:47:13.185679   16587 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18998-9396/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-666511 host does not exist
	  To start a cluster, run: "minikube start -p download-only-666511"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
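The download-only run above populates the .minikube cache with the boot ISO, the v1.20.0 preload tarball, and the kubectl binary. A minimal manual spot-check of the cached preload, reusing the path and the md5 value embedded in the download URL logged above (a sketch, not part of the test):

	cd /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball
	md5sum preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	# expected digest, from the ?checksum=md5:... query string above: f93b07cde9c3289306cbaeb7a1803c19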

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-666511
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/json-events (13.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-240360 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-240360 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.176172327s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (13.18s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/preload-exists
I0703 22:47:43.033723   16574 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
I0703 22:47:43.033759   16574 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-240360
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-240360: exit status 85 (57.953459ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-666511 | jenkins | v1.33.1 | 03 Jul 24 22:46 UTC |                     |
	|         | -p download-only-666511        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC | 03 Jul 24 22:47 UTC |
	| delete  | -p download-only-666511        | download-only-666511 | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC | 03 Jul 24 22:47 UTC |
	| start   | -o=json --download-only        | download-only-240360 | jenkins | v1.33.1 | 03 Jul 24 22:47 UTC |                     |
	|         | -p download-only-240360        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 22:47:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 22:47:29.894628   16910 out.go:291] Setting OutFile to fd 1 ...
	I0703 22:47:29.895073   16910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:47:29.895124   16910 out.go:304] Setting ErrFile to fd 2...
	I0703 22:47:29.895141   16910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:47:29.895632   16910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 22:47:29.896638   16910 out.go:298] Setting JSON to true
	I0703 22:47:29.897475   16910 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1790,"bootTime":1720045060,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 22:47:29.897542   16910 start.go:139] virtualization: kvm guest
	I0703 22:47:29.899429   16910 out.go:97] [download-only-240360] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 22:47:29.899573   16910 notify.go:220] Checking for updates...
	I0703 22:47:29.901001   16910 out.go:169] MINIKUBE_LOCATION=18998
	I0703 22:47:29.902299   16910 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 22:47:29.903620   16910 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 22:47:29.904807   16910 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 22:47:29.905974   16910 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0703 22:47:29.908261   16910 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0703 22:47:29.908461   16910 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 22:47:29.940387   16910 out.go:97] Using the kvm2 driver based on user configuration
	I0703 22:47:29.940420   16910 start.go:297] selected driver: kvm2
	I0703 22:47:29.940427   16910 start.go:901] validating driver "kvm2" against <nil>
	I0703 22:47:29.940776   16910 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 22:47:29.940887   16910 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9396/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 22:47:29.956215   16910 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 22:47:29.956297   16910 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0703 22:47:29.957151   16910 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0703 22:47:29.957362   16910 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0703 22:47:29.957444   16910 cni.go:84] Creating CNI manager for ""
	I0703 22:47:29.957464   16910 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0703 22:47:29.957474   16910 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0703 22:47:29.957545   16910 start.go:340] cluster config:
	{Name:download-only-240360 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-240360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 22:47:29.957694   16910 iso.go:125] acquiring lock: {Name:mkffc890db9547cc7a0d480624a5e119b2686d5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 22:47:29.959340   16910 out.go:97] Starting "download-only-240360" primary control-plane node in "download-only-240360" cluster
	I0703 22:47:29.959359   16910 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 22:47:30.078604   16910 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0703 22:47:30.078693   16910 cache.go:56] Caching tarball of preloaded images
	I0703 22:47:30.078898   16910 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0703 22:47:30.080770   16910 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0703 22:47:30.080799   16910 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 ...
	I0703 22:47:30.204859   16910 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:cd14409e225276132db5cf7d5d75c2d2 -> /home/jenkins/minikube-integration/18998-9396/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-240360 host does not exist
	  To start a cluster, run: "minikube start -p download-only-240360"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.06s)
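The download-only log above shows minikube resolving a versioned preload tarball plus an md5 checksum before caching it locally. As a minimal sketch (not part of the test suite), the same artifact can be fetched and verified by hand using the URL and checksum reported by download.go:

	# Fetch the v1.30.2 cri-o preload tarball referenced in the log above.
	curl -LO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	# Check it against the md5 embedded in the download URL (cd14409e225276132db5cf7d5d75c2d2).
	echo "cd14409e225276132db5cf7d5d75c2d2  preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4" | md5sum -c -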

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-240360
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
I0703 22:47:43.596323   16574 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-921043 --alsologtostderr --binary-mirror http://127.0.0.1:39145 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-921043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-921043
--- PASS: TestBinaryMirror (0.56s)

                                                
                                    
x
+
TestOffline (122.26s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-517894 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-517894 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m1.47453808s)
helpers_test.go:175: Cleaning up "offline-crio-517894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-517894
--- PASS: TestOffline (122.26s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-224553
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-224553: exit status 85 (47.422319ms)

                                                
                                                
-- stdout --
	* Profile "addons-224553" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-224553"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-224553
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-224553: exit status 85 (47.10029ms)

                                                
                                                
-- stdout --
	* Profile "addons-224553" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-224553"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (212.84s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-224553 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-224553 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m32.840110982s)
--- PASS: TestAddons/Setup (212.84s)

                                                
                                    
x
+
TestAddons/parallel/Registry (21s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 14.973485ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-p9skr" [d68fdfd4-7879-4930-8113-149c5c04b06a] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.008392564s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-zj8bk" [2cccffc8-167d-483e-81c9-bcb8a862200f] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004529222s
addons_test.go:342: (dbg) Run:  kubectl --context addons-224553 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-224553 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-224553 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.193768077s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-224553 ip
2024/07/03 22:51:37 [DEBUG] GET http://192.168.39.226:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-224553 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (21.00s)
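The registry test exercises the addon from two directions: in-cluster via the registry.kube-system.svc.cluster.local service, and from the host via the node IP on port 5000. A hedged host-side equivalent, assuming registry-proxy keeps exposing the standard Docker Registry v2 HTTP API on that port:

	# Resolve the node IP, then list repositories through the registry's v2 catalog endpoint.
	REGISTRY_IP=$(out/minikube-linux-amd64 -p addons-224553 ip)
	curl -s "http://${REGISTRY_IP}:5000/v2/_catalog"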

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.04s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dkgbn" [1380eb8f-4d35-4fa5-9735-3d88744cf719] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005155352s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-224553
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-224553: (6.038883283s)
--- PASS: TestAddons/parallel/InspektorGadget (12.04s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (12s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.456589ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-4g4h4" [2a14a1e3-ef96-40b2-b4ba-2790881ec44c] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.007348038s
addons_test.go:475: (dbg) Run:  kubectl --context addons-224553 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-224553 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.12808574s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-224553 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.00s)

                                                
                                    
x
+
TestAddons/parallel/CSI (61.76s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0703 22:52:07.669917   16574 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0703 22:52:07.675437   16574 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0703 22:52:07.675462   16574 kapi.go:107] duration metric: took 5.551764ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:563: csi-hostpath-driver pods stabilized in 5.56036ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-224553 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-224553 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a12564f7-e50e-47d9-974c-62da8ed491f6] Pending
helpers_test.go:344: "task-pv-pod" [a12564f7-e50e-47d9-974c-62da8ed491f6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a12564f7-e50e-47d9-974c-62da8ed491f6] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004849836s
addons_test.go:586: (dbg) Run:  kubectl --context addons-224553 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-224553 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-224553 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-224553 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-224553 delete pod task-pv-pod: (1.204461218s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-224553 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-224553 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-224553 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [086043c7-1424-4cb1-a7f1-56ca7483a163] Pending
helpers_test.go:344: "task-pv-pod-restore" [086043c7-1424-4cb1-a7f1-56ca7483a163] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [086043c7-1424-4cb1-a7f1-56ca7483a163] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004771083s
addons_test.go:628: (dbg) Run:  kubectl --context addons-224553 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-224553 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-224553 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-224553 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-224553 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.875921669s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-224553 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (61.76s)
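The repeated get pvc / get volumesnapshot polls above can be collapsed into single kubectl wait calls once the objects exist; a minimal sketch (not how helpers_test.go does it, and jsonpath-based waits require a reasonably recent kubectl):

	# One-shot waits for the same conditions the test polls for.
	kubectl --context addons-224553 wait pvc/hpvc --for=jsonpath='{.status.phase}'=Bound --timeout=6m
	kubectl --context addons-224553 wait volumesnapshot/new-snapshot-demo --for=jsonpath='{.status.readyToUse}'=true --timeout=6m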

                                                
                                    
x
+
TestAddons/parallel/Headlamp (13.97s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-224553 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-jgcbc" [1b695b04-d5ab-420b-8a5e-b5b4d5061b10] Pending
helpers_test.go:344: "headlamp-7867546754-jgcbc" [1b695b04-d5ab-420b-8a5e-b5b4d5061b10] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-jgcbc" [1b695b04-d5ab-420b-8a5e-b5b4d5061b10] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004799952s
--- PASS: TestAddons/parallel/Headlamp (13.97s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.6s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-djmhq" [4144b3c0-6d10-42e8-b32a-79baa4f23f95] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004256991s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-224553
--- PASS: TestAddons/parallel/CloudSpanner (6.60s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (55.51s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-224553 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-224553 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-224553 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [698c9906-4c9f-4487-83cf-ef76853b93cd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [698c9906-4c9f-4487-83cf-ef76853b93cd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [698c9906-4c9f-4487-83cf-ef76853b93cd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.00523395s
addons_test.go:992: (dbg) Run:  kubectl --context addons-224553 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-224553 ssh "cat /opt/local-path-provisioner/pvc-3109b72f-6268-4949-88ee-62863ae03b8a_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-224553 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-224553 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-224553 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-amd64 -p addons-224553 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.636471561s)
--- PASS: TestAddons/parallel/LocalPath (55.51s)
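The ssh step above works because the local-path provisioner backs each bound claim with a directory under /opt/local-path-provisioner on the node, named pvc-<uid>_<namespace>_<claim> as the logged path shows. A small sketch for locating that directory by claim name (the exact layout is an assumption about the provisioner's defaults):

	# List provisioned volumes on the node, then the directory backing "test-pvc".
	out/minikube-linux-amd64 -p addons-224553 ssh "ls /opt/local-path-provisioner"
	out/minikube-linux-amd64 -p addons-224553 ssh "ls /opt/local-path-provisioner/*_default_test-pvc"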

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.58s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-sbhcl" [71040d78-0cef-4e87-863c-271f1ea0dc3f] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005151101s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-224553
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.58s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-fwg4s" [d1102c91-2165-4a2c-adbf-945b4db26c0e] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003761849s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-224553 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-224553 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestCertOptions (74.89s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-768841 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-768841 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m13.37654852s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-768841 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-768841 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-768841 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-768841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-768841
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-768841: (1.045204859s)
--- PASS: TestCertOptions (74.89s)
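TestCertOptions asserts that the extra --apiserver-ips and --apiserver-names end up in the apiserver serving certificate. A quick manual check of the same thing, filtering the openssl -text output down to the SAN block (a sketch, not part of the test):

	# Print only the Subject Alternative Name entries of the apiserver certificate.
	out/minikube-linux-amd64 -p cert-options-768841 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"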

                                                
                                    
x
+
TestCertExpiration (244.37s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-979438 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0703 23:58:40.413874   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-979438 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (43.703814631s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-979438 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-979438 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (19.626391683s)
helpers_test.go:175: Cleaning up "cert-expiration-979438" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-979438
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-979438: (1.037800242s)
--- PASS: TestCertExpiration (244.37s)
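TestCertExpiration first issues certificates that expire after 3 minutes and then restarts the cluster with --cert-expiration=8760h (one year). The new validity window can be confirmed from the certificate dates before the profile is deleted (a sketch, not part of the test):

	# Show notBefore/notAfter for the apiserver certificate after the second start.
	out/minikube-linux-amd64 -p cert-expiration-979438 ssh "openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"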

                                                
                                    
x
+
TestForceSystemdFlag (56.41s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-163167 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-163167 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (54.385894348s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-163167 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-163167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-163167
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-163167: (1.817281256s)
--- PASS: TestForceSystemdFlag (56.41s)

                                                
                                    
x
+
TestForceSystemdEnv (50.67s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-175902 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-175902 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (49.424861569s)
helpers_test.go:175: Cleaning up "force-systemd-env-175902" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-175902
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-175902: (1.2419371s)
--- PASS: TestForceSystemdEnv (50.67s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.9s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0703 23:58:45.165578   16574 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0703 23:58:45.165718   16574 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0703 23:58:45.196901   16574 install.go:62] docker-machine-driver-kvm2: exit status 1
W0703 23:58:45.197292   16574 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0703 23:58:45.197362   16574 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3431246110/001/docker-machine-driver-kvm2
I0703 23:58:45.427962   16574 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3431246110/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x437dfc0 0x437dfc0 0x437dfc0 0x437dfc0 0x437dfc0 0x437dfc0 0x437dfc0] Decompressors:map[bz2:0xc0004cf300 gz:0xc0004cf308 tar:0xc0004cf2b0 tar.bz2:0xc0004cf2c0 tar.gz:0xc0004cf2d0 tar.xz:0xc0004cf2e0 tar.zst:0xc0004cf2f0 tbz2:0xc0004cf2c0 tgz:0xc0004cf2d0 txz:0xc0004cf2e0 tzst:0xc0004cf2f0 xz:0xc0004cf310 zip:0xc0004cf320 zst:0xc0004cf318] Getters:map[file:0xc0016b98b0 http:0xc0000531d0 https:0xc000053220] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0703 23:58:45.428023   16574 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3431246110/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.90s)

                                                
                                    
x
+
TestErrorSpam/setup (44.83s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-202169 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-202169 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-202169 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-202169 --driver=kvm2  --container-runtime=crio: (44.825121709s)
--- PASS: TestErrorSpam/setup (44.83s)

                                                
                                    
x
+
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-202169 --log_dir /tmp/nospam-202169 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-202169 --log_dir /tmp/nospam-202169 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-202169 --log_dir /tmp/nospam-202169 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
x
+
TestErrorSpam/status (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-202169 --log_dir /tmp/nospam-202169 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-202169 --log_dir /tmp/nospam-202169 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-202169 --log_dir /tmp/nospam-202169 status
--- PASS: TestErrorSpam/status (0.76s)

                                                
                                    
x
+
TestErrorSpam/pause (1.62s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-202169 --log_dir /tmp/nospam-202169 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-202169 --log_dir /tmp/nospam-202169 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-202169 --log_dir /tmp/nospam-202169 pause
--- PASS: TestErrorSpam/pause (1.62s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.67s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-202169 --log_dir /tmp/nospam-202169 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-202169 --log_dir /tmp/nospam-202169 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-202169 --log_dir /tmp/nospam-202169 unpause
--- PASS: TestErrorSpam/unpause (1.67s)

                                                
                                    
x
+
TestErrorSpam/stop (4.6s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-202169 --log_dir /tmp/nospam-202169 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-202169 --log_dir /tmp/nospam-202169 stop: (2.322136549s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-202169 --log_dir /tmp/nospam-202169 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-202169 --log_dir /tmp/nospam-202169 stop: (1.346775519s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-202169 --log_dir /tmp/nospam-202169 stop
--- PASS: TestErrorSpam/stop (4.60s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18998-9396/.minikube/files/etc/test/nested/copy/16574/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (93.57s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-188799 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0703 23:01:17.047347   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0703 23:01:17.052651   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0703 23:01:17.062945   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0703 23:01:17.083328   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0703 23:01:17.123627   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0703 23:01:17.203949   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0703 23:01:17.364427   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0703 23:01:17.685127   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0703 23:01:18.326103   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0703 23:01:19.606589   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0703 23:01:22.167826   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0703 23:01:27.288137   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0703 23:01:37.528585   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0703 23:01:58.009124   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-188799 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m33.573094182s)
--- PASS: TestFunctional/serial/StartWithProxy (93.57s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (41.23s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0703 23:02:26.854118   16574 config.go:182] Loaded profile config "functional-188799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-188799 --alsologtostderr -v=8
E0703 23:02:38.970815   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-188799 --alsologtostderr -v=8: (41.224366259s)
functional_test.go:659: soft start took 41.225036102s for "functional-188799" cluster.
I0703 23:03:08.078741   16574 config.go:182] Loaded profile config "functional-188799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
--- PASS: TestFunctional/serial/SoftStart (41.23s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-188799 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-188799 cache add registry.k8s.io/pause:3.3: (1.128576586s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-188799 cache add registry.k8s.io/pause:latest: (1.124257106s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-188799 /tmp/TestFunctionalserialCacheCmdcacheadd_local3244693961/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 cache add minikube-local-cache-test:functional-188799
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-188799 cache add minikube-local-cache-test:functional-188799: (1.999067383s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 cache delete minikube-local-cache-test:functional-188799
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-188799
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.31s)
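Images added with cache add are expected to land in CRI-O's image store inside the node, which is what the verify_cache_inside_node and cache_reload steps below rely on. A one-line spot check for the locally built test image (a sketch):

	# Confirm the cached local image is visible to CRI-O on the node.
	out/minikube-linux-amd64 -p functional-188799 ssh "sudo crictl images | grep minikube-local-cache-test"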

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-188799 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (213.162333ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)
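For reference, a minimal Go sketch (not part of the test suite) of the same cache-reload flow the log above exercises, driving the minikube CLI with os/exec. It assumes `minikube` is on PATH and reuses the profile name from the log; everything else mirrors the logged commands.

package main

import (
	"fmt"
	"os/exec"
)

// run executes minikube with the given arguments and returns its combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "functional-188799"
	// Remove the cached image from the node's runtime.
	run("-p", profile, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	// inspecti should now fail: the image is gone from the node.
	if _, err := run("-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image absent, as expected:", err)
	}
	// Reload everything in minikube's local cache back onto the node ...
	run("-p", profile, "cache", "reload")
	// ... after which inspecti succeeds again.
	if out, err := run("-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Printf("image restored (%d bytes of metadata)\n", len(out))
	}
}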

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 kubectl -- --context functional-188799 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-188799 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.51s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-188799 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-188799 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.511063538s)
functional_test.go:757: restart took 33.511179017s for "functional-188799" cluster.
I0703 23:03:49.467166   16574 config.go:182] Loaded profile config "functional-188799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
--- PASS: TestFunctional/serial/ExtraConfig (33.51s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-188799 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-188799 logs: (1.437725865s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 logs --file /tmp/TestFunctionalserialLogsFileCmd1235174262/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-188799 logs --file /tmp/TestFunctionalserialLogsFileCmd1235174262/001/logs.txt: (1.479154808s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                    
TestFunctional/serial/InvalidService (4.41s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-188799 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-188799
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-188799: exit status 115 (281.62944ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.208:32242 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-188799 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.41s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-188799 config get cpus: exit status 14 (53.259922ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-188799 config get cpus: exit status 14 (47.640146ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)
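A hedged sketch of the config round-trip exercised above: `config get` on an unset key exits with status 14, which surfaces in Go as an *exec.ExitError. `minikube` on PATH is assumed; the profile name comes from the log.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// config runs `minikube -p <profile> config <args>` and returns output plus the exit code.
func config(profile string, args ...string) (string, int) {
	cmd := exec.Command("minikube", append([]string{"-p", profile, "config"}, args...)...)
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return string(out), ee.ExitCode()
	}
	return string(out), 0
}

func main() {
	profile := "functional-188799"
	config(profile, "unset", "cpus")
	if _, code := config(profile, "get", "cpus"); code == 14 {
		fmt.Println("cpus is unset (exit status 14), as in the test")
	}
	config(profile, "set", "cpus", "2")
	out, _ := config(profile, "get", "cpus")
	fmt.Printf("cpus = %s", out)
	config(profile, "unset", "cpus") // leave the profile as we found it
}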

                                                
                                    
TestFunctional/parallel/DashboardCmd (35.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-188799 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-188799 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 26293: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (35.35s)

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-188799 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-188799 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (138.760687ms)

                                                
                                                
-- stdout --
	* [functional-188799] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18998
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 23:04:10.681190   25850 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:04:10.681313   25850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:04:10.681324   25850 out.go:304] Setting ErrFile to fd 2...
	I0703 23:04:10.681331   25850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:04:10.681598   25850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 23:04:10.682174   25850 out.go:298] Setting JSON to false
	I0703 23:04:10.683132   25850 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2791,"bootTime":1720045060,"procs":252,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:04:10.683192   25850 start.go:139] virtualization: kvm guest
	I0703 23:04:10.685433   25850 out.go:177] * [functional-188799] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 23:04:10.686849   25850 notify.go:220] Checking for updates...
	I0703 23:04:10.686876   25850 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 23:04:10.688345   25850 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:04:10.689850   25850 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:04:10.691071   25850 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:04:10.692439   25850 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 23:04:10.693700   25850 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 23:04:10.695312   25850 config.go:182] Loaded profile config "functional-188799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:04:10.695731   25850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:04:10.695779   25850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:04:10.711532   25850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I0703 23:04:10.711980   25850 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:04:10.712611   25850 main.go:141] libmachine: Using API Version  1
	I0703 23:04:10.712672   25850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:04:10.712987   25850 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:04:10.713184   25850 main.go:141] libmachine: (functional-188799) Calling .DriverName
	I0703 23:04:10.713436   25850 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:04:10.713726   25850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:04:10.713760   25850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:04:10.730980   25850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37615
	I0703 23:04:10.731400   25850 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:04:10.731969   25850 main.go:141] libmachine: Using API Version  1
	I0703 23:04:10.732002   25850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:04:10.732403   25850 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:04:10.732585   25850 main.go:141] libmachine: (functional-188799) Calling .DriverName
	I0703 23:04:10.770549   25850 out.go:177] * Using the kvm2 driver based on existing profile
	I0703 23:04:10.772197   25850 start.go:297] selected driver: kvm2
	I0703 23:04:10.772216   25850 start.go:901] validating driver "kvm2" against &{Name:functional-188799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:functional-188799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:04:10.772356   25850 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 23:04:10.774847   25850 out.go:177] 
	W0703 23:04:10.776352   25850 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0703 23:04:10.777558   25850 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-188799 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-188799 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-188799 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (161.527492ms)

                                                
                                                
-- stdout --
	* [functional-188799] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18998
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 23:04:10.531586   25783 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:04:10.531764   25783 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:04:10.531772   25783 out.go:304] Setting ErrFile to fd 2...
	I0703 23:04:10.531778   25783 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:04:10.532223   25783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 23:04:10.532892   25783 out.go:298] Setting JSON to false
	I0703 23:04:10.534167   25783 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2790,"bootTime":1720045060,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:04:10.534246   25783 start.go:139] virtualization: kvm guest
	I0703 23:04:10.541282   25783 out.go:177] * [functional-188799] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0703 23:04:10.542696   25783 notify.go:220] Checking for updates...
	I0703 23:04:10.545717   25783 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 23:04:10.548037   25783 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:04:10.549549   25783 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:04:10.551030   25783 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:04:10.552426   25783 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 23:04:10.553693   25783 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 23:04:10.555429   25783 config.go:182] Loaded profile config "functional-188799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:04:10.556145   25783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:04:10.556316   25783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:04:10.579251   25783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33649
	I0703 23:04:10.579691   25783 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:04:10.580303   25783 main.go:141] libmachine: Using API Version  1
	I0703 23:04:10.580327   25783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:04:10.580642   25783 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:04:10.580834   25783 main.go:141] libmachine: (functional-188799) Calling .DriverName
	I0703 23:04:10.581081   25783 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:04:10.581490   25783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:04:10.581524   25783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:04:10.596677   25783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46111
	I0703 23:04:10.597058   25783 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:04:10.597440   25783 main.go:141] libmachine: Using API Version  1
	I0703 23:04:10.597452   25783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:04:10.597769   25783 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:04:10.597921   25783 main.go:141] libmachine: (functional-188799) Calling .DriverName
	I0703 23:04:10.632256   25783 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0703 23:04:10.633654   25783 start.go:297] selected driver: kvm2
	I0703 23:04:10.633667   25783 start.go:901] validating driver "kvm2" against &{Name:functional-188799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:functional-188799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:04:10.633781   25783 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 23:04:10.636069   25783 out.go:177] 
	W0703 23:04:10.637353   25783 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0703 23:04:10.638541   25783 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-188799 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-188799 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-zghm4" [b92e25ae-5405-4f97-ad23-10387dab32f4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-zghm4" [b92e25ae-5405-4f97-ad23-10387dab32f4] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004210063s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.208:30423
functional_test.go:1671: http://192.168.39.208:30423: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-zghm4

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.208:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.208:30423
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.60s)
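A rough, self-contained sketch of the NodePort round-trip performed above: deploy the echoserver image, expose it as a NodePort service, resolve the reachable URL with `minikube service --url`, and GET it. It assumes kubectl and minikube on PATH and the context/profile name from the log; the pod-readiness wait shown in the log is omitted for brevity.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	ctx := "functional-188799"
	// Create the deployment and expose it as a NodePort service, as in the log.
	exec.Command("kubectl", "--context", ctx, "create", "deployment", "hello-node-connect",
		"--image=registry.k8s.io/echoserver:1.8").Run()
	exec.Command("kubectl", "--context", ctx, "expose", "deployment", "hello-node-connect",
		"--type=NodePort", "--port=8080").Run()

	// Ask minikube for the node URL of the service.
	out, err := exec.Command("minikube", "-p", ctx, "service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s\n%s", url, resp.Status, body)
}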

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (50.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [00f2451c-7736-488d-85ad-6f31f14e8d06] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004206585s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-188799 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-188799 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-188799 get pvc myclaim -o=json
I0703 23:04:03.589056   16574 retry.go:31] will retry after 1.904644046s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:bbad17a0-48f3-41a0-9234-55d213421e76 ResourceVersion:730 Generation:0 CreationTimestamp:2024-07-03 23:04:03 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-bbad17a0-48f3-41a0-9234-55d213421e76 StorageClassName:0xc001678fd0 VolumeMode:0xc001678fe0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-188799 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-188799 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ff46de7f-7ec0-4c62-b78a-3b5110cb63ef] Pending
helpers_test.go:344: "sp-pod" [ff46de7f-7ec0-4c62-b78a-3b5110cb63ef] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ff46de7f-7ec0-4c62-b78a-3b5110cb63ef] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.003720342s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-188799 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-188799 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-188799 delete -f testdata/storage-provisioner/pod.yaml: (1.85536759s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-188799 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [12ba71b8-6953-47b6-a437-80822dd21ca5] Pending
helpers_test.go:344: "sp-pod" [12ba71b8-6953-47b6-a437-80822dd21ca5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [12ba71b8-6953-47b6-a437-80822dd21ca5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.008136246s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-188799 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.62s)
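A minimal sketch of the "wait for the claim to bind" step logged above (the retry.go message: phase = "Pending", want "Bound"): poll the PVC phase via kubectl's jsonpath output until it reports Bound. kubectl on PATH and the context name are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx := "functional-188799"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", "myclaim",
			"-o", "jsonpath={.status.phase}").Output()
		phase := strings.TrimSpace(string(out))
		if err == nil && phase == "Bound" {
			fmt.Println("pvc myclaim is Bound")
			return
		}
		fmt.Printf("pvc phase = %q, want Bound; retrying\n", phase)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc to bind")
}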

                                                
                                    
TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh -n functional-188799 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 cp functional-188799:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3182289983/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh -n functional-188799 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh -n functional-188799 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.34s)

                                                
                                    
TestFunctional/parallel/MySQL (24.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-188799 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-k7nfw" [2ee8e62e-6ddf-41c7-ae42-ac146b15d69d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-k7nfw" [2ee8e62e-6ddf-41c7-ae42-ac146b15d69d] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.004733789s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-188799 exec mysql-64454c8b5c-k7nfw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-188799 exec mysql-64454c8b5c-k7nfw -- mysql -ppassword -e "show databases;": exit status 1 (337.106468ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0703 23:04:36.677526   16574 retry.go:31] will retry after 569.804181ms: exit status 1
functional_test.go:1803: (dbg) Run:  kubectl --context functional-188799 exec mysql-64454c8b5c-k7nfw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.67s)

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16574/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "sudo cat /etc/test/nested/copy/16574/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16574.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "sudo cat /etc/ssl/certs/16574.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16574.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "sudo cat /usr/share/ca-certificates/16574.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/165742.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "sudo cat /etc/ssl/certs/165742.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/165742.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "sudo cat /usr/share/ca-certificates/165742.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.81s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-188799 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-188799 ssh "sudo systemctl is-active docker": exit status 1 (223.051955ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-188799 ssh "sudo systemctl is-active containerd": exit status 1 (213.027205ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
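A quick sketch mirroring the check above: with crio as the selected container runtime, `systemctl is-active docker` and `systemctl is-active containerd` inside the node both print "inactive" and exit non-zero (status 3, the standard systemd code for an inactive unit). `minikube` on PATH and the profile name are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-188799"
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("minikube", "-p", profile, "ssh",
			"sudo systemctl is-active "+unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		// err is non-nil here because is-active exits 3 for inactive units.
		fmt.Printf("%s: %s (err=%v)\n", unit, state, err)
	}
}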

                                                
                                    
TestFunctional/parallel/License (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.68s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-188799 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-188799 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-47srk" [7273382f-f5f4-475e-bd14-c2822b8bb748] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-47srk" [7273382f-f5f4-475e-bd14-c2822b8bb748] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.00667883s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.20s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "388.518741ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "47.009245ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "374.481056ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "51.091991ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-188799 /tmp/TestFunctionalparallelMountCmdany-port4068152680/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1720047839397914299" to /tmp/TestFunctionalparallelMountCmdany-port4068152680/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1720047839397914299" to /tmp/TestFunctionalparallelMountCmdany-port4068152680/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1720047839397914299" to /tmp/TestFunctionalparallelMountCmdany-port4068152680/001/test-1720047839397914299
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-188799 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (240.141022ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0703 23:03:59.638344   16574 retry.go:31] will retry after 548.377547ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul  3 23:03 created-by-test
-rw-r--r-- 1 docker docker 24 Jul  3 23:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul  3 23:03 test-1720047839397914299
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh cat /mount-9p/test-1720047839397914299
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-188799 replace --force -f testdata/busybox-mount-test.yaml
E0703 23:04:00.891220   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e54a1a14-184b-44b6-8f22-034e1ec08e83] Pending
helpers_test.go:344: "busybox-mount" [e54a1a14-184b-44b6-8f22-034e1ec08e83] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e54a1a14-184b-44b6-8f22-034e1ec08e83] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e54a1a14-184b-44b6-8f22-034e1ec08e83] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004522823s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-188799 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-188799 /tmp/TestFunctionalparallelMountCmdany-port4068152680/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.53s)
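A hedged sketch of the 9p mount flow above: start `minikube mount` in the background, then poll findmnt inside the node until the mount shows up, just as the test's retry helper does. The host directory /tmp/demo-mount and the profile name are assumptions; `minikube` on PATH is required.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-188799"
	mount := exec.Command("minikube", "mount", "-p", profile, "/tmp/demo-mount:/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	// Stop the background mount process when we are done.
	defer mount.Process.Kill()

	// findmnt exits non-zero until the 9p mount is established, so retry briefly.
	for i := 0; i < 20; i++ {
		out, err := exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mounted:\n%s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount never appeared")
}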

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-188799 /tmp/TestFunctionalparallelMountCmdspecific-port2036825723/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-188799 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (196.998938ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0703 23:04:08.120153   16574 retry.go:31] will retry after 426.777317ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-188799 /tmp/TestFunctionalparallelMountCmdspecific-port2036825723/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-188799 ssh "sudo umount -f /mount-9p": exit status 1 (273.090287ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-188799 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-188799 /tmp/TestFunctionalparallelMountCmdspecific-port2036825723/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.81s)
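
The forced unmount above fails only because the mount daemon's teardown had already removed the mount (umount: /mount-9p: not mounted, remote status 32). Below is a small illustrative Go sketch, assuming the same binary and profile, that treats that "not mounted" case as already clean instead of as an error; it is not the test's helper.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// cleanupMount force-unmounts a guest path via `minikube ssh`, accepting the
// "not mounted" output (seen above) as success.
func cleanupMount(binary, profile, mountPoint string) error {
    cmd := exec.Command(binary, "-p", profile, "ssh", "sudo umount -f "+mountPoint)
    out, err := cmd.CombinedOutput()
    if err == nil || strings.Contains(string(out), "not mounted") {
        return nil
    }
    return fmt.Errorf("umount %s failed: %v\n%s", mountPoint, err, out)
}

func main() {
    if err := cleanupMount("out/minikube-linux-amd64", "functional-188799", "/mount-9p"); err != nil {
        fmt.Println(err)
    }
}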

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 service list -o json
functional_test.go:1490: Took "390.144795ms" to run "out/minikube-linux-amd64 -p functional-188799 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)
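
service list -o json produces machine-readable output. The sketch below, assuming the binary path and profile from this log, simply captures that output and re-indents it without assuming a particular schema; it is illustrative, not part of the test.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "os/exec"
)

func main() {
    // Run the same command the test exercises and re-indent whatever JSON comes back.
    out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-188799",
        "service", "list", "-o", "json").Output()
    if err != nil {
        fmt.Println("service list failed:", err)
        return
    }
    var pretty bytes.Buffer
    if err := json.Indent(&pretty, out, "", "  "); err != nil {
        fmt.Println("output was not valid JSON:", err)
        return
    }
    fmt.Println(pretty.String())
}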

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.208:32100
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)
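
The HTTPS check resolves a NodePort endpoint (https://192.168.39.208:32100 in this run). A small illustrative Go check of such an endpoint string using only the standard library follows; the endpoint value is the one reported above.

package main

import (
    "fmt"
    "net/url"
)

func main() {
    // Value reported by `minikube service --https --url hello-node` in this run.
    endpoint := "https://192.168.39.208:32100"
    u, err := url.Parse(endpoint)
    if err != nil {
        fmt.Println("unparsable endpoint:", err)
        return
    }
    if u.Scheme != "https" || u.Port() == "" {
        fmt.Println("expected an https URL with an explicit NodePort, got:", endpoint)
        return
    }
    fmt.Println("host:", u.Hostname(), "port:", u.Port())
}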

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-188799 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3465084243/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-188799 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3465084243/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-188799 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3465084243/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-188799 ssh "findmnt -T" /mount1: exit status 1 (356.921516ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0703 23:04:10.090290   16574 retry.go:31] will retry after 353.75622ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-188799 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-188799 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3465084243/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-188799 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3465084243/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-188799 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3465084243/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)
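
VerifyCleanup starts three mount daemons and then relies on a single mount -p functional-188799 --kill=true to tear them all down, which is why the later per-daemon stop attempts find no parent process. A rough Go sketch of that flow follows, assuming the same binary and profile and a temporary host directory; it is illustrative rather than the harness helper.

package main

import (
    "fmt"
    "os"
    "os/exec"
)

func main() {
    binary, profile := "out/minikube-linux-amd64", "functional-188799"
    hostDir, err := os.MkdirTemp("", "mounts")
    if err != nil {
        fmt.Println(err)
        return
    }
    defer os.RemoveAll(hostDir)

    // Start three background mount daemons, as the test does.
    var daemons []*exec.Cmd
    for _, guest := range []string{"/mount1", "/mount2", "/mount3"} {
        cmd := exec.Command(binary, "mount", "-p", profile, hostDir+":"+guest)
        if err := cmd.Start(); err != nil {
            fmt.Println("start failed:", err)
            return
        }
        daemons = append(daemons, cmd)
    }

    // One kill command cleans up every mount process for the profile.
    if out, err := exec.Command(binary, "mount", "-p", profile, "--kill=true").CombinedOutput(); err != nil {
        fmt.Printf("kill failed: %v\n%s", err, out)
    }
    for _, d := range daemons {
        d.Wait() // reap the now-terminated daemons
    }
}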

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.208:32100
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-188799 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-188799
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-188799
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240513-cd2ac642
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-188799 image ls --format short --alsologtostderr:
I0703 23:04:38.190178   26908 out.go:291] Setting OutFile to fd 1 ...
I0703 23:04:38.190325   26908 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:04:38.190336   26908 out.go:304] Setting ErrFile to fd 2...
I0703 23:04:38.190342   26908 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:04:38.190611   26908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
I0703 23:04:38.191401   26908 config.go:182] Loaded profile config "functional-188799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0703 23:04:38.191558   26908 config.go:182] Loaded profile config "functional-188799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0703 23:04:38.192095   26908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0703 23:04:38.192159   26908 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 23:04:38.206880   26908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36973
I0703 23:04:38.207369   26908 main.go:141] libmachine: () Calling .GetVersion
I0703 23:04:38.207978   26908 main.go:141] libmachine: Using API Version  1
I0703 23:04:38.207997   26908 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 23:04:38.208289   26908 main.go:141] libmachine: () Calling .GetMachineName
I0703 23:04:38.208474   26908 main.go:141] libmachine: (functional-188799) Calling .GetState
I0703 23:04:38.210370   26908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0703 23:04:38.210414   26908 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 23:04:38.225260   26908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32783
I0703 23:04:38.225749   26908 main.go:141] libmachine: () Calling .GetVersion
I0703 23:04:38.226307   26908 main.go:141] libmachine: Using API Version  1
I0703 23:04:38.226335   26908 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 23:04:38.226668   26908 main.go:141] libmachine: () Calling .GetMachineName
I0703 23:04:38.226890   26908 main.go:141] libmachine: (functional-188799) Calling .DriverName
I0703 23:04:38.227128   26908 ssh_runner.go:195] Run: systemctl --version
I0703 23:04:38.227158   26908 main.go:141] libmachine: (functional-188799) Calling .GetSSHHostname
I0703 23:04:38.230458   26908 main.go:141] libmachine: (functional-188799) DBG | domain functional-188799 has defined MAC address 52:54:00:48:e2:6e in network mk-functional-188799
I0703 23:04:38.230917   26908 main.go:141] libmachine: (functional-188799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:e2:6e", ip: ""} in network mk-functional-188799: {Iface:virbr1 ExpiryTime:2024-07-04 00:01:07 +0000 UTC Type:0 Mac:52:54:00:48:e2:6e Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:functional-188799 Clientid:01:52:54:00:48:e2:6e}
I0703 23:04:38.230961   26908 main.go:141] libmachine: (functional-188799) DBG | domain functional-188799 has defined IP address 192.168.39.208 and MAC address 52:54:00:48:e2:6e in network mk-functional-188799
I0703 23:04:38.231225   26908 main.go:141] libmachine: (functional-188799) Calling .GetSSHPort
I0703 23:04:38.231428   26908 main.go:141] libmachine: (functional-188799) Calling .GetSSHKeyPath
I0703 23:04:38.231586   26908 main.go:141] libmachine: (functional-188799) Calling .GetSSHUsername
I0703 23:04:38.231716   26908 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/functional-188799/id_rsa Username:docker}
I0703 23:04:38.376248   26908 ssh_runner.go:195] Run: sudo crictl images --output json
I0703 23:04:38.484233   26908 main.go:141] libmachine: Making call to close driver server
I0703 23:04:38.484249   26908 main.go:141] libmachine: (functional-188799) Calling .Close
I0703 23:04:38.484533   26908 main.go:141] libmachine: (functional-188799) DBG | Closing plugin on server side
I0703 23:04:38.484563   26908 main.go:141] libmachine: Successfully made call to close driver server
I0703 23:04:38.484588   26908 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 23:04:38.484603   26908 main.go:141] libmachine: Making call to close driver server
I0703 23:04:38.484613   26908 main.go:141] libmachine: (functional-188799) Calling .Close
I0703 23:04:38.484846   26908 main.go:141] libmachine: Successfully made call to close driver server
I0703 23:04:38.484861   26908 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-188799 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | fffffc90d343c | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.30.2            | 7820c83aa1394 | 63.1MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/kube-apiserver          | v1.30.2            | 56ce0fd9fb532 | 118MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240513-cd2ac642 | ac1c61439df46 | 65.9MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/google-containers/addon-resizer  | functional-188799  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-proxy              | v1.30.2            | 53c535741fb44 | 86MB   |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| localhost/minikube-local-cache-test     | functional-188799  | 8a1a86f467198 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-controller-manager | v1.30.2            | e874818b3caac | 112MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-188799 image ls --format table --alsologtostderr:
I0703 23:04:39.091727   27032 out.go:291] Setting OutFile to fd 1 ...
I0703 23:04:39.091900   27032 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:04:39.091911   27032 out.go:304] Setting ErrFile to fd 2...
I0703 23:04:39.091918   27032 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:04:39.092183   27032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
I0703 23:04:39.092987   27032 config.go:182] Loaded profile config "functional-188799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0703 23:04:39.093138   27032 config.go:182] Loaded profile config "functional-188799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0703 23:04:39.093709   27032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0703 23:04:39.093777   27032 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 23:04:39.110333   27032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34551
I0703 23:04:39.110808   27032 main.go:141] libmachine: () Calling .GetVersion
I0703 23:04:39.111402   27032 main.go:141] libmachine: Using API Version  1
I0703 23:04:39.111445   27032 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 23:04:39.111832   27032 main.go:141] libmachine: () Calling .GetMachineName
I0703 23:04:39.112063   27032 main.go:141] libmachine: (functional-188799) Calling .GetState
I0703 23:04:39.113987   27032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0703 23:04:39.114024   27032 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 23:04:39.129044   27032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44659
I0703 23:04:39.129486   27032 main.go:141] libmachine: () Calling .GetVersion
I0703 23:04:39.129953   27032 main.go:141] libmachine: Using API Version  1
I0703 23:04:39.129975   27032 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 23:04:39.130384   27032 main.go:141] libmachine: () Calling .GetMachineName
I0703 23:04:39.130607   27032 main.go:141] libmachine: (functional-188799) Calling .DriverName
I0703 23:04:39.130813   27032 ssh_runner.go:195] Run: systemctl --version
I0703 23:04:39.130836   27032 main.go:141] libmachine: (functional-188799) Calling .GetSSHHostname
I0703 23:04:39.134136   27032 main.go:141] libmachine: (functional-188799) DBG | domain functional-188799 has defined MAC address 52:54:00:48:e2:6e in network mk-functional-188799
I0703 23:04:39.134532   27032 main.go:141] libmachine: (functional-188799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:e2:6e", ip: ""} in network mk-functional-188799: {Iface:virbr1 ExpiryTime:2024-07-04 00:01:07 +0000 UTC Type:0 Mac:52:54:00:48:e2:6e Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:functional-188799 Clientid:01:52:54:00:48:e2:6e}
I0703 23:04:39.134573   27032 main.go:141] libmachine: (functional-188799) DBG | domain functional-188799 has defined IP address 192.168.39.208 and MAC address 52:54:00:48:e2:6e in network mk-functional-188799
I0703 23:04:39.134790   27032 main.go:141] libmachine: (functional-188799) Calling .GetSSHPort
I0703 23:04:39.134975   27032 main.go:141] libmachine: (functional-188799) Calling .GetSSHKeyPath
I0703 23:04:39.135139   27032 main.go:141] libmachine: (functional-188799) Calling .GetSSHUsername
I0703 23:04:39.135268   27032 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/functional-188799/id_rsa Username:docker}
I0703 23:04:39.275956   27032 ssh_runner.go:195] Run: sudo crictl images --output json
I0703 23:04:39.397573   27032 main.go:141] libmachine: Making call to close driver server
I0703 23:04:39.397595   27032 main.go:141] libmachine: (functional-188799) Calling .Close
I0703 23:04:39.397854   27032 main.go:141] libmachine: Successfully made call to close driver server
I0703 23:04:39.397870   27032 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 23:04:39.397888   27032 main.go:141] libmachine: Making call to close driver server
I0703 23:04:39.397894   27032 main.go:141] libmachine: (functional-188799) Calling .Close
I0703 23:04:39.398110   27032 main.go:141] libmachine: Successfully made call to close driver server
I0703 23:04:39.398125   27032 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 23:04:39.398177   27032 main.go:141] libmachine: (functional-188799) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-188799 image ls --format json --alsologtostderr:
[{"id":"e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e","registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"112194888"},{"id":"7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc","registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"63051080"},{"id":"ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f","repoDigests":["docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266","docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725e
cb893f3c8998a92d51a87465a886eb563e18d649383a8"],"repoTags":["docker.io/kindest/kindnetd:v20240513-cd2ac642"],"size":"65908273"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":["docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df","docker.io/library/nginx@sha256:db5e49f40979ce521f05f0bc9f513d0abacce47904e229f3a95c2e6d9b47f244"],"repoTags":["docker.io/library/nginx:latest"],"size":"191746190"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"8a1a86f467198a1127607c408b25351da1ec5e7530ff5da9054680105365acee","repoDigests":["localhost/minikube-local-cache-test@sha256:3de81e634c2882be021b4879e56523a27441b02a08a960
597ecd571dcd96ef7e"],"repoTags":["localhost/minikube-local-cache-test:functional-188799"],"size":"3330"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["re
gistry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-188799"],"size":"34114467"},{"id":"56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"117609954"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9
da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5
a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","repoDigests":["registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"85953433"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480
cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-188799 image ls --format json --alsologtostderr:
I0703 23:04:38.817545   26984 out.go:291] Setting OutFile to fd 1 ...
I0703 23:04:38.817801   26984 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:04:38.817811   26984 out.go:304] Setting ErrFile to fd 2...
I0703 23:04:38.817815   26984 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:04:38.817993   26984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
I0703 23:04:38.818542   26984 config.go:182] Loaded profile config "functional-188799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0703 23:04:38.818633   26984 config.go:182] Loaded profile config "functional-188799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0703 23:04:38.818989   26984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0703 23:04:38.819036   26984 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 23:04:38.834871   26984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
I0703 23:04:38.835360   26984 main.go:141] libmachine: () Calling .GetVersion
I0703 23:04:38.835965   26984 main.go:141] libmachine: Using API Version  1
I0703 23:04:38.835990   26984 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 23:04:38.836327   26984 main.go:141] libmachine: () Calling .GetMachineName
I0703 23:04:38.836504   26984 main.go:141] libmachine: (functional-188799) Calling .GetState
I0703 23:04:38.838695   26984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0703 23:04:38.838752   26984 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 23:04:38.854192   26984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46115
I0703 23:04:38.854657   26984 main.go:141] libmachine: () Calling .GetVersion
I0703 23:04:38.855144   26984 main.go:141] libmachine: Using API Version  1
I0703 23:04:38.855170   26984 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 23:04:38.855491   26984 main.go:141] libmachine: () Calling .GetMachineName
I0703 23:04:38.855661   26984 main.go:141] libmachine: (functional-188799) Calling .DriverName
I0703 23:04:38.855846   26984 ssh_runner.go:195] Run: systemctl --version
I0703 23:04:38.855867   26984 main.go:141] libmachine: (functional-188799) Calling .GetSSHHostname
I0703 23:04:38.858524   26984 main.go:141] libmachine: (functional-188799) DBG | domain functional-188799 has defined MAC address 52:54:00:48:e2:6e in network mk-functional-188799
I0703 23:04:38.858927   26984 main.go:141] libmachine: (functional-188799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:e2:6e", ip: ""} in network mk-functional-188799: {Iface:virbr1 ExpiryTime:2024-07-04 00:01:07 +0000 UTC Type:0 Mac:52:54:00:48:e2:6e Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:functional-188799 Clientid:01:52:54:00:48:e2:6e}
I0703 23:04:38.858961   26984 main.go:141] libmachine: (functional-188799) DBG | domain functional-188799 has defined IP address 192.168.39.208 and MAC address 52:54:00:48:e2:6e in network mk-functional-188799
I0703 23:04:38.859088   26984 main.go:141] libmachine: (functional-188799) Calling .GetSSHPort
I0703 23:04:38.859256   26984 main.go:141] libmachine: (functional-188799) Calling .GetSSHKeyPath
I0703 23:04:38.859421   26984 main.go:141] libmachine: (functional-188799) Calling .GetSSHUsername
I0703 23:04:38.859599   26984 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/functional-188799/id_rsa Username:docker}
I0703 23:04:38.954696   26984 ssh_runner.go:195] Run: sudo crictl images --output json
I0703 23:04:39.028298   26984 main.go:141] libmachine: Making call to close driver server
I0703 23:04:39.028319   26984 main.go:141] libmachine: (functional-188799) Calling .Close
I0703 23:04:39.028578   26984 main.go:141] libmachine: Successfully made call to close driver server
I0703 23:04:39.028595   26984 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 23:04:39.028603   26984 main.go:141] libmachine: Making call to close driver server
I0703 23:04:39.028610   26984 main.go:141] libmachine: (functional-188799) Calling .Close
I0703 23:04:39.028882   26984 main.go:141] libmachine: (functional-188799) DBG | Closing plugin on server side
I0703 23:04:39.028889   26984 main.go:141] libmachine: Successfully made call to close driver server
I0703 23:04:39.028902   26984 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
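
The JSON listing above is an array of objects with id, repoDigests, repoTags and size fields. The sketch below decodes it into a Go struct with exactly those fields; the struct is derived from the output shown here rather than from a published schema, and the binary path and profile are the ones from this log.

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
    ID          string   `json:"id"`
    RepoDigests []string `json:"repoDigests"`
    RepoTags    []string `json:"repoTags"`
    Size        string   `json:"size"`
}

func main() {
    out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-188799",
        "image", "ls", "--format", "json").Output()
    if err != nil {
        fmt.Println("image ls failed:", err)
        return
    }
    var images []image
    if err := json.Unmarshal(out, &images); err != nil {
        fmt.Println("decode failed:", err)
        return
    }
    for _, img := range images {
        fmt.Printf("%-15.15s %s\n", img.ID, img.RepoTags)
    }
}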

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-188799 image ls --format yaml --alsologtostderr:
- id: ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f
repoDigests:
- docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266
- docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8
repoTags:
- docker.io/kindest/kindnetd:v20240513-cd2ac642
size: "65908273"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 8a1a86f467198a1127607c408b25351da1ec5e7530ff5da9054680105365acee
repoDigests:
- localhost/minikube-local-cache-test@sha256:3de81e634c2882be021b4879e56523a27441b02a08a960597ecd571dcd96ef7e
repoTags:
- localhost/minikube-local-cache-test:functional-188799
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e
- registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "112194888"
- id: 53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772
repoDigests:
- registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961
- registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "85953433"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc
- registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "63051080"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816
- registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "117609954"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests:
- docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df
- docker.io/library/nginx@sha256:db5e49f40979ce521f05f0bc9f513d0abacce47904e229f3a95c2e6d9b47f244
repoTags:
- docker.io/library/nginx:latest
size: "191746190"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-188799
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-188799 image ls --format yaml --alsologtostderr:
I0703 23:04:38.540731   26932 out.go:291] Setting OutFile to fd 1 ...
I0703 23:04:38.540829   26932 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:04:38.540833   26932 out.go:304] Setting ErrFile to fd 2...
I0703 23:04:38.540838   26932 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:04:38.541023   26932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
I0703 23:04:38.541554   26932 config.go:182] Loaded profile config "functional-188799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0703 23:04:38.541647   26932 config.go:182] Loaded profile config "functional-188799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0703 23:04:38.542001   26932 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0703 23:04:38.542057   26932 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 23:04:38.556728   26932 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35495
I0703 23:04:38.557222   26932 main.go:141] libmachine: () Calling .GetVersion
I0703 23:04:38.557873   26932 main.go:141] libmachine: Using API Version  1
I0703 23:04:38.557897   26932 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 23:04:38.558274   26932 main.go:141] libmachine: () Calling .GetMachineName
I0703 23:04:38.558495   26932 main.go:141] libmachine: (functional-188799) Calling .GetState
I0703 23:04:38.560682   26932 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0703 23:04:38.560726   26932 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 23:04:38.576979   26932 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39961
I0703 23:04:38.577457   26932 main.go:141] libmachine: () Calling .GetVersion
I0703 23:04:38.578039   26932 main.go:141] libmachine: Using API Version  1
I0703 23:04:38.578061   26932 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 23:04:38.578470   26932 main.go:141] libmachine: () Calling .GetMachineName
I0703 23:04:38.578694   26932 main.go:141] libmachine: (functional-188799) Calling .DriverName
I0703 23:04:38.578922   26932 ssh_runner.go:195] Run: systemctl --version
I0703 23:04:38.578950   26932 main.go:141] libmachine: (functional-188799) Calling .GetSSHHostname
I0703 23:04:38.582036   26932 main.go:141] libmachine: (functional-188799) DBG | domain functional-188799 has defined MAC address 52:54:00:48:e2:6e in network mk-functional-188799
I0703 23:04:38.582517   26932 main.go:141] libmachine: (functional-188799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:e2:6e", ip: ""} in network mk-functional-188799: {Iface:virbr1 ExpiryTime:2024-07-04 00:01:07 +0000 UTC Type:0 Mac:52:54:00:48:e2:6e Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:functional-188799 Clientid:01:52:54:00:48:e2:6e}
I0703 23:04:38.582545   26932 main.go:141] libmachine: (functional-188799) DBG | domain functional-188799 has defined IP address 192.168.39.208 and MAC address 52:54:00:48:e2:6e in network mk-functional-188799
I0703 23:04:38.582699   26932 main.go:141] libmachine: (functional-188799) Calling .GetSSHPort
I0703 23:04:38.582858   26932 main.go:141] libmachine: (functional-188799) Calling .GetSSHKeyPath
I0703 23:04:38.583012   26932 main.go:141] libmachine: (functional-188799) Calling .GetSSHUsername
I0703 23:04:38.583156   26932 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/functional-188799/id_rsa Username:docker}
I0703 23:04:38.699643   26932 ssh_runner.go:195] Run: sudo crictl images --output json
I0703 23:04:38.764164   26932 main.go:141] libmachine: Making call to close driver server
I0703 23:04:38.764181   26932 main.go:141] libmachine: (functional-188799) Calling .Close
I0703 23:04:38.764463   26932 main.go:141] libmachine: Successfully made call to close driver server
I0703 23:04:38.764485   26932 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 23:04:38.764492   26932 main.go:141] libmachine: (functional-188799) DBG | Closing plugin on server side
I0703 23:04:38.764501   26932 main.go:141] libmachine: Making call to close driver server
I0703 23:04:38.764510   26932 main.go:141] libmachine: (functional-188799) Calling .Close
I0703 23:04:38.764714   26932 main.go:141] libmachine: Successfully made call to close driver server
I0703 23:04:38.764729   26932 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-188799 ssh pgrep buildkitd: exit status 1 (238.007038ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 image build -t localhost/my-image:functional-188799 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-188799 image build -t localhost/my-image:functional-188799 testdata/build --alsologtostderr: (5.466423499s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-188799 image build -t localhost/my-image:functional-188799 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c830b32780b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-188799
--> 6065fe72a54
Successfully tagged localhost/my-image:functional-188799
6065fe72a54d45efaf9cfd7064828528d4a843ff969655815ffb7ecf210d73c3
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-188799 image build -t localhost/my-image:functional-188799 testdata/build --alsologtostderr:
I0703 23:04:38.924121   27008 out.go:291] Setting OutFile to fd 1 ...
I0703 23:04:38.924285   27008 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:04:38.924297   27008 out.go:304] Setting ErrFile to fd 2...
I0703 23:04:38.924304   27008 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:04:38.924526   27008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
I0703 23:04:38.925148   27008 config.go:182] Loaded profile config "functional-188799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0703 23:04:38.925686   27008 config.go:182] Loaded profile config "functional-188799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0703 23:04:38.926049   27008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0703 23:04:38.926097   27008 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 23:04:38.941292   27008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35093
I0703 23:04:38.941775   27008 main.go:141] libmachine: () Calling .GetVersion
I0703 23:04:38.942323   27008 main.go:141] libmachine: Using API Version  1
I0703 23:04:38.942366   27008 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 23:04:38.942813   27008 main.go:141] libmachine: () Calling .GetMachineName
I0703 23:04:38.943005   27008 main.go:141] libmachine: (functional-188799) Calling .GetState
I0703 23:04:38.944952   27008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0703 23:04:38.944988   27008 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 23:04:38.959468   27008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39489
I0703 23:04:38.959936   27008 main.go:141] libmachine: () Calling .GetVersion
I0703 23:04:38.960467   27008 main.go:141] libmachine: Using API Version  1
I0703 23:04:38.960489   27008 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 23:04:38.960772   27008 main.go:141] libmachine: () Calling .GetMachineName
I0703 23:04:38.960988   27008 main.go:141] libmachine: (functional-188799) Calling .DriverName
I0703 23:04:38.961180   27008 ssh_runner.go:195] Run: systemctl --version
I0703 23:04:38.961199   27008 main.go:141] libmachine: (functional-188799) Calling .GetSSHHostname
I0703 23:04:38.964330   27008 main.go:141] libmachine: (functional-188799) DBG | domain functional-188799 has defined MAC address 52:54:00:48:e2:6e in network mk-functional-188799
I0703 23:04:38.964837   27008 main.go:141] libmachine: (functional-188799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:e2:6e", ip: ""} in network mk-functional-188799: {Iface:virbr1 ExpiryTime:2024-07-04 00:01:07 +0000 UTC Type:0 Mac:52:54:00:48:e2:6e Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:functional-188799 Clientid:01:52:54:00:48:e2:6e}
I0703 23:04:38.964883   27008 main.go:141] libmachine: (functional-188799) DBG | domain functional-188799 has defined IP address 192.168.39.208 and MAC address 52:54:00:48:e2:6e in network mk-functional-188799
I0703 23:04:38.965033   27008 main.go:141] libmachine: (functional-188799) Calling .GetSSHPort
I0703 23:04:38.965216   27008 main.go:141] libmachine: (functional-188799) Calling .GetSSHKeyPath
I0703 23:04:38.965406   27008 main.go:141] libmachine: (functional-188799) Calling .GetSSHUsername
I0703 23:04:38.965579   27008 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/functional-188799/id_rsa Username:docker}
I0703 23:04:39.102406   27008 build_images.go:161] Building image from path: /tmp/build.1589668756.tar
I0703 23:04:39.102485   27008 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0703 23:04:39.126268   27008 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1589668756.tar
I0703 23:04:39.132707   27008 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1589668756.tar: stat -c "%s %y" /var/lib/minikube/build/build.1589668756.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1589668756.tar': No such file or directory
I0703 23:04:39.132766   27008 ssh_runner.go:362] scp /tmp/build.1589668756.tar --> /var/lib/minikube/build/build.1589668756.tar (3072 bytes)
I0703 23:04:39.181446   27008 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1589668756
I0703 23:04:39.206293   27008 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1589668756 -xf /var/lib/minikube/build/build.1589668756.tar
I0703 23:04:39.250267   27008 crio.go:315] Building image: /var/lib/minikube/build/build.1589668756
I0703 23:04:39.250347   27008 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-188799 /var/lib/minikube/build/build.1589668756 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0703 23:04:44.324819   27008 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-188799 /var/lib/minikube/build/build.1589668756 --cgroup-manager=cgroupfs: (5.074443374s)
I0703 23:04:44.324908   27008 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1589668756
I0703 23:04:44.335737   27008 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1589668756.tar
I0703 23:04:44.345675   27008 build_images.go:217] Built localhost/my-image:functional-188799 from /tmp/build.1589668756.tar
I0703 23:04:44.345711   27008 build_images.go:133] succeeded building to: functional-188799
I0703 23:04:44.345716   27008 build_images.go:134] failed building to: 
I0703 23:04:44.345742   27008 main.go:141] libmachine: Making call to close driver server
I0703 23:04:44.345754   27008 main.go:141] libmachine: (functional-188799) Calling .Close
I0703 23:04:44.346102   27008 main.go:141] libmachine: (functional-188799) DBG | Closing plugin on server side
I0703 23:04:44.346169   27008 main.go:141] libmachine: Successfully made call to close driver server
I0703 23:04:44.346189   27008 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 23:04:44.346202   27008 main.go:141] libmachine: Making call to close driver server
I0703 23:04:44.346213   27008 main.go:141] libmachine: (functional-188799) Calling .Close
I0703 23:04:44.346500   27008 main.go:141] libmachine: Successfully made call to close driver server
I0703 23:04:44.346502   27008 main.go:141] libmachine: (functional-188799) DBG | Closing plugin on server side
I0703 23:04:44.346520   27008 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 image ls
2024/07/03 23:04:45 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.92s)
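For reference, the image-build flow recorded above (the tarball is copied to the node, unpacked under /var/lib/minikube/build, and podman builds into the CRI-O image store) can be reproduced by hand on the node. A minimal sketch, assuming the same generated paths and tag from this run; these commands are taken from the log above, not an authoritative procedure:

	# unpack the build context on the node (paths taken from the log above)
	sudo mkdir -p /var/lib/minikube/build/build.1589668756
	sudo tar -C /var/lib/minikube/build/build.1589668756 -xf /var/lib/minikube/build/build.1589668756.tar
	# build into the CRI-O image store, matching the test's podman invocation
	sudo podman build -t localhost/my-image:functional-188799 \
	  /var/lib/minikube/build/build.1589668756 --cgroup-manager=cgroupfs
	# clean up, as the test does afterwards
	sudo rm -rf /var/lib/minikube/build/build.1589668756
	sudo rm -f /var/lib/minikube/build/build.1589668756.tar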

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.446127818s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-188799
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.47s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.74s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 image load --daemon gcr.io/google-containers/addon-resizer:functional-188799 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-188799 image load --daemon gcr.io/google-containers/addon-resizer:functional-188799 --alsologtostderr: (4.113555303s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.78s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 image load --daemon gcr.io/google-containers/addon-resizer:functional-188799 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-188799 image load --daemon gcr.io/google-containers/addon-resizer:functional-188799 --alsologtostderr: (5.029024697s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.37s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.420356246s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-188799
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 image load --daemon gcr.io/google-containers/addon-resizer:functional-188799 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-188799 image load --daemon gcr.io/google-containers/addon-resizer:functional-188799 --alsologtostderr: (4.678186349s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 image save gcr.io/google-containers/addon-resizer:functional-188799 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-188799 image save gcr.io/google-containers/addon-resizer:functional-188799 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.757758547s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.76s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 image rm gcr.io/google-containers/addon-resizer:functional-188799 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-188799 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.844009385s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-188799
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-188799 image save --daemon gcr.io/google-containers/addon-resizer:functional-188799 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-188799 image save --daemon gcr.io/google-containers/addon-resizer:functional-188799 --alsologtostderr: (2.001167027s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-188799
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.04s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-188799
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-188799
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-188799
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (214.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-856893 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0703 23:06:17.050768   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0703 23:06:44.738119   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-856893 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m33.719719421s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (214.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-856893 -- rollout status deployment/busybox: (5.433009256s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- exec busybox-fc5497c4f-bt646 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- exec busybox-fc5497c4f-hh5rx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- exec busybox-fc5497c4f-n7rvj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- exec busybox-fc5497c4f-bt646 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- exec busybox-fc5497c4f-hh5rx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- exec busybox-fc5497c4f-n7rvj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- exec busybox-fc5497c4f-bt646 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- exec busybox-fc5497c4f-hh5rx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- exec busybox-fc5497c4f-n7rvj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- exec busybox-fc5497c4f-bt646 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- exec busybox-fc5497c4f-bt646 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- exec busybox-fc5497c4f-hh5rx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- exec busybox-fc5497c4f-hh5rx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- exec busybox-fc5497c4f-n7rvj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-856893 -- exec busybox-fc5497c4f-n7rvj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.23s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (49.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-856893 -v=7 --alsologtostderr
E0703 23:08:57.357354   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
E0703 23:08:57.362650   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
E0703 23:08:57.373000   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
E0703 23:08:57.393324   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
E0703 23:08:57.433637   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
E0703 23:08:57.513982   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
E0703 23:08:57.674413   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
E0703 23:08:57.995016   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
E0703 23:08:58.636070   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
E0703 23:08:59.916970   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
E0703 23:09:02.477310   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
E0703 23:09:07.598309   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
E0703 23:09:17.838715   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-856893 -v=7 --alsologtostderr: (48.396326335s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.26s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-856893 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp testdata/cp-test.txt ha-856893:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp ha-856893:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile600531838/001/cp-test_ha-856893.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp ha-856893:/home/docker/cp-test.txt ha-856893-m02:/home/docker/cp-test_ha-856893_ha-856893-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m02 "sudo cat /home/docker/cp-test_ha-856893_ha-856893-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp ha-856893:/home/docker/cp-test.txt ha-856893-m03:/home/docker/cp-test_ha-856893_ha-856893-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m03 "sudo cat /home/docker/cp-test_ha-856893_ha-856893-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp ha-856893:/home/docker/cp-test.txt ha-856893-m04:/home/docker/cp-test_ha-856893_ha-856893-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m04 "sudo cat /home/docker/cp-test_ha-856893_ha-856893-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp testdata/cp-test.txt ha-856893-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp ha-856893-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile600531838/001/cp-test_ha-856893-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp ha-856893-m02:/home/docker/cp-test.txt ha-856893:/home/docker/cp-test_ha-856893-m02_ha-856893.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893 "sudo cat /home/docker/cp-test_ha-856893-m02_ha-856893.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp ha-856893-m02:/home/docker/cp-test.txt ha-856893-m03:/home/docker/cp-test_ha-856893-m02_ha-856893-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m03 "sudo cat /home/docker/cp-test_ha-856893-m02_ha-856893-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp ha-856893-m02:/home/docker/cp-test.txt ha-856893-m04:/home/docker/cp-test_ha-856893-m02_ha-856893-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m04 "sudo cat /home/docker/cp-test_ha-856893-m02_ha-856893-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp testdata/cp-test.txt ha-856893-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp ha-856893-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile600531838/001/cp-test_ha-856893-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp ha-856893-m03:/home/docker/cp-test.txt ha-856893:/home/docker/cp-test_ha-856893-m03_ha-856893.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893 "sudo cat /home/docker/cp-test_ha-856893-m03_ha-856893.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp ha-856893-m03:/home/docker/cp-test.txt ha-856893-m02:/home/docker/cp-test_ha-856893-m03_ha-856893-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m02 "sudo cat /home/docker/cp-test_ha-856893-m03_ha-856893-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp ha-856893-m03:/home/docker/cp-test.txt ha-856893-m04:/home/docker/cp-test_ha-856893-m03_ha-856893-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m04 "sudo cat /home/docker/cp-test_ha-856893-m03_ha-856893-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp testdata/cp-test.txt ha-856893-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile600531838/001/cp-test_ha-856893-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt ha-856893:/home/docker/cp-test_ha-856893-m04_ha-856893.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893 "sudo cat /home/docker/cp-test_ha-856893-m04_ha-856893.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt ha-856893-m02:/home/docker/cp-test_ha-856893-m04_ha-856893-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m02 "sudo cat /home/docker/cp-test_ha-856893-m04_ha-856893-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 cp ha-856893-m04:/home/docker/cp-test.txt ha-856893-m03:/home/docker/cp-test_ha-856893-m04_ha-856893-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 ssh -n ha-856893-m03 "sudo cat /home/docker/cp-test_ha-856893-m04_ha-856893-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.057470449s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-856893 node delete m03 -v=7 --alsologtostderr: (16.485894024s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.23s)
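The go-template used above flattens each node's Ready condition to a bare True/False line. As an illustration only (not part of the test), an equivalent readiness check could use kubectl's jsonpath output instead:

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'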

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (349.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-856893 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0703 23:21:17.052181   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0703 23:23:57.357285   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
E0703 23:25:20.408316   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
E0703 23:26:17.046681   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-856893 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m49.127899997s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (349.92s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (72.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-856893 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-856893 --control-plane -v=7 --alsologtostderr: (1m11.278774701s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-856893 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                    
x
+
TestJSONOutput/start/Command (58.58s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-305032 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0703 23:28:57.357338   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-305032 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (58.58417641s)
--- PASS: TestJSONOutput/start/Command (58.58s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-305032 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-305032 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.38s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-305032 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-305032 --output=json --user=testUser: (7.383686974s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-944087 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-944087 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.088774ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4b55bd05-7e4e-407a-9a5a-ab4fe0700382","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-944087] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ceae10e9-6c6f-4a51-9393-c6a809f01a9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18998"}}
	{"specversion":"1.0","id":"8773ef1f-954d-46d0-bfbd-315f17845d1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c3c0b9b3-9103-4933-bcc2-cdee56aa8956","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig"}}
	{"specversion":"1.0","id":"350ba763-7711-4a62-b1c0-04e04fc1ae19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube"}}
	{"specversion":"1.0","id":"a0d9ea0f-58b5-4891-9da9-c9b4c924f5b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d085f10d-343a-4451-aada-9286dbe9b942","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"219a0e71-8cde-462e-ba90-254d3ba9fa92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-944087" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-944087
--- PASS: TestErrorJSONOutput (0.19s)
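The stdout captured above is a stream of CloudEvents-style JSON objects, one per line, with the failure reported as an io.k8s.sigs.minikube.error event carrying the exit code, error name, and message. As an illustration only (not part of the test), the error event could be pulled out of such a stream with a jq filter:

	out/minikube-linux-amd64 start -p json-output-error-944087 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'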

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (98.61s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-316354 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-316354 --driver=kvm2  --container-runtime=crio: (49.920329993s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-319153 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-319153 --driver=kvm2  --container-runtime=crio: (45.682792712s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-316354
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-319153
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-319153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-319153
helpers_test.go:175: Cleaning up "first-316354" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-316354
--- PASS: TestMinikubeProfile (98.61s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (24.44s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-119372 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0703 23:31:17.052232   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-119372 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.440679962s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.44s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-119372 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-119372 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (28.75s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-136483 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-136483 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.748782779s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.75s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-136483 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-136483 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-119372 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-136483 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-136483 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-136483
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-136483: (1.276203709s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.06s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-136483
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-136483: (22.062751325s)
--- PASS: TestMountStart/serial/RestartStopped (23.06s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-136483 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-136483 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (97.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-184661 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0703 23:33:57.358055   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-184661 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m36.733539996s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (97.15s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-184661 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-184661 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-184661 -- rollout status deployment/busybox: (4.06793574s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-184661 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-184661 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-184661 -- exec busybox-fc5497c4f-27skp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-184661 -- exec busybox-fc5497c4f-vxz7l -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-184661 -- exec busybox-fc5497c4f-27skp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-184661 -- exec busybox-fc5497c4f-vxz7l -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-184661 -- exec busybox-fc5497c4f-27skp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-184661 -- exec busybox-fc5497c4f-vxz7l -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.57s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-184661 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-184661 -- exec busybox-fc5497c4f-27skp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-184661 -- exec busybox-fc5497c4f-27skp -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-184661 -- exec busybox-fc5497c4f-vxz7l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-184661 -- exec busybox-fc5497c4f-vxz7l -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)
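Note: the exec steps above resolve host.minikube.internal inside each busybox pod and then ping the resulting host IP. A minimal manual reproduction, assuming the multinode-184661 context, the busybox-fc5497c4f-27skp pod, and the 192.168.39.1 host address seen in this run:

$ kubectl --context multinode-184661 exec busybox-fc5497c4f-27skp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
$ kubectl --context multinode-184661 exec busybox-fc5497c4f-27skp -- sh -c "ping -c 1 192.168.39.1"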

                                                
                                    
x
+
TestMultiNode/serial/AddNode (42.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-184661 -v 3 --alsologtostderr
E0703 23:34:20.099251   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-184661 -v 3 --alsologtostderr: (41.787812006s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.36s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-184661 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 cp testdata/cp-test.txt multinode-184661:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 ssh -n multinode-184661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 cp multinode-184661:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2216583350/001/cp-test_multinode-184661.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 ssh -n multinode-184661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 cp multinode-184661:/home/docker/cp-test.txt multinode-184661-m02:/home/docker/cp-test_multinode-184661_multinode-184661-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 ssh -n multinode-184661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 ssh -n multinode-184661-m02 "sudo cat /home/docker/cp-test_multinode-184661_multinode-184661-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 cp multinode-184661:/home/docker/cp-test.txt multinode-184661-m03:/home/docker/cp-test_multinode-184661_multinode-184661-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 ssh -n multinode-184661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 ssh -n multinode-184661-m03 "sudo cat /home/docker/cp-test_multinode-184661_multinode-184661-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 cp testdata/cp-test.txt multinode-184661-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 ssh -n multinode-184661-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 cp multinode-184661-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2216583350/001/cp-test_multinode-184661-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 ssh -n multinode-184661-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 cp multinode-184661-m02:/home/docker/cp-test.txt multinode-184661:/home/docker/cp-test_multinode-184661-m02_multinode-184661.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 ssh -n multinode-184661-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 ssh -n multinode-184661 "sudo cat /home/docker/cp-test_multinode-184661-m02_multinode-184661.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 cp multinode-184661-m02:/home/docker/cp-test.txt multinode-184661-m03:/home/docker/cp-test_multinode-184661-m02_multinode-184661-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 ssh -n multinode-184661-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 ssh -n multinode-184661-m03 "sudo cat /home/docker/cp-test_multinode-184661-m02_multinode-184661-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 cp testdata/cp-test.txt multinode-184661-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 ssh -n multinode-184661-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 cp multinode-184661-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2216583350/001/cp-test_multinode-184661-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 ssh -n multinode-184661-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 cp multinode-184661-m03:/home/docker/cp-test.txt multinode-184661:/home/docker/cp-test_multinode-184661-m03_multinode-184661.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 ssh -n multinode-184661-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 ssh -n multinode-184661 "sudo cat /home/docker/cp-test_multinode-184661-m03_multinode-184661.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 cp multinode-184661-m03:/home/docker/cp-test.txt multinode-184661-m02:/home/docker/cp-test_multinode-184661-m03_multinode-184661-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 ssh -n multinode-184661-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 ssh -n multinode-184661-m02 "sudo cat /home/docker/cp-test_multinode-184661-m03_multinode-184661-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.18s)
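The steps above exercise minikube cp in every direction (host to node, node to host, and node to node) and verify each copy over ssh -n. One round trip, assuming the multinode-184661 profile from this run, looks like:

$ out/minikube-linux-amd64 -p multinode-184661 cp testdata/cp-test.txt multinode-184661-m02:/home/docker/cp-test.txt
$ out/minikube-linux-amd64 -p multinode-184661 ssh -n multinode-184661-m02 "sudo cat /home/docker/cp-test.txt"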

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-184661 node stop m03: (1.428942339s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-184661 status: exit status 7 (420.169225ms)

                                                
                                                
-- stdout --
	multinode-184661
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-184661-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-184661-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-184661 status --alsologtostderr: exit status 7 (434.170362ms)

                                                
                                                
-- stdout --
	multinode-184661
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-184661-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-184661-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 23:35:01.464531   44279 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:35:01.464802   44279 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:35:01.464812   44279 out.go:304] Setting ErrFile to fd 2...
	I0703 23:35:01.464816   44279 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:35:01.465056   44279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 23:35:01.465266   44279 out.go:298] Setting JSON to false
	I0703 23:35:01.465292   44279 mustload.go:65] Loading cluster: multinode-184661
	I0703 23:35:01.465396   44279 notify.go:220] Checking for updates...
	I0703 23:35:01.465732   44279 config.go:182] Loaded profile config "multinode-184661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:35:01.465753   44279 status.go:174] checking status of multinode-184661 ...
	I0703 23:35:01.466139   44279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:35:01.466197   44279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:35:01.481576   44279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37191
	I0703 23:35:01.482072   44279 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:35:01.482609   44279 main.go:141] libmachine: Using API Version  1
	I0703 23:35:01.482629   44279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:35:01.482974   44279 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:35:01.483164   44279 main.go:141] libmachine: (multinode-184661) Calling .GetState
	I0703 23:35:01.484754   44279 status.go:364] multinode-184661 host status = "Running" (err=<nil>)
	I0703 23:35:01.484771   44279 host.go:66] Checking if "multinode-184661" exists ...
	I0703 23:35:01.485063   44279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:35:01.485107   44279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:35:01.500267   44279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42373
	I0703 23:35:01.500744   44279 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:35:01.501233   44279 main.go:141] libmachine: Using API Version  1
	I0703 23:35:01.501263   44279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:35:01.501597   44279 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:35:01.501775   44279 main.go:141] libmachine: (multinode-184661) Calling .GetIP
	I0703 23:35:01.504524   44279 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:35:01.504900   44279 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:35:01.504934   44279 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:35:01.505079   44279 host.go:66] Checking if "multinode-184661" exists ...
	I0703 23:35:01.505373   44279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:35:01.505423   44279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:35:01.520601   44279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41155
	I0703 23:35:01.521049   44279 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:35:01.521567   44279 main.go:141] libmachine: Using API Version  1
	I0703 23:35:01.521598   44279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:35:01.521910   44279 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:35:01.522074   44279 main.go:141] libmachine: (multinode-184661) Calling .DriverName
	I0703 23:35:01.522263   44279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 23:35:01.522305   44279 main.go:141] libmachine: (multinode-184661) Calling .GetSSHHostname
	I0703 23:35:01.525116   44279 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:35:01.525626   44279 main.go:141] libmachine: (multinode-184661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:51:89", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:32:40 +0000 UTC Type:0 Mac:52:54:00:41:51:89 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-184661 Clientid:01:52:54:00:41:51:89}
	I0703 23:35:01.525657   44279 main.go:141] libmachine: (multinode-184661) DBG | domain multinode-184661 has defined IP address 192.168.39.57 and MAC address 52:54:00:41:51:89 in network mk-multinode-184661
	I0703 23:35:01.525818   44279 main.go:141] libmachine: (multinode-184661) Calling .GetSSHPort
	I0703 23:35:01.526037   44279 main.go:141] libmachine: (multinode-184661) Calling .GetSSHKeyPath
	I0703 23:35:01.526182   44279 main.go:141] libmachine: (multinode-184661) Calling .GetSSHUsername
	I0703 23:35:01.526298   44279 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/multinode-184661/id_rsa Username:docker}
	I0703 23:35:01.608179   44279 ssh_runner.go:195] Run: systemctl --version
	I0703 23:35:01.615218   44279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:35:01.634801   44279 kubeconfig.go:125] found "multinode-184661" server: "https://192.168.39.57:8443"
	I0703 23:35:01.634842   44279 api_server.go:166] Checking apiserver status ...
	I0703 23:35:01.634885   44279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:35:01.652837   44279 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1122/cgroup
	W0703 23:35:01.666111   44279 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1122/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0703 23:35:01.666169   44279 ssh_runner.go:195] Run: ls
	I0703 23:35:01.672394   44279 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0703 23:35:01.676715   44279 api_server.go:279] https://192.168.39.57:8443/healthz returned 200:
	ok
	I0703 23:35:01.676744   44279 status.go:456] multinode-184661 apiserver status = Running (err=<nil>)
	I0703 23:35:01.676755   44279 status.go:176] multinode-184661 status: &{Name:multinode-184661 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 23:35:01.676777   44279 status.go:174] checking status of multinode-184661-m02 ...
	I0703 23:35:01.677083   44279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:35:01.677128   44279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:35:01.692646   44279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39943
	I0703 23:35:01.693045   44279 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:35:01.693535   44279 main.go:141] libmachine: Using API Version  1
	I0703 23:35:01.693559   44279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:35:01.693897   44279 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:35:01.694087   44279 main.go:141] libmachine: (multinode-184661-m02) Calling .GetState
	I0703 23:35:01.695588   44279 status.go:364] multinode-184661-m02 host status = "Running" (err=<nil>)
	I0703 23:35:01.695603   44279 host.go:66] Checking if "multinode-184661-m02" exists ...
	I0703 23:35:01.695938   44279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:35:01.696020   44279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:35:01.711677   44279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38167
	I0703 23:35:01.712141   44279 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:35:01.712644   44279 main.go:141] libmachine: Using API Version  1
	I0703 23:35:01.712664   44279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:35:01.712959   44279 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:35:01.713217   44279 main.go:141] libmachine: (multinode-184661-m02) Calling .GetIP
	I0703 23:35:01.715935   44279 main.go:141] libmachine: (multinode-184661-m02) DBG | domain multinode-184661-m02 has defined MAC address 52:54:00:ae:1c:a1 in network mk-multinode-184661
	I0703 23:35:01.716339   44279 main.go:141] libmachine: (multinode-184661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:1c:a1", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:33:39 +0000 UTC Type:0 Mac:52:54:00:ae:1c:a1 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-184661-m02 Clientid:01:52:54:00:ae:1c:a1}
	I0703 23:35:01.716366   44279 main.go:141] libmachine: (multinode-184661-m02) DBG | domain multinode-184661-m02 has defined IP address 192.168.39.115 and MAC address 52:54:00:ae:1c:a1 in network mk-multinode-184661
	I0703 23:35:01.716553   44279 host.go:66] Checking if "multinode-184661-m02" exists ...
	I0703 23:35:01.716986   44279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:35:01.717030   44279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:35:01.733586   44279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39445
	I0703 23:35:01.734009   44279 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:35:01.734526   44279 main.go:141] libmachine: Using API Version  1
	I0703 23:35:01.734546   44279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:35:01.734831   44279 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:35:01.735011   44279 main.go:141] libmachine: (multinode-184661-m02) Calling .DriverName
	I0703 23:35:01.735184   44279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 23:35:01.735206   44279 main.go:141] libmachine: (multinode-184661-m02) Calling .GetSSHHostname
	I0703 23:35:01.737988   44279 main.go:141] libmachine: (multinode-184661-m02) DBG | domain multinode-184661-m02 has defined MAC address 52:54:00:ae:1c:a1 in network mk-multinode-184661
	I0703 23:35:01.738380   44279 main.go:141] libmachine: (multinode-184661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:1c:a1", ip: ""} in network mk-multinode-184661: {Iface:virbr1 ExpiryTime:2024-07-04 00:33:39 +0000 UTC Type:0 Mac:52:54:00:ae:1c:a1 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-184661-m02 Clientid:01:52:54:00:ae:1c:a1}
	I0703 23:35:01.738417   44279 main.go:141] libmachine: (multinode-184661-m02) DBG | domain multinode-184661-m02 has defined IP address 192.168.39.115 and MAC address 52:54:00:ae:1c:a1 in network mk-multinode-184661
	I0703 23:35:01.738590   44279 main.go:141] libmachine: (multinode-184661-m02) Calling .GetSSHPort
	I0703 23:35:01.738766   44279 main.go:141] libmachine: (multinode-184661-m02) Calling .GetSSHKeyPath
	I0703 23:35:01.738927   44279 main.go:141] libmachine: (multinode-184661-m02) Calling .GetSSHUsername
	I0703 23:35:01.739060   44279 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9396/.minikube/machines/multinode-184661-m02/id_rsa Username:docker}
	I0703 23:35:01.819430   44279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:35:01.834991   44279 status.go:176] multinode-184661-m02 status: &{Name:multinode-184661-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0703 23:35:01.835023   44279 status.go:174] checking status of multinode-184661-m03 ...
	I0703 23:35:01.835322   44279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0703 23:35:01.835361   44279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:35:01.851581   44279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41681
	I0703 23:35:01.851993   44279 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:35:01.852454   44279 main.go:141] libmachine: Using API Version  1
	I0703 23:35:01.852472   44279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:35:01.852798   44279 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:35:01.853039   44279 main.go:141] libmachine: (multinode-184661-m03) Calling .GetState
	I0703 23:35:01.854763   44279 status.go:364] multinode-184661-m03 host status = "Stopped" (err=<nil>)
	I0703 23:35:01.854781   44279 status.go:377] host is not running, skipping remaining checks
	I0703 23:35:01.854787   44279 status.go:176] multinode-184661-m03 status: &{Name:multinode-184661-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
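The non-zero exits above are expected: minikube status appears to return exit status 7 whenever any node in the profile is stopped, which is what the test asserts after stopping m03. A minimal sketch of the same check, assuming the multinode-184661 profile:

$ out/minikube-linux-amd64 -p multinode-184661 node stop m03
$ out/minikube-linux-amd64 -p multinode-184661 status
# exit status 7 indicates at least one stopped node; 0 means every node is running
$ echo $?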

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (28.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-184661 node start m03 -v=7 --alsologtostderr: (27.990090425s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.62s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-184661 node delete m03: (1.603704058s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.13s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (171.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-184661 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0703 23:43:57.357979   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-184661 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m51.431527246s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-184661 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (171.97s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (47.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-184661
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-184661-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-184661-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (63.030238ms)

                                                
                                                
-- stdout --
	* [multinode-184661-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18998
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-184661-m02' is duplicated with machine name 'multinode-184661-m02' in profile 'multinode-184661'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-184661-m03 --driver=kvm2  --container-runtime=crio
E0703 23:46:17.052225   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-184661-m03 --driver=kvm2  --container-runtime=crio: (46.56028468s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-184661
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-184661: exit status 80 (214.453713ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-184661 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-184661-m03 already exists in multinode-184661-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-184661-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.86s)
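Both non-zero exits above are the intended outcome: profile names must be unique across existing profiles and their machine names, and node add appears to refuse a node whose generated name (multinode-184661-m03 here) collides with an existing profile. A minimal sketch for picking a safe name before starting a new profile, with my-new-cluster as a hypothetical placeholder:

$ out/minikube-linux-amd64 profile list
$ out/minikube-linux-amd64 node list -p multinode-184661
# choose a name that does not appear in either listing
$ out/minikube-linux-amd64 start -p my-new-cluster --driver=kvm2 --container-runtime=crio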

                                                
                                    
x
+
TestScheduledStopUnix (114.36s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-854158 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-854158 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.729879972s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-854158 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-854158 -n scheduled-stop-854158
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-854158 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0703 23:53:01.843302   16574 retry.go:31] will retry after 125.675µs: open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/scheduled-stop-854158/pid: no such file or directory
I0703 23:53:01.844459   16574 retry.go:31] will retry after 157.929µs: open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/scheduled-stop-854158/pid: no such file or directory
I0703 23:53:01.845622   16574 retry.go:31] will retry after 149.982µs: open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/scheduled-stop-854158/pid: no such file or directory
I0703 23:53:01.846790   16574 retry.go:31] will retry after 392.144µs: open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/scheduled-stop-854158/pid: no such file or directory
I0703 23:53:01.847935   16574 retry.go:31] will retry after 308.247µs: open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/scheduled-stop-854158/pid: no such file or directory
I0703 23:53:01.849102   16574 retry.go:31] will retry after 891.771µs: open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/scheduled-stop-854158/pid: no such file or directory
I0703 23:53:01.850236   16574 retry.go:31] will retry after 1.62567ms: open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/scheduled-stop-854158/pid: no such file or directory
I0703 23:53:01.852449   16574 retry.go:31] will retry after 1.333708ms: open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/scheduled-stop-854158/pid: no such file or directory
I0703 23:53:01.854648   16574 retry.go:31] will retry after 2.781121ms: open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/scheduled-stop-854158/pid: no such file or directory
I0703 23:53:01.857851   16574 retry.go:31] will retry after 5.17623ms: open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/scheduled-stop-854158/pid: no such file or directory
I0703 23:53:01.864108   16574 retry.go:31] will retry after 6.153188ms: open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/scheduled-stop-854158/pid: no such file or directory
I0703 23:53:01.871366   16574 retry.go:31] will retry after 7.025174ms: open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/scheduled-stop-854158/pid: no such file or directory
I0703 23:53:01.878572   16574 retry.go:31] will retry after 15.848073ms: open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/scheduled-stop-854158/pid: no such file or directory
I0703 23:53:01.894804   16574 retry.go:31] will retry after 19.845447ms: open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/scheduled-stop-854158/pid: no such file or directory
I0703 23:53:01.915085   16574 retry.go:31] will retry after 37.811159ms: open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/scheduled-stop-854158/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-854158 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-854158 -n scheduled-stop-854158
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-854158
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-854158 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0703 23:53:57.357959   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-854158
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-854158: exit status 7 (63.966715ms)

                                                
                                                
-- stdout --
	scheduled-stop-854158
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-854158 -n scheduled-stop-854158
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-854158 -n scheduled-stop-854158: exit status 7 (63.136316ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-854158" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-854158
--- PASS: TestScheduledStopUnix (114.36s)
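The flow above schedules a stop, cancels it, reschedules with a 15s delay, and then confirms the host reports Stopped; the final exit status 7 is flagged by the helper as "may be ok" because it simply reflects a stopped profile. A minimal manual sketch of the same flow, assuming the scheduled-stop-854158 profile:

$ out/minikube-linux-amd64 stop -p scheduled-stop-854158 --schedule 5m
$ out/minikube-linux-amd64 stop -p scheduled-stop-854158 --cancel-scheduled
$ out/minikube-linux-amd64 stop -p scheduled-stop-854158 --schedule 15s
# after roughly 15 seconds the host should report Stopped
$ out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-854158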

                                                
                                    
x
+
TestRunningBinaryUpgrade (221.36s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4032811529 start -p running-upgrade-594985 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4032811529 start -p running-upgrade-594985 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m5.863916988s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-594985 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-594985 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m31.552268048s)
helpers_test.go:175: Cleaning up "running-upgrade-594985" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-594985
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-594985: (1.198939269s)
--- PASS: TestRunningBinaryUpgrade (221.36s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-556519 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-556519 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (73.909373ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-556519] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18998
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
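The MK_USAGE exit above confirms that --no-kubernetes and --kubernetes-version are mutually exclusive; the error text itself points at clearing any globally pinned version first. A minimal sketch, assuming the NoKubernetes-556519 profile used in this run:

# clear a global version pin, then start the profile without Kubernetes
$ out/minikube-linux-amd64 config unset kubernetes-version
$ out/minikube-linux-amd64 start -p NoKubernetes-556519 --no-kubernetes --driver=kvm2 --container-runtime=crio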

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (96.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-556519 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-556519 --driver=kvm2  --container-runtime=crio: (1m36.379830597s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-556519 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.63s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (9.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-556519 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-556519 --no-kubernetes --driver=kvm2  --container-runtime=crio: (8.221497082s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-556519 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-556519 status -o json: exit status 2 (252.216769ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-556519","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-556519
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-556519: (1.379233782s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.85s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (27.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-556519 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-556519 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.12213227s)
--- PASS: TestNoKubernetes/serial/Start (27.12s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
E0703 23:56:17.047031   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
--- PASS: TestStoppedBinaryUpgrade/Setup (2.59s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (122.61s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1791734053 start -p stopped-upgrade-283274 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1791734053 start -p stopped-upgrade-283274 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m2.452387696s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1791734053 -p stopped-upgrade-283274 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1791734053 -p stopped-upgrade-283274 stop: (2.13643715s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-283274 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-283274 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.02404611s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (122.61s)
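This upgrade path starts the profile with the older release binary, stops it, and restarts the same profile with the freshly built binary, so the cluster state is expected to carry across binaries. A minimal sketch of the same three steps, reusing the temporary v1.26.0 binary path from this run:

$ /tmp/minikube-v1.26.0.1791734053 start -p stopped-upgrade-283274 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
$ /tmp/minikube-v1.26.0.1791734053 -p stopped-upgrade-283274 stop
$ out/minikube-linux-amd64 start -p stopped-upgrade-283274 --memory=2200 --driver=kvm2 --container-runtime=crio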

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-556519 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-556519 "sudo systemctl is-active --quiet service kubelet": exit status 1 (209.030989ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-556519
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-556519: (1.343647404s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (43.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-556519 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-556519 --driver=kvm2  --container-runtime=crio: (43.533928513s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (43.53s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-556519 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-556519 "sudo systemctl is-active --quiet service kubelet": exit status 1 (203.516566ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestPause/serial/Start (57.09s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-672261 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-672261 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (57.085784192s)
--- PASS: TestPause/serial/Start (57.09s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-283274
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (5.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-676605 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-676605 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (115.104823ms)

                                                
                                                
-- stdout --
	* [false-676605] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18998
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 23:58:25.756862   55754 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:58:25.756972   55754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:58:25.756981   55754 out.go:304] Setting ErrFile to fd 2...
	I0703 23:58:25.756986   55754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:58:25.757180   55754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9396/.minikube/bin
	I0703 23:58:25.757773   55754 out.go:298] Setting JSON to false
	I0703 23:58:25.758727   55754 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6046,"bootTime":1720045060,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:58:25.758790   55754 start.go:139] virtualization: kvm guest
	I0703 23:58:25.761001   55754 out.go:177] * [false-676605] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 23:58:25.762194   55754 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 23:58:25.762204   55754 notify.go:220] Checking for updates...
	I0703 23:58:25.764388   55754 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:58:25.765544   55754 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9396/kubeconfig
	I0703 23:58:25.766728   55754 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9396/.minikube
	I0703 23:58:25.767811   55754 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 23:58:25.769003   55754 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 23:58:25.776901   55754 config.go:182] Loaded profile config "force-systemd-env-175902": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:58:25.777012   55754 config.go:182] Loaded profile config "kubernetes-upgrade-652205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0703 23:58:25.777141   55754 config.go:182] Loaded profile config "pause-672261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0703 23:58:25.777243   55754 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:58:25.816578   55754 out.go:177] * Using the kvm2 driver based on user configuration
	I0703 23:58:25.817711   55754 start.go:297] selected driver: kvm2
	I0703 23:58:25.817728   55754 start.go:901] validating driver "kvm2" against <nil>
	I0703 23:58:25.817742   55754 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 23:58:25.819848   55754 out.go:177] 
	W0703 23:58:25.821023   55754 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0703 23:58:25.822141   55754 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-676605 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-676605

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-676605

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-676605

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-676605

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-676605

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-676605

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-676605

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-676605

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-676605

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-676605

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-676605

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-676605" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-676605" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 03 Jul 2024 23:58:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.61.246:8443
  name: pause-672261
contexts:
- context:
    cluster: pause-672261
    extensions:
    - extension:
        last-update: Wed, 03 Jul 2024 23:58:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-672261
  name: pause-672261
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-672261
  user:
    client-certificate: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/pause-672261/client.crt
    client-key: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/pause-672261/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-676605

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676605"

                                                
                                                
----------------------- debugLogs end: false-676605 [took: 5.071329933s] --------------------------------
helpers_test.go:175: Cleaning up "false-676605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-676605
--- PASS: TestNetworkPlugins/group/false (5.35s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (123.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-317739 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
E0704 00:01:17.047066   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-317739 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (2m3.576797499s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (123.58s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (95.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-687975 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-687975 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (1m35.433324506s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (95.43s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-317739 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [54167996-9255-4194-93da-1c463f59760f] Pending
helpers_test.go:344: "busybox" [54167996-9255-4194-93da-1c463f59760f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [54167996-9255-4194-93da-1c463f59760f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004827504s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-317739 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-995404 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-995404 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (1m40.54351987s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.54s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-317739 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-317739 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.032960802s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-317739 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (13.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-687975 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c43f2a46-7ed3-4b53-9f7e-dcfcce7e17e8] Pending
helpers_test.go:344: "busybox" [c43f2a46-7ed3-4b53-9f7e-dcfcce7e17e8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c43f2a46-7ed3-4b53-9f7e-dcfcce7e17e8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 13.004948815s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-687975 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (13.37s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-687975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-687975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.037797818s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-687975 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-995404 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b335daca-ead5-42d5-96a3-245d38bd2d1a] Pending
helpers_test.go:344: "busybox" [b335daca-ead5-42d5-96a3-245d38bd2d1a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b335daca-ead5-42d5-96a3-245d38bd2d1a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004093911s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-995404 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-995404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-995404 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (697.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-317739 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-317739 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (11m37.51511876s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-317739 -n no-preload-317739
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (697.78s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (545.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-687975 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
E0704 00:06:17.047466   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-687975 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (9m5.194837778s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-687975 -n embed-certs-687975
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (545.46s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (2.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-979033 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-979033 --alsologtostderr -v=3: (2.342005919s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.34s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-979033 -n old-k8s-version-979033
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-979033 -n old-k8s-version-979033: exit status 7 (64.048144ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-979033 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (515.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-995404 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
E0704 00:07:40.102576   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0704 00:08:57.358074   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
E0704 00:11:17.047155   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
E0704 00:13:57.358117   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-995404 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (8m34.779191982s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-995404 -n default-k8s-diff-port-995404
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (515.04s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (59.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-791847 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-791847 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (59.988259788s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (95.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-676605 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-676605 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m35.579957383s)
--- PASS: TestNetworkPlugins/group/auto/Start (95.58s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-791847 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-791847 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-791847 --alsologtostderr -v=3: (7.329395806s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-791847 -n newest-cni-791847
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-791847 -n newest-cni-791847: exit status 7 (66.01256ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-791847 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (38.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-791847 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
E0704 00:31:17.047042   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/addons-224553/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-791847 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (38.672752426s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-791847 -n newest-cni-791847
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (65.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-676605 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-676605 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m5.689681403s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (65.69s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-791847 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-791847 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-791847 -n newest-cni-791847
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-791847 -n newest-cni-791847: exit status 2 (265.288506ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-791847 -n newest-cni-791847
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-791847 -n newest-cni-791847: exit status 2 (268.834085ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-791847 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-791847 -n newest-cni-791847
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-791847 -n newest-cni-791847
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (110.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-676605 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0704 00:32:00.415222   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/functional-188799/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-676605 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m50.771549705s)
--- PASS: TestNetworkPlugins/group/calico/Start (110.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-676605 "pgrep -a kubelet"
I0704 00:32:13.460632   16574 config.go:182] Loaded profile config "auto-676605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-676605 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-krh4b" [56f8ef6a-a99c-45c5-903e-ccf107d216ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-krh4b" [56f8ef6a-a99c-45c5-903e-ccf107d216ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004309344s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-676605 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-676605 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-676605 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-gd7gl" [ec2304e1-398a-4d70-8264-eb5200868e59] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005793224s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (90.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-676605 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-676605 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m30.650268843s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (90.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (114.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-676605 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-676605 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m54.711800514s)
--- PASS: TestNetworkPlugins/group/flannel/Start (114.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-676605 "pgrep -a kubelet"
I0704 00:32:47.953589   16574 config.go:182] Loaded profile config "kindnet-676605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-676605 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-lqltl" [a069b1d1-981a-4a56-bb73-98bd704da6a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0704 00:32:48.346335   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-lqltl" [a069b1d1-981a-4a56-bb73-98bd704da6a8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003692282s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-676605 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-676605 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-676605 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0704 00:32:58.586839   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (127.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-676605 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0704 00:33:19.067485   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/no-preload-317739/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-676605 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (2m7.291280255s)
--- PASS: TestNetworkPlugins/group/bridge/Start (127.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-jbg67" [97662836-a9bd-4732-ba3b-d0816542ec81] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005891761s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-676605 "pgrep -a kubelet"
I0704 00:33:39.587001   16574 config.go:182] Loaded profile config "calico-676605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-676605 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tfll2" [dfca1fb4-bcee-4aac-9be0-16c08dab5e20] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tfll2" [dfca1fb4-bcee-4aac-9be0-16c08dab5e20] Running
E0704 00:33:49.505223   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.crt: no such file or directory
E0704 00:33:49.511109   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.crt: no such file or directory
E0704 00:33:49.521306   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.crt: no such file or directory
E0704 00:33:49.541616   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.crt: no such file or directory
E0704 00:33:49.581954   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.crt: no such file or directory
E0704 00:33:49.662262   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.crt: no such file or directory
E0704 00:33:49.822988   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.crt: no such file or directory
E0704 00:33:50.143313   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.crt: no such file or directory
E0704 00:33:50.783920   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.00526906s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.24s)
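For readers reproducing the NetCatPod check by hand: the test applies testdata/netcat-deployment.yaml (a Deployment named netcat labeled app=netcat whose container listens on 8080, plus a Service named netcat) and then waits for the pod to become Ready. A minimal manual sketch follows; the first command is verbatim from the log, the wait/get lines are an illustrative equivalent of what the test harness does programmatically, not commands the test itself runs:

# Sketch only: approximate manual re-run of the NetCatPod readiness gate.
kubectl --context calico-676605 replace --force -f testdata/netcat-deployment.yaml
kubectl --context calico-676605 wait pod -l app=netcat --for=condition=Ready --timeout=15m
kubectl --context calico-676605 get deployment,service netcat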

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-676605 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-676605 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0704 00:33:52.064289   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-676605 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)
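The DNS, Localhost and HairPin subtests above probe, in order: cluster DNS resolution from inside the pod, loopback connectivity to the port the pod itself serves, and hairpin traffic (the pod reaching itself back through its own Service). Condensed, the three checks are the commands already logged above:

# Condensed restatement of the logged commands (same context and fixture).
kubectl --context calico-676605 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context calico-676605 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
kubectl --context calico-676605 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"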

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (101.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-676605 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0704 00:34:09.986040   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/old-k8s-version-979033/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-676605 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m41.43406301s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (101.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-676605 "pgrep -a kubelet"
I0704 00:34:13.756772   16574 config.go:182] Loaded profile config "custom-flannel-676605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-676605 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context custom-flannel-676605 replace --force -f testdata/netcat-deployment.yaml: (1.410594495s)
I0704 00:34:15.191423   16574 kapi.go:170] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-f8jt4" [4d444a74-34e9-4658-bcab-a8f726a1471b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0704 00:34:19.809465   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/client.crt: no such file or directory
E0704 00:34:19.814746   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/client.crt: no such file or directory
E0704 00:34:19.825088   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/client.crt: no such file or directory
E0704 00:34:19.845417   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/client.crt: no such file or directory
E0704 00:34:19.885744   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/client.crt: no such file or directory
E0704 00:34:19.966113   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/client.crt: no such file or directory
E0704 00:34:20.126330   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-f8jt4" [4d444a74-34e9-4658-bcab-a8f726a1471b] Running
E0704 00:34:20.446685   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/client.crt: no such file or directory
E0704 00:34:21.087653   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/client.crt: no such file or directory
E0704 00:34:22.368284   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/client.crt: no such file or directory
E0704 00:34:24.929097   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.007609194s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-676605 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-676605 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-676605 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vm5wr" [c06d9666-b57a-49a3-8506-529a6239e88d] Running
E0704 00:34:40.291456   16574 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/default-k8s-diff-port-995404/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003744072s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
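The ControllerPod check only gates on the CNI daemon pod being Ready in its namespace. An equivalent one-liner, with the namespace, label selector and timeout taken from the log above (an illustrative sketch, not the test's own code path):

# Sketch: wait for the flannel daemonset pod the same way the test does.
kubectl --context flannel-676605 -n kube-flannel wait pod -l app=flannel --for=condition=Ready --timeout=10m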

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-676605 "pgrep -a kubelet"
I0704 00:34:45.029965   16574 config.go:182] Loaded profile config "flannel-676605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-676605 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cv5cd" [1c83ff34-e567-4e50-9ff3-70b3269ad4b5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-cv5cd" [1c83ff34-e567-4e50-9ff3-70b3269ad4b5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003770655s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-676605 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-676605 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-676605 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-676605 "pgrep -a kubelet"
I0704 00:35:22.434736   16574 config.go:182] Loaded profile config "bridge-676605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-676605 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2pv7m" [74dc486b-3b58-4145-8c7d-c34cc6fc9bcc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-2pv7m" [74dc486b-3b58-4145-8c7d-c34cc6fc9bcc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004424758s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-676605 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-676605 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-676605 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-676605 "pgrep -a kubelet"
I0704 00:35:50.688313   16574 config.go:182] Loaded profile config "enable-default-cni-676605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-676605 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hnl8l" [a6f170df-388c-4769-8fbe-cfb9dcd6d95c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-hnl8l" [a6f170df-388c-4769-8fbe-cfb9dcd6d95c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004412596s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-676605 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-676605 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-676605 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    

Test skip (37/312)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.2/cached-images 0
15 TestDownloadOnly/v1.30.2/binaries 0
16 TestDownloadOnly/v1.30.2/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
41 TestAddons/parallel/Volcano 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
115 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
116 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
117 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
118 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
267 TestStartStop/group/disable-driver-mounts 0.15
275 TestNetworkPlugins/group/kubenet 3.27
283 TestNetworkPlugins/group/cilium 3.53
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-029653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-029653
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-676605 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-676605

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-676605

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-676605

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-676605

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-676605

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-676605

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-676605

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-676605

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-676605

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-676605

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-676605

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-676605" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-676605" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 03 Jul 2024 23:58:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.61.246:8443
  name: pause-672261
contexts:
- context:
    cluster: pause-672261
    extensions:
    - extension:
        last-update: Wed, 03 Jul 2024 23:58:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-672261
  name: pause-672261
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-672261
  user:
    client-certificate: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/pause-672261/client.crt
    client-key: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/pause-672261/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-676605

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676605"

                                                
                                                
----------------------- debugLogs end: kubenet-676605 [took: 3.116381782s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-676605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-676605
--- SKIP: TestNetworkPlugins/group/kubenet (3.27s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-676605 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-676605

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-676605

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-676605

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-676605

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-676605

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-676605

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-676605

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-676605

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-676605

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-676605

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-676605

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-676605" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-676605

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-676605

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-676605

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-676605

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-676605" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-676605" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18998-9396/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 03 Jul 2024 23:58:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.61.246:8443
  name: pause-672261
contexts:
- context:
    cluster: pause-672261
    extensions:
    - extension:
        last-update: Wed, 03 Jul 2024 23:58:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-672261
  name: pause-672261
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-672261
  user:
    client-certificate: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/pause-672261/client.crt
    client-key: /home/jenkins/minikube-integration/18998-9396/.minikube/profiles/pause-672261/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-676605

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-676605" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676605"

                                                
                                                
----------------------- debugLogs end: cilium-676605 [took: 3.398434158s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-676605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-676605
--- SKIP: TestNetworkPlugins/group/cilium (3.53s)

                                                
                                    